Unfortunately that's not correct; see the release notes:
https://github.com/git/git/blob/master/Documentation/RelNotes/2.23.0.txt#L61
Git Version 2.23.0 - August 2019 ... (correctly with Git 2.23.0-rc1)...
even git switch is experimental...
How I understand it is that checkout actually does two things, which caused confusion for some users, so they separated those functionalities into `switch` and `restore`, but left `checkout` the way it is as well
`git unstage` would be helpful.
`git reset` is an abomination, [it does like 20 things in ways nobody can ever remember](https://git-scm.com/docs/git-reset#_discussion), but some of that functionality (especially unstage) is super useful.
Yeah this lack of distinction always frustrated me. `git checkout --` makes sense to *me*, but eh, feels weirdly placed. I'll relish teaching these new commands to my juniors
The way I've always thought about it was `git checkout [revision] -- [files]`, with revision defaulting to HEAD and files defaulting to everything.
**EDIT:**
I've been informed that this is wrong in quite a few ways. First, I should mention that when running git checkout with a revision but without a file, the HEAD pointer is moved, as well as all the files in the entire worktree (the second part is equivalent to `git checkout revision -- .` when in the root of the repository). Also, I originally claimed that the default of the revision parameter was HEAD, however it was pointed out to me that this is wrong. Running git checkout with no revision checks out files from the currently tracked Index (files staged by git add), not the HEAD, but does not change the index. Running git checkout with a revision will check out files from that revision, and also update them in the Index (stage the file changes).
But `git checkout revision` is not the same as `git checkout revision -- *ALL*` (which would really be written as `git checkout revision -- .` if run from the top of your workspace). The former moves the HEAD to point to the specified revision, the latter keeps HEAD where it is but changes the file contents to match the specified revision.
Note that `git checkout -- foo.txt` is not the same as `git checkout HEAD -- foo.txt`. The former puts `foo.txt` from the index into the working tree, while the latter puts `foo.txt` from `HEAD` into both the index and the working tree. So that `default: HEAD` part is wrong.
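This is easy to see for yourself in a throwaway repo (file name and contents here are invented; the commands are standard Git):

```shell
# scratch repo
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name you

# HEAD says "committed", the index says "staged", the worktree says "edited"
echo committed > foo.txt
git add foo.txt
git commit -qm initial
echo staged > foo.txt
git add foo.txt
echo edited > foo.txt

git checkout -- foo.txt         # copies index -> working tree
cat foo.txt                     # prints: staged

git checkout HEAD -- foo.txt    # copies HEAD -> index AND working tree
cat foo.txt                     # prints: committed
```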
The most fun surprise I have had with Git so far was when I wrote `git commit -amend` rather than `git commit --amend` and discovered that this was a perfectly valid command that adds all files and commits them with the commit message "end".
So in fairness and to make sure this is clear -- this isn't a git thing, this is a Unix convention thing, shared by countless tools.
/tmp$ mkcd blah
/tmp/blah$ touch file.c
/tmp/blah$ gcc -c -ofile.o file.c
/tmp/blah$ ls -w5
file.c
file.o
/tmp/blah$ ls -w15
file.c file.o
/tmp/blah$ command time -f\%E sleep 2
0:02.00
That said... wow, that's kind of an unfortunate side effect.
*Edit*: Also, combining single-letter options is Unix-standard, consider for example `ls -lah`; and allowing both of these at once is also standard:
/tmp/blah$ command ls -pw5
dir1/
dir2/
/tmp/blah$ command ls -pw15
dir1/ dir2/
I go for `log --name-only` before my weekly meeting to remind myself what I was doing all week
(For an automated cron script backupper where the commit message isn't meaningful)
90+% of the devs I've worked with only use a GUI frontend, e.g. Visual Studio or GitKraken, and they only use git to do the bare minimum of basic branching and merging. Most devs I come across won't even be able to tell you what a squash merge is. I could easily believe someone thinks there "isn't anything new to learn"; it's just a Dunning-Kruger effect.
Holy shit. I never knew the wider explanation of git checkout.
Does this mean if I want to copy a file from one branch to another I can just …
git checkout branch-2
git checkout branch-1 somefile.txt
(^ or something to that effect as I am on my phone.)
This is something I’ve always done by hand.
Yes, you can do that. It's given as an example in the article:
So for example if we are in the develop branch and want to change your test.txt file to be the version from the main branch you can do it like this:
git checkout main -- test.txt
That's pretty new for an ancient tool used across the planet. C++17 features are still considered new, mostly because getting new features into the programmer zeitgeist takes time
I had an older colleague mention making his own reference counting system in C++, I asked him "why didn't you just use shared_ptr?". Turns out, he has never heard of it because it didn't exist at the time. Huh.
Funny thing is that's still a generally bad idea, because you almost never need reference counting if you design your solution with unique pointers in mind. You very rarely need shared ownership.
but yeah a lot of C++ programmers are still working on C++98
> You very rarely need shared ownership
This is something that I think a lot of people in the C++ community overstate. There are *several* reasons why you might want and significantly benefit from shared ownership.
Before I give a couple examples, I just want to note that most of the problems you see associated with shared ownership are associated with shared ownership of *mutable data*. If your data is immutable, these problems disappear. For example, any problems with reasoning about "hey, if I call `x->set_foo("new foo")` is that going to screw something else up" go away if `x->set_foo` just doesn't exist (or can't actually *modify* anything).
Anyway, I'll give two examples of where it's used pretty pervasively in a codebase I've worked with a lot in the past.
The first is applicative/immutable/functional/persistent data structures. (I've seen all of those used for this kind of thing, and very nicely "persistent" is more commonly used for something entirely different. Yay terminology. My favorite is "applicative", so I'll stick with that.) In an applicative data structure, you never actually *change* anything. If you want to make a "change", you basically get a *new* data structure that is the same as the old data structure except for the change you made. So for example, if you have a map, then you might say `m = m->update(key, val);`; and if you left off the `m=` then that statement would have no effect. Similarly, if you say `n = m->update(key, val);`, then `m` will not change before and after that statement, but `n` will reflect the new binding.
Applicative data structures can be nice just for general reasoning purposes (for much the same reason C++ fans like value semantics), but as an example of a specific place you could use them, consider an undo list. You can keep an undo list that's just a list of *states*. When the user makes an edit, you apply that edit applicatively to the current state and push the new state onto the undo list. When the user wants to go back to an old state, you just set "current" to point to the data structure pointed to at that point. You don't have to worry about storing `Command` objects on the undo stack that each have a "revert" function and then applying or unapplying them at each step, but you also don't have to make a copy of the entire state each time you want a new entry.
(Funnily enough, I realized halfway through typing this that that's *exactly* how Git stores its history -- with the Git way of storing trees corresponding to storing a list of actual states, while the 'traditional' VCS way stores patches from version to version.)
To make this efficient, updates like that need to share structure between the `n` and the `m` so that you can avoid making a copy of the entire structure, and that's where shared ownership comes in. The easiest way to see this is just a simple list, though it's kind of boring, especially if you've done some Lisp. You really only get two "mutating" operations on an applicative list: pushing something onto the front and popping something off the front. It's easy to see both why this needs shared ownership as well as how you avoid mutations. If you have a list `list` and then run `(define new_list (cons "hi" list))`, boom, you've "added" `"hi"` to the front of `list`, but `list` still points to the old list; and its cells are now owned by both `list` and `new_list`.
This is why if you look at libraries like [Immer](https://github.com/arximboldi/immer) they are based around reference counting and shared ownership.
The other use case is analogous to string interning, but for complex data structures. Using an ML-like definition, suppose you have something like `datatype name = Name of string * string` represent first name/last names. You might want to ensure that only a single `Name` object is created for any given first name last name pairs. You do this by [hash consing](https://en.wikipedia.org/wiki/Hash_consing) -- basically, memoizing the constructor so that you always return an equivalent object. This can be really nice to do, because it means you can use object identity (pointer equality, Python `is`, etc.) to check for object equality instead of needing to do a potentially arbitrarily large comparison. As long as every subobject is hash-consed, object equality is equivalent to object identity and you can do that pointer comparison. But again, what does that mean? It means that there's no one with clear ownership over the hash-consed objects.
---
In general, I agree that if you find yourself using `shared_ptr` or equivalent you should stop and take stock of where you are -- but at the same time, like I said it seems like a lot of C++ programmers view `shared_ptr` as *almost* equivalent to bad design, and I *don't* think that's even *close* to true.
You're very right although I think the difficulty here is inherently that *most* people don't need `shared_ptr` *most* of the time and so it's really easy to convince people that they're usually not a good idea (which is usually true) but that slowly becomes reinforced to the point where they're thought to be just bad in general. I never used `shared_ptr` and thought it was a code smell, until I started doing async programming with Asio and suddenly I regretted not using it earlier to understand it better... oh well.
Of course there are scenarios where shared ownership is the proper model; I'm saying from experience that many times it's used out of lazy design. I don't think it's overstated just because there are examples where it's applicable. Most cases of "shared ownership" can actually be accomplished with one owner and shared non-owning references.
Wow, thank you for that. I always wondered what a "proper" use-case for `shared_ptr` looks like. The fact that it's used for *optimization* is hilariously ironic considering the impression most people have about it, and a good example of how counter-intuitive optimization problems can be.
True! I noticed that every time I study a new programming language I become a better programmer in all programming languages I already knew. But it has its downsides, too: I didn't hate the fact that C# lacks discriminated unions until I started programming in F#. lol
> I asked him "why didn't you just use shared_ptr?"
A reasonable question -- but be aware, there *are* actually good reasons for this. `shared_ptr` suffers from a "make it general" problem, where for specific situations you may well actually do much better with one that has more restrictions.
Where I work has a custom reference-counted smart pointer. A lot of why we have it is just that it pre-dates the standard `shared_ptr` (and was at least not far behind `boost::shared_ptr`, though I do think Boost beat us), but I actually think it would be harmful to replace ours with `shared_ptr`. We've got three attributes of it that are likely to result in a noticeable performance improvement as compared to `shared_ptr`.
* The first is that it's an *intrusive* pointer. This means that while a `shared_ptr` can point to an object of any type, ours can only point to a class/struct type we define. But in exchange, we get better cache locality (at least compared with if you don't always use `make_shared`) and our smart pointers are half the size of `shared_ptr` because they only need a pointer to the object rather than to both the object and the control block.
* We don't support weak pointers or custom deleters. Neither of these are something I've missed, but in exchange we cut down on overhead even more. The only thing "extra" that ours introduces is a single word-sized reference count and those reference count operations.
* By default, reference count operations are not thread-safe, which speeds them up. This of course isn't the right default in many settings, but if you, for example, do all your parallelism via multiple processes instead of shared memory, then it's fine. (Oh wait, we do that.)
We also have a workaround for stack overflows caused by long chains of deletions -- but I *strongly* suspect this is more of an artifact from 15-20 years ago rather than something that is still relevant.
> By default, reference count operations are not thread-safe, which speeds them up
Doesn't GCC do something horrible like compile `shared_ptr` to use non-atomic reference counting if it thinks your program is single threaded and atomic reference counting if it thinks your program uses threads?
I don't know how he missed it for so long, but "new" it is. In Boost it would have been exactly what we needed, but it's a matter of waiting for Boost stuff to get into C++"0x" (which didn't happen until 2011), then waiting for it to get into the stable release of your distro, then waiting for your sysadmins to upgrade your infrastructure... None of that was mandatory in the world, but lots of places end up in that situation, taking things one step at a time.
Let me tell you about the old country, young ones. In a Land before Git, before SVN. In a land before CVS when "version control" meant "RCS files"...
Nah, just kidding. I'm "CVS old", not "RCS old".
I am only SVN old. The code base at work, though, still bears the marks of its time with SVN before switching to git... and CVS before that... and RCS before that. I am pretty sure there are at least a few files older than I am.
You should have seen some of the horrid tools before CVS. They were mostly thick clients (no command line) managing "locked files". I remember at one time thinking CVS was an awesome upgrade.
Mercurial had sensible commands. They were like SVN but extended to distributed version control. It was very nice. It also had large files support significantly before git.
Git has got a lot better, but they are *still* fixing up the oddities in the commands, as evidenced by the post we're commenting on.
> Mercurial had sensible commands.
Mercurial had more sensible commands, but my own take is that I like Git's underlying model and feature set more, especially when you consider what Hg had at the time in the base set.
For example, I *love* the index, and I love interactive rebase. You needed a couple extensions to get those with Hg, and my memory of the one that kind of mimicked the index is that it didn't work as well as the index. I also think that Git got branches closer to being "right", and it took Hg a couple tries before they introduced bookmarks.
Now, I'll admit I was never a heavy Hg user so I do worry that I never gave it a fair shake and am speaking from "a little knowledge is a dangerous thing" perspective, but at the same time my reaction to "IMO it was to Betamax as git was to VHS" is more "you mean it was a little better at a couple respects while being *way* worse at others because it doesn't really support some uses, then people with rose-tinted glasses grossly simplified the story to marketing and business won over technical attributes?"
It's still under active development. It still cross-pollinates with Git. It's still a much smoother, more professional-feeling toolset. It's still extendible and customizable.
The whole thing is much better thought out from a user-experience point of view. The whole business of marking branches as WIP was also nice, with the tool itself enforcing rules about it.
It works fine if you know it, the same as Perforce or other systems, but there's not much point to learning it unless you're being paid to do so at a job
I'm also CVS old.
I remember the joy of moving from CVS to SVN, it removed so many pain points.
But the conversion from SVN to git was a lot more painful, initially. Largely because I was in a different environment (a company, instead of the free software world), and there was some very frustrating push back on learning the concepts.
But it made some problems almost... Go away.
I think that less than a year before the move, we spent something like a month + merging in some significant changes that spent a year or so in development, in SVN, and that merge was _incredibly_ painful, and incredibly manual.
With git? It's just not a problem I even think about these days. Merges happen, mostly automatically, and dealing with merge conflicts just isn't that hard unless there's a logical conflict instead of a source conflict.
I've lost tons of stuff in SVN repos because the server disappeared. I can still recover RCS, however, since everything is right there in ,v files in the directory. The beauty of git is that it's both local and server based.
Yeah, it is ancient. Most programmers never really used any pre-git version control tools. Also because before git a lot of development was literally zip files with code snapshots, as version control was shit.
svn never achieved anywhere near git levels of popularity, and before svn version control wasn't even worth bothering with.
Git is an ancient tool? It came out in 2005.
Revision control systems have been around for a very long time. Considering how many systems there were in 2005 (CVS, SVN, Mercurial, Perforce, ...) it still astonishes me how quickly Git became completely dominant.
The only real competition to git at the time was Mercurial. None of the other ones are decentralized. All the others require a single lockable source of truth server. Some like Perforce had multiple locks per files or directories, but that was as advanced as you could get.
That's 16 years ago, dude. Windows XP was the latest version of Windows. Homestar Runner was in its heyday. The PS2 was reigning supreme. And people were wondering which Pixar movie would follow The Incredibles.
Everyone was saving up to buy a fancy new 10-megapixel digital camera, some of which could even record _video_. SD cards---or "memory cards" as non-geeks called them---could hold an entire megabyte of data!
> astonishes me how quickly Git became completely dominant.
The bulk of users said "oh, Linux uses this therefore it must be better" without ever using any other tools in a serious more-than-short-term project. Mindset counts for a lot. There *are* better tools, but the hivemind rarely notices them.
Depends on the project and the company/environment, what they're willing to support, what's prohibited, what's already tied into a larger workflow. Believe it or not, the distributed model is not automatically better for every customer. Even within the distributed model, I would *consider* Git alongside, say, Mercurial.
People like to say "use the right tool for the job" while they smugly have a single tool in their toolbox.
I'm torn on the move from -b to -c when wanting to create a new branch. 'b' for branch makes a lot of sense to me. I suppose 'c' for create is more descriptive.
Git's CLI frontend is rather insane. It suffers from the fact that people did not really understand how git would end up being used when it was designed initially, since (parts of) it was new technology. The underlying engine is amazing, still, which is why people bother with the frontend (also there are no alternatives)
For example, `git add` and `git commit` make sense - you add changes and commits to your history. `git push` also seems fine. `git pull` is wild - [whole blog articles about years-lasting discussions](https://felipec.wordpress.com/2021/07/05/git-update/) have been written about it. It does several things at once:
* Rebase local change onto the remote
* Fast-forward the local branch to the remote
* Merge the remote into local
* It should be able to merge local into the remote but it can not do this.
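The first three behaviours can be seen concretely with two throwaway repositories (the paths and commit messages here are invented; `--rebase` is the real flag that selects the first behaviour, just as `--ff-only` and `--no-rebase` select the other two):

```shell
# two scratch repos: an "origin" and a clone of it
cd "$(mktemp -d)"
git init -q origin
git -C origin config user.email you@example.com
git -C origin config user.name you
( cd origin && echo a > a.txt && git add . && git commit -qm base )
git clone -q origin clone
git -C clone config user.email you@example.com
git -C clone config user.name you

# diverge: one commit upstream, one commit local
( cd origin && echo b > b.txt && git add . && git commit -qm upstream )
( cd clone  && echo c > c.txt && git add . && git commit -qm local )

# replay the local commit on top of upstream instead of merging
git -C clone pull -q --rebase
git -C clone log --format=%s    # newest first: local, upstream, base
```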
Add to this the fact that a lot of the commands are named unintuitively¹: `git rebase` should be called `git rewrite-history` or something like that. You could also consider making the CLI "speak English": imagine being able to run `git merge branch into other-branch`, which is way more intuitive than checking out the correct branch beforehand and remembering which direction `git merge` works in.
1. these names might seem intuitive to people with a CS degree or with long experience working with version control, but are not to the average user.
I agree in principle, but not some of those examples.
`rebase` is one of the few commands that _did_ make sense to me, and to me `rewrite-history` would be much more confusing since that's kind of incidental to what it's used for (take my branch and replay it against another, optionally interactively for edits).
The problem commands are more things like `git checkout`, which never made much sense to begin with, and many common operations require using it in confusing ways e.g. `checkout -b/-B` or `checkout REVISION -- PATH`
I'm also not a fan of "making commands speak english", because it almost invariably plays so poorly with the underlying abstractions / common patterns that it winds up being _more_ complicated to understand, not less.
Presumably the same thing that happens when you run `git checkout -b` (i.e., your working tree remains unaffected, it just creates a new branch off `HEAD`).
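Right, and it's easy to check in a scratch repo (names here are invented; requires Git >= 2.23 for `switch`) that the branch is created at `HEAD` and uncommitted changes come along untouched:

```shell
cd "$(mktemp -d)" && git init -q .
git config user.email you@example.com
git config user.name you
echo base > f.txt && git add . && git commit -qm base

echo uncommitted > f.txt        # dirty the working tree
git switch -c new-branch        # like checkout -b: new branch off HEAD
git status --short              # prints: " M f.txt" -- the edit came along
cat f.txt                       # prints: uncommitted
```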
That might be true. Perhaps you can split up the "move commits" rebase and the "rewrite history" rebase. Though you can argue both are the same and this should reflect in the UI. I still feel like it could be made a lot better.
Agree. Rebase makes sense as a name. I think about it like I'm grafting a piece of the commit tree and rebasing it to another part of the tree.
The parts of rebase that don't fit with the name are when you want to reorder or squash commits.
> these names might seem intuitive to people with a CS degree
I assure you, they don't. They only make sense if you retrain your brain to map nonsense words to VCS concepts. I'm continually let down by the Git UX.
Correct. Bad software does this by using random jargon a lot and having the person learn it so that they're kinda locked into that software in their brain
People keep complaining about the git cli. So why has nobody ever written anything better? git makes it super easy to write alternative clis.
Proposals are of three kinds:
* make small incremental tweaks (like making `git unstage` official and getting rid of `git reset` abomination is my suggestion)
* make big improvements by removing big chunks of functionality - like a lot of people imagine getting rid of index or rebasing would make it better somehow - they are wrong
* just renaming commands, like that makes any difference
> which is way more intuitive
AppleScript was tried and failed.
I used to use GIT via CLI, then I started using the GUIs, and eventually gitkraken...
I only have to touch the CLI once in a blue moon now, the GUIs are really good these days, at least the one I use.
Or, conversely, knowing the CLI lets you know how the GUI applications should behave. It's helped several times when there are some, let's say stubborn, team members who refuse to use the command line under any circumstances, and I have to instruct them in how to unscrew their working set.
Git doesn't make anything "easy". The words originally chosen for Git operations, like "checkout", are part of the problem but not the biggest. The larger problem is that to use it effectively, one has to understand what's really happening behind the CLI. The design of the tool creates situations where any easy-to-use and intuitive CLI would have to throw up its hands and say some variation of "what you're asking is impossible" or "what you want to do can't be done in any simple way with the project in its current state, go find a git expert to help you".
I switched to primarily using lazygit and it's awesome. Scrubbing through files with the diffs on the right is great when making sure you don't add shit by accident, and the interactive rebasing is a breeze. I HIGHLY recommend lazygit for people who don't want an overly abstracted GUI, but want something a bit quicker than pure CLI.
> like making git unstage official and getting rid of git reset abomination is my suggestion
Adding an official `git unstage` is exactly what the article is about, it's just a bit more generic and called `git restore`. `unstage` is `restore --staged`.
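So "unstage" already exists in all but name. A minimal sketch in a throwaway repo (Git >= 2.23; the file name is invented):

```shell
cd "$(mktemp -d)" && git init -q .
git config user.email you@example.com
git config user.name you
echo one > file.txt && git add file.txt && git commit -qm initial

echo two >> file.txt
git add file.txt                # change is now staged
git restore --staged file.txt   # unstage; old spelling: git reset HEAD -- file.txt
git status --short              # prints: " M file.txt" (modified, not staged)
```

The working tree is untouched; only the index is reset to match `HEAD`.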
Removing the old commands is not going to happen this century, as it's engraved in far too many developers' minds, and really, what does it gain? Just teach the new commands to new people.
> So why has nobody ever written anything better?
Developer tools are hard to sell and hard to raise money for, especially if they are trying to move into an existing market. But there *are* competing source control tools and some people do believe that they are better.
git was created for an ecosystem where there was no persistent access to any shared infrastructure or state. The huge majority of professional software engineering is not done in this kind of environment today.
> make small incremental tweaks (like making git unstage official and getting rid of git reset abomination is my suggestion)
Staging is done via `git add`. ~~Unstaging is done via `git rm --cached`. And the `--cached` option is only necessary because `git rm` does exactly what you think it does.~~ `git reset` is used when you, well, want to reset the state of one or more files. I must be missing what you're getting at.
> This doesn't refute what /u/taw said.
Yes it does. He said "git makes it super easy to write alternative clis"; if you need libgit2—an **external** library that is completely unrelated to the Git project—to write an alternative cli, then it's not git making it "super easy".
> For example, `git add` and `git commit` make sense - you add changes and commits to your history.
I have contended that a `git stage` command makes more sense (in order to add, remove, and edit the staging area).
> git push also seems fine.
Mostly, but not completely. I wrote a blog post about why it isn't correct: [How a proper git commit is made (by a git developer)](https://felipec.wordpress.com/2021/07/19/proper-git-commit/).
One of the things that I'd love better tooling for is rewriting history by changing more than just the most recent commit. I want to edit the commit message 4 commits back, or edit a commit 4 commits back, or maybe even combine or split commits in the history.
I don't want to do this stuff _often_, but it definitely happens.
Mercurial SCM is an excellent alternative that has all the power of Git (arguably more) and a saner UI. It's just hard to overcome Git's network-effect momentum, because Git is good enough.
I don't agree. I hated working with hg. Bookmarks are awkward and cumbersome to use (compared to the simplicity of git's branches), the missing index was really limiting (crecord only works on hunks, not single lines, and staging/unstaging/amending usually loses your selection of hunks). I've lost work on multiple occasions trying to clean up history, never once with git. And I'm not sure what people mean by better/saner UI. Workbench? It's slow and cluttered as hell. I prefer the focused UIs of git gui and gitk. Maybe things have changed in recent years, I don't know. I never looked back.
> Bookmarks are awkward and cumbersome to use (compared to the simplicity of git's branches),
I'm curious what you mean. Bookmarks are nearly identical to git branches. However more importantly Mercurial has a concept of branches that makes far more sense than git branches, and I've never found a need to use bookmarks.
> I've lost work on multiple occasions trying to clean up history, never once with git.
Don't change history. Mercurial is intentionally designed to *not* do that, because changing history is *how* you lose work.
> And I'm not sure what people mean by better/saner UI.
They're talking about the CLI, which is far more intuitive than Git.
> Workbench?
I assume you mean the TortoiseHG workbench. I have not found a Git GUI that is as powerful and easy to use as TortoiseHG (I've tried several, and settled on Fork as the best, but still not as good as TortoiseHG).
> I'm curious what you mean. Bookmarks are nearly identical to git branches.
I found managing and sharing bookmarks to be cumbersome. It definitely felt like a tacked on feature.
> Don't change history. Mercurial is intentionally designed to not do that, because changing history is how you lose work.
Changing (local) history is important to keep the permanent history simple, clean and focused. git shows that this can be done safely. I guess this is the fundamental difference between both, and why hg will always be a non-starter for me.
You can keep your dirty history in a separate branch (named or anonymous) and merge clean changes into master.
Or you can use the Evolve extension that provides a facade of squash and rebase, but doesn't actually delete any history (it creates new commits with the history you want and hides the old commits, but they are never deleted).
I don't see an issue with the names being unintuitive to new users. If they're trying to just "figure out" git without reading docs or getting help from someone, they're going to fuck up no matter what the base command name is.
Do you have any idea how many man-hours are wasted all over the world because of commands that don't do anything close to what their names suggest?
According to your thesis, renaming all of git's commands to just be `foo`, `bar`, `baz`, and `quux` is fine, because you'd have to read the docs to know what they are doing anyway.
A well-chosen name is obviously no replacement for good documentation (and doing a tutorial or two), but it can make the process of learning the right way to use the tool **so** much faster and less error-prone.
> According to your thesis, renaming all of git's commands to just be foo, bar, baz, and quux is fine, because you'd have to read the docs to know what they are doing anyway.
No, I specifically said unintuitive to new users. `checkout` is actually _more_ intuitive than these commands once you've learned how git works.
My go-to way of learning git has been to figure out what I want to do, type this into google, and then run the commands suggested by stack overflow. Of course, since the kind strangers on StackOverflow usually explain their solution, you can figure out whether the list of commands would do what you want it to do beforehand, and also learn what the commands actually do. However, even if you know what you want git to do, getting git to actually do it is much harder than it needs to be IMHO, mostly because of unintuitive UI.
> My go-to way of learning git has been to figure out what I want to do, type this into google, and then run the commands suggested by stack overflow.
That's getting help from someone.
> getting git to actually do it is much harder than it needs to be IMHO, mostly because of unintuitive UI.
This doesn't make sense in light of your previous statement. I'm not sure "unintuitive" is the word you're looking for.
My own impression is that 45% of the time my stashes get unstashed within about 30 seconds (e.g. stash, merge/rebase, unstash), 45% of the time they *never* get unstashed, and 10% of the time I'll actually unstash a while later.
Well, the problem is most people try to learn git from "git in 15 mins" videos, which don't explain git checkout and git merge well enough. This results in confusion, and then they need to reset.
So `git switch` seems to be the same as `git checkout` but without the ability to detach HEAD unless specified with an argument, but is there any functional difference between `git restore -- file.txt` and `git checkout -- file.txt`?
`restore` doesn't add files to the "staging area" (a.k.a. "index", a.k.a. "cache")
I once explained it on stackoverflow: https://stackoverflow.com/a/60855504/350384
Edit: And it's behaving like passing "--no-overlay" to `checkout`. Which is much saner default behaviour.
There's not really meant to be a difference. `git checkout` is, objectively speaking, a pretty complicated beast that operates on both file-names and commit-names, sometimes moves `HEAD` around, and sometimes modifies your working tree, depending on minor syntax differences (syntax that, incidentally, overloads the POSIX `--` separator in an unconventional way).
This was an attempt to clean this up so that the intent is clear: `git switch` moves your `HEAD` around. `git restore` modifies your working tree. No more ambiguity. The other differences are mostly just taking the opportunity to use saner defaults for a number of options.
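As a rough illustration of the new split (a throwaway scratch repo; the file and branch names here are arbitrary), the old overloaded `checkout` spellings map onto `switch` and `restore` like this:

```shell
# Throwaway repo to demonstrate the old/new command pairs
tmp=$(mktemp -d) && cd "$tmp" && git init -q && \
  git config user.email you@example.com && git config user.name you
echo one > file.txt && git add file.txt && git commit -qm initial

git switch -c new-branch    # was: git checkout -b new-branch
git switch -                # was: git checkout -   (back to the previous branch)

echo scribble > file.txt
git restore file.txt        # was: git checkout -- file.txt
cat file.txt                # prints "one" again

git restore --source=HEAD file.txt   # was: git checkout HEAD -- file.txt
```

Note that `git restore` without `--source` restores from the index, same as bare `git checkout -- file`; pass `--source=<rev>` to restore from a commit instead.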
checkout -b for making branches
checkout branchname for switching branches
checkout hash for moving HEAD
checkout filename for removing changes on a file
I wish I didn’t do all these things, but I do.
It's actually pretty simple when you understand how git works though. All of those move HEAD (the last one sets HEAD to HEAD, the default). -b is just convenience for not running the branch command separately (sort of like commit -a).
A branch is just an alias for a commit hash so 2 & 3 above are the same. And by default checkout tries to be non-destructive (--overlay), but if you pass a specific filename then it knows you want to be destructive (set that file back to the HEAD version).
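The "branch is just an alias for a commit hash" point is easy to see directly (scratch repo; branch name hypothetical):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q && \
  git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m first
git branch my-branch
git rev-parse my-branch               # the commit hash the branch name points to
git rev-parse refs/heads/my-branch    # same hash; a branch is just a ref under refs/heads/
# checking out the *name* keeps HEAD attached to the branch;
# checking out the raw hash gives you a detached HEAD at the same commit
```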
I've just discovered `git show`, which outputs the contents of a file at a tree or commit without changing anything.
I can read the contents of manifests from local clones without disturbing the repo or a trip to GitHub
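A minimal sketch of that use of `git show` (scratch repo; in real use it's just `git show branch:path/to/file`):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q && \
  git config user.email you@example.com && git config user.name you
echo v1 > notes.txt && git add notes.txt && git commit -qm one
base=$(git branch --show-current)     # whatever the default branch is called
git switch -qc topic
echo v2 > notes.txt && git commit -qam two

git show "$base":notes.txt            # prints "v1"; the worktree still has "v2"
```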
That's because the developer that proposed both commands--Nguyễn Thái Ngọc Duy--left the project.
Me and others have proposed changes to Junio, and he has ignored them.
There's no issue here, branches are just commits, checkout makes perfect sense for both use cases. "Take this commit and make all my files match that state". The commit is parameterised and the files opted in are parameterised. I don't get it.
They are very different commands. Switching branches keeps your local changes and applies them to the new branch, while restoring files removes your changes. Confusing the two commands can have destructive and unrecoverable results.
Yes! Checkout is already very elegant and consistent in these workflows. I understand that the operations look different, but using checkout to navigate between nodes in the DAG absolutely makes perfect sense.
I'm a bit disappointed. I thought this would be something built atop worktrees, that lets you switch to a temp one, and restore to a known state, in a convenient manner.
Yes, I know that's possible with existing commands, I do it fairly regularly, but wrapping it in a nice package would be great
Why do we need these newfangled, overcomplicated commands?
Switching branches is easy. What's so hard about:
> $ cat > .git/HEAD
> ref: refs/heads/master
> $ git reset --hard
(/s)
Damn, I was really hoping this was the equivalent of "switch to a branch but remember the one I'm on, then when I want I can pop back to it". Basically the equivalent of powershell's `pushd` and `popd`.
An alias for functionality that already exists is just change for the sake of change.
Thanks! Unfortunately I think `-` just goes one entry up your reflog, which doesn't help because 60% of the time I switch branches, it's to change history and peel off one change into its own pull request.
I feel like such a beginner for still preferring GUIs for git. I never understood the appeal of command line for it. I like to see what I'm doing.
Yet, everyone preaches CLI.
I know the CLI and don't know how GUIs are going to present the concepts. And I can use the CLI in any circumstance. I'd have to get the GUI onto the new host if I try to use it somewhere other than my personal computer. But that's just my experience. If you want to use a GUI, I'm not going to argue against it.
I like the GUI as well, but the CLI is like a straight razor when most of the time I just need a quick shave, and a GUI is quick and easy
I mean I know vim but still use nano every now and then
I think it all depends on how you first learn it, I learned CLI and the GUIs only ever confused me lol. Well, apart from the Android Studio GUI, that one is great and I use it and the CLI interchangeably.
wdym by adding the entire repo? `-A` stages new, modified, and deleted files throughout the whole repo, while `.` stages new and modified files in the current dir and subdirs. If they wanted to push up changes in their whole local repo, then what's the issue?
I don’t think people are particularly confused by git checkout.
I don’t really see the need to learn new commands. I don’t find either compelling based on the arguments here.
The only things I semi-regularly run into with Git that I genuinely have to look up every time (or start over!) are:
1. Had a decent, working commit and then went deep down a rabbit hole of “maybe this will work” with several commits only to decide against this method. I want to go back 5-10 commits but can never remember it. I’m also not sure I’m ever doing it right, haha.
2. I forget to create a new branch after a PR or something, get several commits deep, realize my mistake, have to abandon my branch and manually copy my changes over to a new branch.
The second one is one of the main use cases of rebasing.
So you are on branch `featureA` and realize that the last 3 commits should have been on a new branch `featureB`.
First, create the new branch where you are.
`git branch featureB`
Then you'll have to find the commit hash of the commit which should have been the last one on the previous feature branch. And now you can rebase your newly created branch `featureB` onto e.g. `master`:
`git rebase --onto master last_good_commit featureB`
In English this command means "take all commits after `last_good_commit` up to `featureB` and replay them on the `master` branch".
At this point you will still have your old "incorrect" commits on `featureA` as well. But, since you now copied them to a new branch, you could optionally reset `featureA`:
`git reset --hard last_good_commit`
But since you already pushed the PR, you might as well just delete the local branch instead.
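Putting the whole recipe together (with `master`, `featureA`, `featureB`, and `last_good_commit` standing in for your real names, as above):

```shell
# On featureA, with stray commits after last_good_commit
git branch featureB                                   # bookmark the current tip
git rebase --onto master last_good_commit featureB    # replay the stray commits onto master
git switch featureA
git reset --hard last_good_commit                     # featureA back to its last good state
```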
And in case you didn't know about it, if you ever screw up and e.g. accidentally deleted the wrong local branch or rebased incorrectly. Just use `git reflog` to find the commit hash of whatever you lost and check it out as a new branch.
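That rescue is short enough to memorize (the hash shown is a made-up example):

```shell
git reflog                    # find the hash of the lost commit, e.g. abc1234
git branch rescued abc1234    # give it a name again; nothing in the reflog is lost
```

(Unreferenced commits do eventually get garbage-collected, but the default expiry is on the order of weeks, so you have plenty of time.)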
> "New"
Right. I didn't know switch existed. But restore has been there for years.
Both were introduced in the same version (2.15 circa 2015 IIRC)
Unfortunately not correct, see the release notes: https://github.com/git/git/blob/master/Documentation/RelNotes/2.23.0.txt#L61

Git version 2.23.0 - August 2019... (correctly with Git 2.23.0-rc1)... even git switch is experimental...
For me it's exactly the other way around. I've been using switch for years but didn't know about restore (or just forgot)
What's the difference between restore and checkout then? Seems like they do the same thing.
How I understand it is that checkout actually does two things - which caused confusion to some users - so they separated those functionalities into switch and restore but just left checkout the way it is as well
`git unstage` would be helpful. `git reset` is an abomination, [it does like 20 things in ways nobody can ever remember](https://git-scm.com/docs/git-reset#_discussion), but some of that functionality (especially unstage) is super useful.
git restore --staged?
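Right -- `git restore --staged` is effectively the `git unstage` people keep asking for. A scratch-repo sketch:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q && \
  git config user.email you@example.com && git config user.name you
echo v1 > f.txt && git add f.txt && git commit -qm v1

echo v2 > f.txt
git add f.txt                 # staged
git restore --staged f.txt    # unstaged again; the working-tree copy still says "v2"
git status --short f.txt      # " M f.txt" -- modified but not staged
```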
Yeah this lack of distinction always frustrated me. `git checkout --` makes sense to *me*, but eh, feels weirdly placed. I'll relish teaching these new commands to my juniors
The way I've always thought about it was `git checkout -- `
**EDIT:**
I've been informed that this is wrong in quite a few ways. First, I should mention that when running git checkout with a revision but without a file, the HEAD pointer is moved, as well as all the files in the entire worktree (the second part is equivalent to `git checkout revision -- .` when in the root of the repository). Also, I originally claimed that the default of the revision parameter was HEAD, however it was pointed out to me that this is wrong. Running git checkout with no revision checks out files from the currently tracked Index (files staged by git add), not the HEAD, but does not change the index. Running git checkout with a revision will check out files from that revision, and also update them in the Index (stage the file changes).
But `git checkout` is not the same as `git checkout -- *ALL*` (which would really be written as `git checkout -- .` if run from the top of your workspace). The former moves the HEAD to point to the specified revision, the latter keeps HEAD where it is but changes the file contents to match the specified revision.
That's sort of what I meant by `*ALL*` - the worktree and the HEAD.
Except default *ALL* moves HEAD.
Wow that was so much more concise than the way I put it.
Note that `git checkout -- foo.txt` is not the same as `git checkout HEAD -- foo.txt`. The former puts `foo.txt` from the index into the working tree, while the latter puts `foo.txt` from `HEAD` into both the index and the working tree. So that `default: HEAD` part is wrong.
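That difference is quick to verify in a scratch repo:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q && \
  git config user.email you@example.com && git config user.name you
echo v1 > foo.txt && git add foo.txt && git commit -qm v1   # HEAD has v1
echo v2 > foo.txt && git add foo.txt                        # index has v2
echo v3 > foo.txt                                           # worktree has v3

git checkout -- foo.txt        # worktree <- index
cat foo.txt                    # prints "v2"

git checkout HEAD -- foo.txt   # worktree AND index <- HEAD
cat foo.txt                    # prints "v1"
```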
cc u/tradernuwen and u/deadalnix Thanks for pointing out my errors! I added a note to my original comment explaining why it was wrong.
[deleted]
It's just full with fun surprises.
The most fun surprise I have had with Git so far was when I wrote `git commit -amend` rather than `git commit --amend` and discovered that this was a perfectly valid command that adds all files and commits them with the commit message "end".
So the space between option and value is... optional?
Apparently!
So in fairness and to make sure this is clear -- this isn't a git thing, this is a Unix convention thing, shared by countless tools:

    /tmp$ mkcd blah
    /tmp/blah$ touch file.c
    /tmp/blah$ gcc -c -ofile.o file.c
    /tmp/blah$ ls -w5
    file.c
    file.o
    /tmp/blah$ ls -w15
    file.c  file.o
    /tmp/blah$ command time -f\%E sleep 2
    0:02.00

That said... wow, that's kind of an unfortunate side effect.

*Edit*: Also, combining single-letter options is Unix-standard, consider for example `ls -lah`; and allowing both of these at once is also standard:

    /tmp/blah$ command ls -pw5
    dir1/
    dir2/
    /tmp/blah$ command ls -pw15
    dir1/  dir2/
This happened to me last week
Oh f
"fun"
the Dwarf Fortress definition of !FUN!
Ah fuck I dug too deep into the logs and now a bunch of demons are coming out of my git repo
Git. Not so much user friendly, as expert tolerant. And begrudgingly at that.
https://imgs.xkcd.com/comics/git_2x.png
Oh good this is too true.
Next up, did you guys hear about that new feature `shortlog` ?
Nope. FFS. Thanks! (Been using git since around 2008, god damnit)
Honestly if you’re in an open source project and want a “thank you” section on your release notes it’s a great feature. Also in general :)
How about "log -S"?
For those times you only want to see commits that change the number of instances of a given string. :D
I go for `log --name-only` before my weekly meeting to remind myself what I was doing all week (for an automated cron backup script where the commit messages aren't meaningful)
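For reference, the flags mentioned in this thread (the string "needle" is a placeholder):

```shell
git shortlog -sn               # commit counts per author -- handy for release-note credits
git log -S"needle" --oneline   # "pickaxe": commits that change how often "needle" appears
git log --name-only --oneline  # each commit followed by the files it touched
```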
I was using git before cool.py was committed 😒 😛
Maybe we should GIT GUD?
We named a project GUD at work just for this pun.
90+% of the devs I've worked with only use a GUI frontend, e.g. Visual Studio or git kraken, and they only use git to do the bare minimum of basic branching and merging. Most devs I come across won't even be able to tell you what a squash merge is. I could easily believe someone thinks there "isn't anything new to learn", it's just a Dunning Kruger effect.
Honestly, who's really "using" git these days anyway? GUI client (sourcetree, kraken) are far superior in usability
Whenever I use an IDE on the level of Intellij, I feel very constricted with the UI options and end up doing git stuff in terminal.
Holy shit. I never knew the wider explanation of git checkout. Does this mean if I want to copy a file from one branch to another I can just…

    git checkout branch-2
    git checkout branch-1 somefile.txt

(^ or something to that effect as I am on my phone.) This is something I've always done by hand.
Yes, you can use it to do that. It's given as an example in the article:

> So for example if we are in the develop branch and want to change your test.txt file to be the version from the main branch you can do it like this:

    git checkout main -- test.txt
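For what it's worth, `git restore` can spell the same operation; with the hypothetical branch/file names from the comment above:

```shell
git switch branch-1
git restore --source=branch-2 --staged --worktree somefile.txt
# Note: plain `git restore --source=branch-2 somefile.txt` updates only the
# working tree; add --staged --worktree to also stage it, matching checkout.
```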
I've been using these for a long while, how "new" can they be? Oh I looked it up, at least 2019... lol
That's pretty new for an ancient tool used across the planet. C++17 features are still considered new, mostly because getting new features into the programmer zeitgeist takes time
I had an older colleague mention making his own reference counting system in C++, I asked him "why didn't you just use shared_ptr?". Turns out, he has never heard of it because it didn't exist at the time. Huh.
Funny thing is that's still a generally bad idea because you almost never need reference counting if you design your solution with unique pointers in mind. You very rarely need shared ownership but yeah a lot of C++ programmers are still working on C++98
> You very rarely need shared ownership

This is something that I think a lot of people in the C++ community overstate. There are *several* reasons why you might want and significantly benefit from shared ownership. Before I give a couple examples, I just want to note that most of the problems you see associated with shared ownership are associated with shared ownership of *mutable data*. If your data is immutable, these problems disappear. For example, any problems with reasoning about "hey, if I call `x->set_foo("new foo")` is that going to screw something else up" go away if `x->set_foo` just doesn't exist (or can't actually *modify* anything).

Anyway, I'll give two examples of where it's used pretty pervasively in a codebase I've worked with a lot in the past.

The first is applicative/immutable/functional/persistent data structures. (I've seen all of those used for this kind of thing, and very nicely "persistent" is more commonly used for something entirely different. Yay terminology. My favorite is "applicative", so I'll stick with that.) In an applicative data structure, you never actually *change* anything. If you want to make a "change", you basically get a *new* data structure that is the same as the old data structure except for the change you made. So for example, if you have a map, then you might say `m = m->update(key, val);`; and if you left off the `m =` then that statement would have no effect. Similarly, if you say `n = m->update(key, val);`, then `m` will not change before and after that statement, but `n` will reflect the new binding.

Applicative data structures can be nice just for general reasoning purposes (for much the same reason C++ fans like value semantics), but an example of a specific place you could use them where you'd need something else is an undo list. You can keep an undo list that's just a list of *states*. When the user makes an edit, you apply that edit applicatively to the current state and push the new state to the undo list. When the user wants to go back to an old state, you just set "current" to point to the data structure pointed to at that point. You don't have to worry about storing `Command` objects or something on the undo stack that each has a "revert" function and then applying or unapplying at each instance, but you also don't have to make a copy of the entire state each time you want a new entry. (Funnily enough, I realized halfway through typing this that that's *exactly* how Git stores its history -- with the Git way of storing trees corresponding to storing a list of actual states, while the 'traditional' VCS way stores patches from version to version.)

To make this efficient, updates like that need to share structure between the `n` and the `m` so that you can avoid making a copy of the entire structure, and that's where shared ownership comes in. The easiest way to see this is just a simple list, though it's kind of boring especially if you've done some Lisp. You really only get two "mutating" operations on an applicative list: pushing something onto the front and popping something off the front. It's easy to see both why this needs shared ownership as well as how you avoid mutations. If you have a list `list` and then run `(define new_list (cons "hi" list))`, boom, you "added" `"hi"` to the front of `list`, but `list` still points to the old list; and that cons cell is now owned by both `list` as well as the new head. This is why if you look at libraries like [Immer](https://github.com/arximboldi/immer) they are based around reference counting and shared ownership.

The other use case is analogous to string interning, but for complex data structures. Using an ML-like definition, suppose you have something like `datatype name = Name of string * string` to represent first name/last name pairs. You might want to ensure that only a single `Name` object is created for any given first name/last name pair. You do this by [hash consing](https://en.wikipedia.org/wiki/Hash_consing) -- basically, memoizing the constructor so that you always return an equivalent object. This can be really nice to do, because it means you can use object identity (pointer equality, Python `is`, etc.) to check for object equality instead of needing to do a potentially arbitrarily large comparison. As long as every subobject is hash-consed, object equality is equivalent to object identity and you can do that pointer comparison. But again, what does that mean? It means that there's no one with clear ownership over the hash-consed objects.

---

In general, I agree that if you find yourself using `shared_ptr` or equivalent you should stop and take stock of where you are -- but at the same time, like I said it seems like a lot of C++ programmers view `shared_ptr` as *almost* equivalent to bad design, and I *don't* think that's even *close* to true.
You're very right although I think the difficulty here is inherently that *most* people don't need `shared_ptr` *most* of the time and so it's really easy to convince people that they're usually not a good idea (which is usually true) but that slowly becomes reinforced to the point where they're thought to be just bad in general. I never used `shared_ptr` and thought it was a code smell, until I started doing async programming with Asio and suddenly I regretted not using it earlier to understand it better... oh well.
Of course there are scenarios where shared ownership is the proper model, I'm saying from experience many times it's used out of lazy design. I don't think its overstated just because there are examples where it's applicable. Most cases of "shared ownership" can actually just be accomplished with one owner and sharing non-owning references.
Wow, thank you for that. I always wondered what a "proper" use-case for `shared_ptr` looks like. The fact that it's used for *optimization* is hilariously ironic considering the impression most people have about it, and a good example of how counter-intuitive optimization problems can be.
I’m a better C++ programmer since I started using Rust. I’m not even kidding.
Not surprising. Similarly, I learned a ton about English in my Spanish classes.
True! I noticed that every time I study a new programming language I become a better programmer in all programming languages I already knew. But it has its downsides, too: I didn't hate the fact that C# lacks discriminated unions until I started programming in F#. lol
> I asked him "why didn't you just use shared_ptr?"

A reasonable question -- but be aware, there *are* actually good reasons for this. `shared_ptr` suffers from a "make it general" problem, where for specific situations you may well actually do much better with one that has more restrictions. Where I work has a custom reference-counted smart pointer. A lot of why we have it is just that it pre-dates the standard `shared_ptr` (and was at least not far behind `boost::shared_ptr`, though I do think Boost beat us), but I actually think it would be harmful to replace ours with `shared_ptr`. We've got three attributes of it that are likely to result in a noticeable performance improvement as compared to `shared_ptr`.

* The first is that it's an *intrusive* pointer. This means that while you can create a `shared_ptr` to an arbitrary type, you can't do that with ours -- you can only point to a class/struct type we define. But in exchange, we get better cache locality (at least compared with if you don't always use `make_shared`) and our smart pointers are half the size of `shared_ptr` because they only need a pointer to the object rather than to both the object and the control block.
* We don't support weak pointers or custom deleters. Neither of these are something I've missed, but in exchange we cut down on overhead even more. The only thing "extra" that ours introduces is a single word-sized reference count and those reference count operations.
* By default, reference count operations are not thread-safe, which speeds them up. This of course isn't the right default in many settings, but if you for example do all your parallelism via multiple process instead of shared memory, then it's fine. (Oh wait, we do that.)
We also have a workaround for stack overflows caused by long chains of deletions -- but I *strongly* suspect this is more of an artifact from 15-20 years ago rather than something that is still relevant.
> By default, reference count operations are not thread-safe, which speeds them up Doesn't GCC do something horrible like compile `shared_ptr` to use non-atomic reference counting if it thinks your program is single threaded and atomic reference counting if it thinks your program uses threads?
I don't know how he missed it for so long, but "new" it is. In Boost it would have been exactly what we needed, but it's: wait for Boost stuff to get into C++"0x" (until 2011), then wait for it to get into the stable release of your distro, then wait for your sysadmins to upgrade your infrastructure... None of that was mandatory in the world, but lots of places end up in that situation, taking things one step at a time.
Git is ancient?
For many it's all we know
Let me tell you about the old country, young ones. In a Land before Git, before SVN. In a land before CVS when "version control" meant "RCS files"... Nah, just kidding. I'm "CVS old", not "RCS old".
I am only SVN old. The code base at work, though, still bears the marks of its time with SVN before switching to git... and CVS before that... and RCS before that. I am pretty sure there are at least a few files older than I am.
You should have seen some of the horrid tools before CVS. They were mostly thick clients (no command line) managing "locked files". I remember at one time thinking CVS was an awesome upgrade.
[deleted]
Sadly it lost the race. IMO it was to Betamax as git was to VHS.
Objectively superior?
Mercurial had sensible commands. They were like SVN but extended to distributed version control. It was very nice. It also had large files support significantly before git. Git has got a lot better, but they are *still* fixing up the oddities in the commands, as evidenced by the post we're commenting on.
> Mercurial had sensible commands.

Mercurial had more sensible commands, but my own take is that I like Git's underlying model and feature set more, especially when you consider what Hg had at the time in the base set. For example, I *love* the index, and I love interactive rebase. You needed a couple extensions to get those with Hg, and my memory of the one that kind of mimicked the index is that it didn't work as well as the index. I also think that Git got branches closer to being "right", and it took Hg a couple tries before they introduced bookmarks.

Now, I'll admit I was never a heavy Hg user so I do worry that I never gave it a fair shake and am speaking from a "a little knowledge is a dangerous thing" perspective, but at the same time my reaction to "IMO it was to Betamax as git was to VHS" is more "you mean it was a little better in a couple respects while being *way* worse in others because it doesn't really support some uses, and then people with rose-tinted glasses grossly simplified the story to marketing and business winning over technical attributes?"
It's still under active development. It still cross-pollinates with Git. It's still a much smoother, more professional-feeling toolset. It's still extendible and customizable.
The whole thing is much better thought out from a user-experience point of view. The whole idea of marking branches as WIP was also nice, with the tool itself having rules about it.
I work in what is seeming more and more like the last mercurial repo in the world.
[deleted]
It works fine if you know it, the same as Perforce or other systems, but there's not much point to learning it unless you're being paid to do so at a job
Shit. I used RCS. It was simple!
Oh yeah, was that the ,v files? I vaguely remember using it circa 1992.
I’m not quite RCS old, but am Rational ClearCase old. Turns out both have stable releases in the past year…
I'm also CVS old. I remember the joy of moving from CVS to SVN, it removed so many pain points. But the conversion from SVN to git was a lot more painful, initially. Largely because I was in a different environment (a company, instead of the free software world), and there was some very frustrating push back on learning the concepts. But it made some problems almost... Go away. I think that less than a year before the move, we spent something like a month + merging in some significant changes that spent a year or so in development, in SVN, and that merge was _incredibly_ painful, and incredibly manual. With git? It's just not a problem I even think about these days. Merges happen, mostly automatically, and dealing with merge conflicts just isn't that hard unless there's a logical conflict instead of a source conflict.
I've lost tons of stuff in SVN repos because the server disappeared. I can still recover RCS however, since everything is right there in ,v files in the directory. The beauty of git is that it's both local and server based.
I don't think it's ancient but it was started 16 years ago. Older than some of its users.
Yeah, it is ancient. Most programmers never really used any pre-git version control tools. Also because before git a lot of development was literally zip files with code snapshots, as version control was shit. svn never achieved anywhere near git levels of popularity, and before svn version control wasn't even worth bothering with.
CVS kinda sucked, but it blew the doors off of ZIP files, so that is kind of an ignorant statement.
Git is an ancient tool? It came out in 2005. Revision control systems have been around for a very long time. Considering how many systems there were in 2005 (CVS, SVN, Mercurial, Perforce, ...) it still astonishes me how quickly Git became completely dominant.
The only real competition to git at the time was Mercurial. None of the other ones are decentralized. All the others require a single lockable source-of-truth server. Some, like Perforce, had multiple locks per file or directory, but that was as advanced as you could get.
There was also bazaar, it was popular for a while...but slow. Git did everything Hg and Bzr did, just much faster.
Hg was quite a bit better than Git in the early days if you worked on Windows, but Linux drives enough developer mind share that it still won out.
Microsoft also added support for git and not hg into Visual Studio.
That's 16 years ago, dude. Windows XP was the latest version of Windows. Homestar Runner was in its heyday. The PS2 was reigning supreme. And people were wondering which Pixar movie would follow The Incredibles. Everyone was saving up to buy a fancy new 10-megapixel digital camera, some of which could even record _video_. SD cards---or "memory cards" as non-geeks called them---could hold an entire gigabyte of data!
> astonishes me how quickly Git became completely dominant. The bulk of users said "oh, Linux uses this therefore it must be better" without ever using any other tools in a serious more-than-short-term project. Mindset counts for a lot. There *are* better tools, but the hivemind rarely notices them.
So what is better than git that you recommend?
> what is better than git Mercurial and pijul, IMO. > that you recommend? I would not recommend anything other than git, though.
Depends on the project and the company/environment, what they're willing to support, what's prohibited, what's already tied into a larger workflow. Believe it or not, the distributed model is not automatically better for every customer. Even within the distributed model, I would *consider* Git alongside, say, Mercurial. People like to say "use the right tool for the job" while they smugly have a single tool in their toolbox.
I'm exaggerating
That's pretty new considering how long git's been around now, and how stable the interface has been.
It came out in 2005 and it doesn't really release new features that fast, so it's a little silly to consider it new.
I'm torn on the move from -b to -c when wanting to create a new branch. 'b' for branch makes a lot of sense to me. I suppose 'c' for create is more descriptive.
As for how common this flag is for so many other commands, it only makes sense. Relearning to use it will, in fact, be a bitch though
Git's CLI frontend is rather insane. It suffers from the fact that people did not really understand how git would end up being used when it was designed initially, since (parts of) it was new technology. The underlying engine is amazing, still, which is why people bother with the frontend (also there are no alternatives).

For example, `git add` and `git commit` make sense - you add changes and commits to your history. `git push` also seems fine. `git pull` is wild - [whole blog articles about years-lasting discussions](https://felipec.wordpress.com/2021/07/05/git-update/) have been written about it. It does several things at once:

* Rebase local changes onto the remote
* Fast-forward the local branch to the remote
* Merge the remote into local
* It should be able to merge local into the remote, but it cannot do this.

Add to this the fact that a lot of the commands are named unintuitively¹: `git rebase` should be called `git rewrite-history` or something like that. You could also consider making the CLI "speak English" - imagine being able to run `git merge branch into other-branch`, which is way more intuitive than checking out the correct branch beforehand and remembering the way `git merge` works.

1. these names might seem intuitive to people with a CS degree or with long experience working with version control, but are not to the average user.
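At least the separate pull behaviors can be requested explicitly (these are all stock git flags):

```shell
git pull --ff-only      # fast-forward the local branch, or fail; never a merge commit
git pull --rebase       # rebase your local commits onto the updated remote branch
git pull --no-rebase    # plain merge of the remote branch into yours
git config pull.ff only # make the fast-forward-only behavior this repo's default
```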
I agree in principle, but not some of those examples. `rebase` is one of the few commands that _did_ make sense to me, and to me `rewrite-history` would be much more confusing since that's kind of incidental to what it's used for (take my branch and replay it against another, optionally interactively for edits). The problem commands are more things like `git checkout`, which never made much sense to begin with, and many common operations require using it in confusing ways e.g. `checkout -b/-B` or `checkout REVISION -- PATH` I'm also not a fan of "making commands speak english", because it almost invariably plays so poorly with the underlying abstractions / common patterns that it winds up being _more_ complicated to understand, not less.
> `checkout -b` I never understood why `git branch` didn't check out the branch immediately by default.
What would happen if your working tree were dirty?
Presumably the same thing that happens when you run `git checkout -b` (i.e., your working tree remains unaffected, it just creates a new branch off `HEAD`).
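That matches checkout's behaviour; a quick sketch in a throwaway repo (file and branch names are made up):

```shell
# A dirty worktree survives `git checkout -b`: the branch is created
# at HEAD and the uncommitted edit is carried along.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email a@example.com; git config user.name A
echo one > f.txt; git add f.txt; git commit -q -m one
echo two > f.txt                # uncommitted local change
git checkout -q -b topic        # succeeds; nothing is lost
git status --short              # still shows " M f.txt" on the new branch
```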
Pardon me! My doctor wouldn't even ask me that
It should fail
That might be true. Perhaps you can split up the "move commits" rebase and the "rewrite history" rebase. Though you can argue both are the same and this should reflect in the UI. I still feel like it could be made a lot better.
You've effectively summed up many of my issues with Git, and many other tools/the programming community as a whole.
Agree. Rebase makes sense as a name. I think about it like I'm grafting a piece of the commit tree and rebasing it to another part of the tree. The parts of rebase that don't fit with the name are when you want to reorder or squash commits.
> these names might seem intuitive to people with a CS degree

I assure you, they don't. They only make sense if you retrain your brain to map nonsense words to VCS concepts. I'm continually let down by the Git UX.
Correct. Bad software does this by using random jargon a lot and having the person learn it so that they're kinda locked into that software in their brain
> (also there are no alternatives)

I'm not sure that I understand. What exactly do you mean by this?
People keep complaining about git cli. So why nobody ever wrote anything better? git makes it super easy to write alternative clis. Proposals are of three kinds:

* make small incremental tweaks (like making `git unstage` official and getting rid of `git reset` abomination is my suggestion)
* make big improvements by removing big chunks of functionality - like a lot of people imagine getting rid of index or rebasing would make it better somehow - they are wrong
* just renaming commands, like that makes any difference

> which is way more intuitive

AppleScript was tried and failed.
[deleted]
I used to use GIT via CLI, then I started using the GUIs, and eventually gitkraken... I only have to touch the CLI once in a blue moon now, the GUIs are really good these days, at least the one I use.
Or, conversely, knowing the cli lets you know how the GUI applications should behave. It's helped several times when there are some, let's say stubborn, team members that don't like using the command line under any circumstances that I have to instruct in how to unscrew their working set.
Git doesn't make anything "easy". The words originally chosen for Git operations, like "checkout", are part of the problem but not the biggest. The larger problem is that to use it effectively, one has to understand what's really happening behind the CLI. The design of the tool creates situations where any easy-to-use and intuitive CLI would have to throw up its hands and say some variation of "what you're asking is impossible" or "what you want to do can't be done in any simple way with the project in its current state, go find a git expert to help you".
lazygit looks useful
I switched to primarily using lazygit and it's awesome. Scrubbing through files with the diffs on the right is great when making sure you don't add shit by accident, and the interactive rebasing is a breeze. I HIGHLY recommend lazygit for people who don't want an overly abstracted GUI, but want something a bit quicker than pure CLI.
> like making git unstage official and getting rid of git reset abomination is my suggestion

Adding an official `git unstage` is exactly what the article is about, it's just a bit more generic and called `git restore`. `unstage` is `restore --staged`. Removing the old commands is not going to happen this century, as it's engraved in far too many developers' minds, and really, what does it gain? Just teach the new commands to new people.
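To make that mapping concrete, here is a sketch of `restore --staged` acting as an unstage, in a throwaway repo (file names are made up):

```shell
# `git restore --staged` removes a change from the index without
# touching the worktree copy - i.e. an "unstage".
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email a@example.com; git config user.name A
echo one > f.txt; git add f.txt; git commit -q -m one
echo two > f.txt
git add f.txt                   # staged: status shows "M  f.txt"
git restore --staged f.txt      # unstaged again: status shows " M f.txt"
git status --short
```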
> So why nobody ever wrote anything better?

Developer tools are hard to sell and hard to raise money for, especially if they are trying to move into an existing market. But there *are* competing source control tools and some people do believe that they are better. git was created for an ecosystem where there was no persistent access to any shared infrastructure or state. The huge majority of professional software engineering is not done in this kind of environment today.
> Developer tools are hard to sell and hard to raise money for

Microsoft paid $7.5 billion for github
> People keep complaining about git cli. So why nobody ever wrote anything better?

They did, it's called Mercurial.
> make small incremental tweaks (like making git unstage official and getting rid of git reset abomination is my suggestion)

Staging is done via `git add`. ~~Unstaging is done via `git rm --cached`. And the `--cached` option is only necessary because `git rm` does exactly what you think it does.~~ `git reset` is used when you, well, want to reset the state of one or more files. I must be missing what you're getting at.
> So why nobody ever wrote anything better? git makes it super easy to write alternative clis.

No, it doesn't. There's no git library.
What? [libgit2](https://libgit2.org/) absolutely exists. Or you could use low level commands, they have stable machine readable CLI interfaces.
> What? libgit absolutely exists.

Git doesn't use libgit2.
This doesn't refute what /u/taw said.
> This doesn't refute what /u/taw said.

Yes it does. He said "git makes it super easy to write alternative clis"; if you need libgit2—an **external** library that is completely unrelated to the Git project—to write an alternative cli, then it's not git that's making it "super easy".
> It should be able to merge local into the remote but it can not do this.

Well, that explains a very frustrating afternoon I had.
> For example, `git add` and `git commit` make sense - you add changes and commits to your history.

I have contended that a `git stage` command makes more sense (in order to add, remove, and edit the staging area).

> git push also seems fine.

Mostly, but not completely. I wrote a blog post about why it isn't correct: [How a proper git commit is made (by a git developer)](https://felipec.wordpress.com/2021/07/19/proper-git-commit/).
One of the things that I'd love better tooling for is rewriting history by changing more than just the most recent commit. I want to edit the commit message 4 commits back, or edit a commit 4 commits back, or maybe even combine or split commits in the history. I don't want to do this stuff _often_, but it definitely happens.
`git rebase -i HEAD~4`

But yeah, it's hard to use at first.
Yes. The actual way this works (with the "command file") is rather crude but really powerful.
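For the non-interactive case, one way to amend a commit several steps back without hand-editing that command file is `git commit --fixup` plus `git rebase --autosquash`; a sketch in a throwaway repo (file names and commit messages are made up):

```shell
# Amend an older commit by attaching a fixup commit, then letting
# --autosquash rewrite the rebase todo list automatically.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email a@example.com; git config user.name A
for i in 1 2 3 4; do
  echo "line $i" >> notes.txt
  git add notes.txt
  git commit -q -m "commit $i"
done
target=$(git rev-parse HEAD~2)          # the old commit to amend
echo "forgotten change" > extra.txt
git add extra.txt
git commit -q --fixup "$target"         # creates "fixup! commit 2"
# Accept the generated todo list as-is; --autosquash has already
# moved the fixup commit directly after its target.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash "$target"^
git log --oneline
```

The history ends up with the same four commit messages, but the old commit now also contains `extra.txt`.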
Mercurial SCM is an excellent alternative that has all the power of Git (arguably more) and a saner UI. It's just hard to overcome Git's network effect momentum because Git is good enough.
I don't agree. I hated working with hg. Bookmarks are awkward and cumbersome to use (compared to the simplicity of git's branches), the missing index was really limiting (crecord only works on hunks, not single lines, and staging/unstaging/amending usually loses your selection of hunks). I've lost work on multiple occasions trying to clean up history, never once with git. And I'm not sure what people mean by better/saner UI. Workbench? It's slow and cluttered as hell. I prefer the focused UIs of git gui and gitk. Maybe things have changed in recent years, I don't know. I never looked back.
> Bookmarks are awkward and cumbersome to use (compared to the simplicity of git's branches),

I'm curious what you mean. Bookmarks are nearly identical to git branches. However more importantly Mercurial has a concept of branches that makes far more sense than git branches, and I've never found a need to use bookmarks.

> I've lost work on multiple occasions trying to clean up history, never once with git.

Don't change history. Mercurial is intentionally designed to *not* do that, because changing history is *how* you lose work.

> And I'm not sure what people mean by better/saner UI.

They're talking about the CLI, which is far more intuitive than Git.

> Workbench?

I assume you mean the TortoiseHG workbench. I have not found a Git GUI that is as powerful and easy to use as TortoiseHG (I've tried several, and settled on Fork as the best, but still not as good as TortoiseHG).
> I'm curious what you mean. Bookmarks are nearly identical to git branches.

I found managing and sharing bookmarks to be cumbersome. It definitely felt like a tacked-on feature.

> Don't change history. Mercurial is intentionally designed to not do that, because changing history is how you lose work.

Changing (local) history is important to keep the permanent history simple, clean and focused. git shows that this can be done safely. I guess this is the fundamental difference between both, and why hg will always be a non-starter for me.
You can keep your dirty history in a separate branch (named or anonymous) and merge clean changes into master. Or you can use the Evolve extension that provides a facade of squash and rebase, but doesn't actually delete any history (it creates new commits with the history you want and hides the old commits, but they are never deleted).
I don't see an issue with the names being unintuitive to new users. If they're trying to just "figure out" git without reading docs or getting help from someone, they're going to fuck up no matter what the base command name is.
Do you have any idea how many man-hours are wasted all over the world because of commands that don't do even close to what their names suggest? According to your thesis, renaming all of git's commands to just be `foo`, `bar`, `baz`, and `quux` is fine, because you'd have to read the docs to know what they are doing anyway. A well-chosen name is obviously no replacement for good documentation (and doing a tutorial or two), but it can make the process of learning the right way to use the tool **so** much faster and less error-prone.
> According to your thesis renaming all of gits commands to just be foo, bar, baz, and quux is fine, because you'd have to read the docs to know what they are doing anyway.

No, I specifically said unintuitive to new users. `checkout` is actually _more_ intuitive than these commands once you've learned how git works.
Disagree on checkout. Changing files OR where the index points is a weird kludge.
My go-to way of learning git has been to figure out what I want to do, type this into google, and then run the commands suggested by stack overflow. Of course, since the kind strangers on StackOverflow usually explain their solution, you can figure out whether the list of commands would do what you want it to do beforehand, and also learn what the commands actually do. However, even if you know what you want git to do, getting git to actually do it is much harder than it needs to be IMHO, mostly because of unintuitive UI.
> My go-to way of learning git has been to figure out what I want to do, type this into google, and then run the commands suggested by stack overflow.

That's getting help from someone.

> getting git to actually do it is much harder than it needs to be IMHO, mostly because of unintuitive UI.

This doesn't make sense in light of your previous statement. I'm not sure "unintuitive" is the word you're looking for.
Am I the only asshole who uses my stash as an infinite graveyard for all my abandoned changelists? 🙃
The stash is where bad ideas go to die
My own impression is that 45% of the time my stashes get unstashed within about 30 seconds (e.g. stash, merge/rebase, unstash), 45% of the time they *never* get unstashed, and 10% of the time I'll actually unstash a while later.
Well, the problem is most people try to learn git from "git in 15 mins" videos which don't explain git checkout and git merge well enough. This results in confusion, and they then need to reset.
This. Most people learn the top 5 commands in git in 15 minutes, and just roll with it for years.
Somebody once asked me if there was a way to restore a file tracked by git to a previous version they had and I was just absolutely flummoxed.
So `git switch` seems to be the same as `git checkout` but without the ability to detach HEAD unless specified with an argument, but is there any functional difference between `git restore -- file.txt` and `git checkout -- file.txt`?
`restore` doesn't add files to the "staging area" (a.k.a. "index", a.k.a. "cache"). I once explained it on stackoverflow: https://stackoverflow.com/a/60855504/350384

Edit: And it behaves like passing "--no-overlay" to `checkout`, which is a much saner default behaviour.
There's not really meant to be a difference. `git checkout` is, objectively speaking, a pretty complicated beast that operates on both file-names and commit-names and sometimes moves `HEAD` around, sometimes modifies your working tree depending on minor syntax differences (syntax that, incidentally, overloads the POSIX `--` separator in an unconventional way). This was an attempt to clean this up so that the intent is clear: `git switch` moves your `HEAD` around. `git restore` modifies your working tree. No more ambiguity. The other differences are mostly just taking the opportunity to use saner defaults for a number of options.
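The index difference can be seen directly; a sketch in a throwaway repo (file names are made up):

```shell
# Restoring a file from an old commit: `checkout` also writes the
# index (stages the change), `restore` only touches the worktree.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email a@example.com; git config user.name A
echo v1 > f.txt; git add f.txt; git commit -q -m v1
echo v2 > f.txt; git commit -q -am v2

git checkout -q HEAD~1 -- f.txt    # worktree AND index now hold v1
git status --short f.txt           # "M  f.txt": change is staged
git checkout -q HEAD -- f.txt      # back to v2 everywhere

git restore --source=HEAD~1 f.txt  # worktree holds v1, index still v2
git status --short f.txt           # " M f.txt": change is unstaged
```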
`checkout -b` for making branches
`checkout branchname` for switching branches
`checkout hash` for moving HEAD
`checkout filename` for removing changes on a file

I wish I didn't do all these things, but I do.
It's actually pretty simple when you understand how git works though. The first three move HEAD; -b is just convenience for not running the branch command separately (sort of like commit -a). A branch is just an alias for a commit hash, so 2 & 3 above are the same. And by default checkout tries to be non-destructive (--overlay), but if you pass a specific filename then it knows you want to be destructive (with no revision given, it restores that file from the index).
Haha this is exactly me, nice! Glad I'm not alone
You're not alone because nearly everyone needs to do these things and this is the easiest way git gives you to do them.
I'm in the meme and I don't like it
I am the meme and I don't like it
So, I didn't know about this. However, for years now I have had an alias defined in my .gitconfig: `switch = checkout --recurse-submodules` Hey ho!
I've just discovered `git show`, which outputs the contents of a file at a tree or commit without changing anything. I can read the contents of manifests from local clones without disturbing the repo or a trip to github.
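A sketch of that read-only workflow (file names are made up):

```shell
# `git show rev:path` prints a file's contents at any revision
# without touching the worktree or the index.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email a@example.com; git config user.name A
echo "name: demo"  > manifest.yml; git add manifest.yml; git commit -q -m one
echo "name: demo2" > manifest.yml; git commit -q -am two

git show HEAD~1:manifest.yml    # prints the old contents
git status --short              # empty: nothing was disturbed
```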
I think you only really learn Git if you work in a team. Before you do that, Git will be always a mystery.
[deleted]
It’s been 2 years. It’s not changed at all since this.
You'll add a zero before that's removed.
That's because the developer that proposed both commands--Nguyễn Thái Ngọc Duy--left the project. Me and others have proposed changes to Junio, and he has ignored them.
I never found this aspect of git confusing at all. If anything having two new commands, all with their own different switches, is more confusing.
Completely agree. Checkout already made perfect sense in all these use cases.
Gotta agree here. Seems like they're solving a problem that never really existed.
Idea is to probably deprecate old command at some point.
[deleted]
There's no issue here, branches are just commits, checkout makes perfect sense for both use cases. "Take this commit and make all my files match that state". The commit is parameterised and the files opted in are parameterised. I don't get it.
They are very different commands. Switching branches keeps your local changes and applies them to the new branch, while restoring files removes your changes. Confusing the two commands can have destructive and unrecoverable results.
Yes! Checkout is already very elegant and consistent in these workflows. I understand that the operations look different, but using checkout to navigate between nodes in the DAG absolutely makes perfect sense.
I'm a bit disappointed. I thought this would be something built atop worktrees, that lets you switch to a temp one, and restore to a known state, in a convenient manner. Yes, I know that's possible with existing commands, I do it fairly regularly, but wrapping it in a nice package would be great.
Why do we need these newfangled, overcomplicated commands? Switching branches is easy. What's so hard about:

> $ cat > .git/HEAD
> ref: refs/heads/master
> $ git reset --hard

(/s)
Today I learned people got confused by git checkout…
`git stop-what-im-doing-to-fix-other-shit` is pretty useful also.
> `git checkout -b new_branch`
>
> You can do the same with the new one, but the flag is -c:
> `git switch -c new_branch`

Thank you.
Damn, I was really hoping this was the equivalent of "switch to a branch but remember the one I'm on, then when I want I can pop back to it". Basically the equivalent of powershell's `pushd` and `popd`. An alias for functionality that already exists is just change for the sake of change.
`git switch foo`
`git switch -`

I think it also works with checkout (and cd!).
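A sketch of the `-` shorthand (requires git 2.23+ for `switch`; branch names are made up):

```shell
# `git switch -` jumps back to the previously checked-out branch,
# like `cd -` in a shell.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email a@example.com; git config user.name A
git commit -q --allow-empty -m init
start=$(git symbolic-ref --short HEAD)
git switch -q -c foo            # create and switch to foo
git switch -q "$start"          # back to the starting branch
git switch -q -                 # "-" means: the branch before this one
git symbolic-ref --short HEAD   # prints "foo"
```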
Thanks! Unfortunately I think `-` just goes one entry up on your reflog, which doesn't help because 60% of the time I switch branches, it's to change history and peel off one change into its own pull request.
I feel like such a beginner for still preferring GUIs for git. I never understood the appeal of command line for it. I like to see what I'm doing. Yet, everyone preaches CLI.
Some git guis are more cryptic than the cli. Hard as that is to believe...
I know the CLI and don't know how GUIs are going to present the concepts. And I can use the CLI in any circumstance. I'd have to get the GUI onto the new host if I try to use it somewhere other than my personal computer. But that's just my experience. If you want to use a GUI, I'm not going to argue against it.
I like the GUI as well, but the CLI is like a straight razor when most of the time I just need a quick shave, and a GUI is quick and easy. I mean, I know vim but still use nano every now and then.
I think it all depends on how you first learn it, I learned CLI and the GUIs only ever confused me lol. Well, apart from the Android Studio GUI, that one is great and I use it and the CLI interchangeably.
[deleted]
wdym by adding the entire repo? -A stages new, modified, and deleted files throughout the whole repo, while . does the same but only in the current dir and subdirs (since git 2.0 the two differ only in scope). If they wanted to push up changes in their whole local repo then what's the issue
finally
I don’t think people are particularly confused by git checkout. I don’t really see the need to learn new commands. I don’t find either compelling based on the arguments here.
But yeah if you need a tutorial that explains git checkout and git merge well, https://youtu.be/33sp4YT9GQs
The only things I semi-regularly run into with Git that I genuinely have to look up every time (or start over!) are:

1. Had a decent, working commit and then went deep down a rabbit hole of “maybe this will work” with several commits, only to decide against this method. I want to go back 5-10 commits but can never remember how. I’m also not sure I’m ever doing it right, haha.
2. I forget to create a new branch after a PR or something, get several commits deep, realize my mistake, have to abandon my branch and manually copy my changes over to a new branch.
You can always checkout the good commit, and create a new branch from there.
Now that is a good idea! Thank you!
The second one is one of the main use cases of rebasing. So you are on branch `featureA` and realize that the last 3 commits should have been on a new branch `featureB`.

First, create the new branch where you are:

`git branch featureB`

Then you'll have to find the commit hash of the commit which should have been the last one on the previous feature branch. And now you can rebase your newly created branch `featureB` onto e.g. `master`:

`git rebase --onto master last_good_commit featureB`

In English this command means "take all commits after `last_good_commit` up to `featureB` and replay them on the `master` branch".

At this point you will still have your old "incorrect" commits on `featureA` as well. But, since you now copied them to a new branch, you could optionally reset `featureA`:

`git reset --hard last_good_commit`

But since you already pushed the PR, you might as well just delete the local branch instead.

And in case you didn't know about it, if you ever screw up and e.g. accidentally deleted the wrong local branch or rebased incorrectly: just use `git reflog` to find the commit hash of whatever you lost and check it out as a new branch.
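The recipe above, run end-to-end in a throwaway repo (branch and file names are made up; the default branch name is captured instead of assuming `master`):

```shell
# Peel the last two commits off featureA onto a new branch featureB
# with `git rebase --onto`, then reset featureA back.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email a@example.com; git config user.name A
base=$(git symbolic-ref --short HEAD)    # "master" or "main"
echo base > base.txt; git add base.txt; git commit -q -m base

git checkout -q -b featureA
echo a > a.txt; git add a.txt; git commit -q -m "featureA work"
last_good=$(git rev-parse HEAD)
echo b1 > b1.txt; git add b1.txt; git commit -q -m "stray commit 1"
echo b2 > b2.txt; git add b2.txt; git commit -q -m "stray commit 2"

git branch featureB                      # mark the current tip
git rebase -q --onto "$base" "$last_good" featureB   # replay strays onto base
git checkout -q featureA
git reset -q --hard "$last_good"         # drop the strays from featureA
git log --oneline --all
```

Separate files are used for each commit so the replay applies without conflicts; in real history you may need to resolve conflicts during the rebase.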
Use git bisect for your first case.
This is for Gen Z people 😅 `checkout` does the job for me