As far as I understand, the key concept in Event Sourcing (I’m using Greg Young’s words) is that we have an Event Store holding the events needed to rebuild the domain objects, as opposed to something storing only the current state.
Every time you delete or update something, Greg says, you lose information. It makes sense to me.
Since “conceptually the Event Store is an infinitely appending file”, rebuilding objects from events can be problematic, and Greg explained an approach using snapshots.
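To make the snapshot idea concrete, here is a minimal sketch of snapshot-assisted rebuilding. All the names (`Event`, `Snapshot`, `rebuild`, the bank-balance domain) are my own hypothetical illustration, not Greg Young’s actual API:

```python
# A minimal sketch of rebuilding state from an append-only event store,
# optionally starting from a snapshot instead of replaying everything.
from dataclasses import dataclass

@dataclass
class Event:
    amount: int  # positive = deposit, negative = withdrawal

@dataclass
class Snapshot:
    version: int  # index of the last event folded into this snapshot
    balance: int

def rebuild(events, snapshot=None):
    """Replay events on top of an optional snapshot instead of from scratch."""
    balance = snapshot.balance if snapshot else 0
    start = snapshot.version + 1 if snapshot else 0
    for event in events[start:]:
        balance += event.amount
    return balance

# The event store is append-only: entries are never deleted or updated.
events = [Event(100), Event(-30), Event(50), Event(-20)]

full = rebuild(events)                  # replays all 4 events
snap = Snapshot(version=1, balance=70)  # state after the first 2 events
fast = rebuild(events, snap)            # replays only the last 2
assert full == fast == 100
```

The snapshot is purely an optimization: the events remain the source of truth, and any snapshot can be thrown away and recomputed from them.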
I’m everything but an expert in this topic, but that reminded me of revision control systems. I might be completely wrong, but I don’t see many differences between persisting objects based on events (and sometimes snapshots) with Event Sourcing and committing changes to a revision control system.
Furthermore, that reminded me of a very nice article by Αριστοτέλης Παγκαλτζής that I read about the key difference between git and all the other revision control systems (emphasis mine):
Among the systems I did look into, there are really just two contenders: Git and Mercurial. All the other systems track metadata; Git and hg just track content and infer the metadata.
By tracking metadata I mean that these systems keep a record of what steps were taken. “This file had its name changed.” “Those modifications came from that file in that branch.” “This file was copied from that file.” Tracking content alone means doing none of that. When you commit, the VCS just records what the tree looks like. It doesn’t care about how the tree got that way. When you ask it about two revisions, it looks at the tree beforehand and the tree afterwards, and figures out what happened inbetween. A file is not a unit that defines any sort of boundary in this view. The VCS always looks at entire trees; files have no individual identity separate from their trees at all.
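The “tree beforehand, tree afterwards, figure out what happened in between” idea can be illustrated with a toy sketch. This is my own simplification of Git’s actual (far more sophisticated) rename-detection heuristics, and every name in it is hypothetical:

```python
# A toy illustration of "tracking content, not metadata": given two tree
# snapshots (path -> content), infer what happened between them by matching
# content. No rename was ever recorded; it is deduced from the trees alone.

def infer_changes(before, after):
    """Compare two trees and infer adds, deletes, and renames."""
    removed = {p: c for p, c in before.items() if p not in after}
    added = {p: c for p, c in after.items() if p not in before}
    changes = []
    for old_path, content in list(removed.items()):
        # Identical content appearing under a new path: call it a rename.
        for new_path, new_content in list(added.items()):
            if content == new_content:
                changes.append(("rename", old_path, new_path))
                del removed[old_path]
                del added[new_path]
                break
    changes += [("delete", p, None) for p in removed]
    changes += [("add", p, None) for p in added]
    return changes

before = {"util.py": "def helper(): pass", "main.py": "print('hi')"}
after = {"helpers.py": "def helper(): pass", "main.py": "print('hi')"}
assert infer_changes(before, after) == [("rename", "util.py", "helpers.py")]
```

The point of the sketch is that the rename is an output of comparing snapshots, not an input the user had to declare when committing.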
As a consequence, whether you used VCS tools to manipulate your working copy or regular command line utilities or applied a patch or whatever is irrelevant. The resulting history is always the same.
Another consequence, at least with Git, is that it can track the movement of things smaller than a file, e.g. a single function being moved from one file to another.
And that sub-file level tracking in Git is an example of how, if the VCS is improved and its tracking becomes more intelligent, your entire repository instantly benefits from this. A metadata tracking system can’t do that because the old part of your repository didn’t have the necessary metadata recorded. A file-based VCS can’t do that because it doesn’t have an innate understanding that there are interrelationships between files.
So that’s why the only contenders are Git and Mercurial.
Now I wonder: can storing metadata (with an RCS) be compared to persisting events with Event Sourcing? If so, given git’s magic capability to “figure out what happened” precisely because it stores snapshots and not diffs, could always storing snapshots be a better way to do Event Sourcing?
@gianmarcog pointed me to a very interesting post by Linus Torvalds about git tracking “_nothing_ but information” rather than the events that happened to the source code, which I find amazingly interesting, especially if I try to read it while thinking about Event Sourcing. Thanks, @gianmarcog.