Continued.
- The presenter in glasses said "shards" and "instances" were different things, claimed he would explain the difference, never actually did, and segued to the second presenter instead.*

He mentioned the graph database storing the state of a single shard, which is seeded when a new shard is created. That would mean no communication between shards, apart from their version of a background simulation (economy, reputation, etc.). Much like ED does it, I think.
- It is not clear if players can hop between shards. IMHO they cannot. Once in a shard, always in that shard, unless they are able to extract the part of a shard's data relevant to a player and transfer it to the graph database responsible for another shard. This could be tricky - not technically, but game-logic-wise. I won't speculate more on this topic; watching further.
- "Entity graph" as an "evolution of iCache". Not sure what it means. iCache was supposed to be a cashing layer speeding up queries, not the central store for the global state. Unless it begun as a small indexing structure and then overtook more and more information about the game world. Nobody knows what it was anyway.
- Graph databases responsible for shards are also "sharded". This has nothing to do with the sharding of the game world. In database systems, "sharding" means the database itself is distributed across multiple nodes but looks like a single entity from the outside (see the first toy sketch after these notes). If they are using a distributed graph database system, the list of candidates could be googled by someone who knows more about this topic; I only know that they exist.
- The second presenter seems to be right about a graph DB being a good match for their use case. It makes sense to me, intuitively. What he says about multi-mutation transactions (all changes succeed or all fail; second sketch after these notes) also makes sense but is very basic. Generally he explains those details very well, but at the same time what he is saying has nothing to do with the viability of the entire solution and its scalability. He explains common-sense practices, nothing groundbreaking. Kudos for the clean delivery.
- Another confirmation of Kafka (or something similar delivered by AWS) being used for the "replication layer" comes when he talks about a "persistent queue". This is exactly it - every consumer is "subscribed" to the topics it is interested in and can resume consumption at any point in the stream, even independently replaying parts of it to catch up after a failure (third sketch below). Again, common sense in similar systems.
- More on the seeding of a shard's graph database. He mentioned that seeding is also done by the replication layer. There must be a "facsimile" of fresh shard data stored somewhere. Intuitively, this means the facsimile sits in cheaper permanent storage as, you guessed it, a series of ordered messages. Seeding means these messages are "replayed" in order into the newly created shard database. If they want to modify the facsimile (because they added something to the world), they append more messages to it. A very clean pattern - essentially event sourcing (last sketch below) - and yeah, this is how it has been done for years.
- Correction on the shard assignment: there is logic to it, and they do have a way of transferring a player between shards, through the central uber-graph and "stowed" entities. Again, this raises interesting questions about the game's logic, because it allows items to be removed from one shard and appear in another. Say "bye bye" to the coffee cup you left on Magda: return to it a week later, get assigned to a different shard, and it will not exist there anymore. Another player will be able to find it, though. Mission partially accomplished, I guess.
- All the issues of "dense" situations, server authority etc remain. Sharding will help when players are distributed over a larger space but will not do much for dense situations, like a close-distance space battle. There is no mention of the "old" dynamic server meshing (distributed octree as it is in the Aether Engine) that could help in those cases.
- I think it means that @WotGTheAgent was right. This is it; there won't be much more. Instead of trying to implement dreams.txt without realising they had no viable way of doing it, CIG finally reached for common sense and clean, well-established patterns in distributed data systems, with all their limitations. They will push for larger shards as far as they can and call it a day.
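
A few toy sketches for the patterns mentioned in the notes above. None of this is CIG's actual code or API; all node names, IDs and message shapes are made up. First, what "sharding" means at the database level: data is split across several nodes behind a router, so clients still see one logical database. In practice you'd use consistent hashing so adding a node doesn't reshuffle everything; a plain modulo is enough to show the idea.

```python
import hashlib

# Hypothetical nodes holding partitions of one logical graph database.
DB_NODES = ["graph-node-0", "graph-node-1", "graph-node-2"]

def node_for(entity_id: str) -> str:
    """Route an entity to the node that owns it, by hashing its ID.

    Clients always go through this router, so the cluster looks like a
    single database from the outside even though the data is physically
    split across machines. This is unrelated to game-world shards.
    """
    digest = hashlib.sha256(entity_id.encode()).hexdigest()
    return DB_NODES[int(digest, 16) % len(DB_NODES)]

print(node_for("player:123"))    # always the same node for the same ID
print(node_for("ship:cutlass"))  # other entities may live on other nodes
```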
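
Second, the "all mutations succeed or all fail" point. A real graph database exposes this as transactions; the toy version below fakes it with snapshot-and-rollback over a plain dict, just to show the guarantee the presenter was describing. The buy-a-ship scenario is invented.

```python
import copy
from contextlib import contextmanager

store = {"player:123": {"credits": 100}, "ship:cutlass": {"owner": None}}

@contextmanager
def transaction(db):
    snapshot = copy.deepcopy(db)   # remember the state before any mutation
    try:
        yield db                   # apply any number of mutations
    except Exception:
        db.clear()
        db.update(snapshot)        # any failure rolls back ALL of them
        raise

# Buying a ship must debit credits AND set the owner, or do neither.
try:
    with transaction(store) as db:
        db["player:123"]["credits"] -= 150
        if db["player:123"]["credits"] < 0:
            raise ValueError("insufficient credits")
        db["ship:cutlass"]["owner"] = "player:123"
except ValueError:
    pass

print(store)  # unchanged: the failed purchase left no partial writes behind
```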
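
Third, the "persistent queue", modelled as an append-only log with per-consumer offsets. This is roughly how Kafka-style topics behave (each consumer tracks its own position and can rewind to replay), but the code below is a stand-alone toy, not a Kafka client.

```python
# Append-only log of ordered messages; a real broker persists this to disk.
log = []

def publish(msg):
    log.append(msg)

class Consumer:
    def __init__(self, name, offset=0):
        self.name, self.offset = name, offset

    def poll(self):
        msgs = log[self.offset:]   # read everything since the last position
        self.offset = len(log)
        return msgs

    def rewind(self, offset):
        self.offset = offset       # replay from an earlier point, e.g. after a crash

for i in range(5):
    publish(f"entity-update-{i}")

shard_db = Consumer("shard-db")
print(shard_db.poll())             # sees all 5 updates

recovering = Consumer("analytics", offset=5)
recovering.rewind(2)               # lost state? just re-read the stream from offset 2
print(recovering.poll())           # ['entity-update-2', 'entity-update-3', 'entity-update-4']
```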
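
Finally, the seeding-by-replay idea, which is basically event sourcing: the "facsimile" is just an ordered stream of messages, a fresh shard is built by applying them in order, and changing the base world means appending messages rather than rewriting state. Message shapes and IDs are made up for illustration.

```python
# Ordered "facsimile" stream kept in cheap permanent storage.
facsimile = [
    {"op": "create", "id": "station:port_olisar", "data": {"pads": 4}},
    {"op": "create", "id": "ship:starter",        "data": {"owner": None}},
    {"op": "update", "id": "station:port_olisar", "data": {"pads": 8}},
]

def seed_shard(stream):
    """Build a brand-new shard state by replaying every message in order."""
    state = {}
    for msg in stream:
        if msg["op"] == "create":
            state[msg["id"]] = dict(msg["data"])
        elif msg["op"] == "update":
            state[msg["id"]].update(msg["data"])
    return state

# Adding content to the base world = appending messages, never rewriting history.
facsimile.append({"op": "create", "id": "planet:new_moon", "data": {"biome": "ice"}})

print(seed_shard(facsimile))
```

The appeal of this pattern is that the same replication machinery that carries live updates can also rebuild a shard from scratch, which matches what the presenter described about seeding going through the replication layer.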
The sad thing? They could have implemented what they are showing in this video years ago.
* Or not. He actually meant that an "instance" is the same thing as an "AWS instance". It contains all the data for a game session, and when it dies, its 50-player universe dies with it. This is not the same as what "instance" usually means in MMOs. Depending on the source, instances and shards mean more or less the same thing - a separate copy of the game world, or of part of it, with its own independent state and a limited number of players who can interact with each other.