Hehe, I am generally aware of the concepts* and to be honest I'd rather that the term "Cloud" was replaced by "Someone Else's Computer" as it sounds a hell of a lot less magical.
The whole 1000+ simultaneous players thing makes no sense unless you can do some very clever peer-to-peer + view distance stuff, because otherwise network traffic grows roughly with the square of the player count (every client has to be told about every other client). Even if they paid for the computing horsepower, connectivity is always the bottleneck. I suppose that you could do other clever things like shuttling people between instances based on criteria such as location/neighbouring entities/etc, but that would be a nightmare to handle without lagging. All of this at a high tick rate? Yeah... no.
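To put rough numbers on that - these are illustrative figures only, nothing to do with any real game's netcode - here is a back-of-the-envelope sketch of how fast the update traffic blows up when every client has to hear about every other client each tick:

# Back-of-the-envelope numbers (illustrative only): if every client has to be
# told about every other client each tick, update traffic grows with the
# square of the player count.

def egress_bytes_per_second(clients, state_bytes=40, tick_rate=30):
    updates_per_tick = clients * (clients - 1)   # every client hears about every other client
    return updates_per_tick * state_bytes * tick_rate

for n in (16, 64, 250, 1000):
    mbps = egress_bytes_per_second(n) * 8 / 1_000_000
    print(f"{n:5d} clients -> ~{mbps:,.0f} Mbit/s of update traffic")

Going from 16 clients to 1000 isn't ~60x the traffic, it's roughly 4000x.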
*I received my BSc in Computer Science before the WWW existed (1994!) but ended up going down the corporate IT route so am not really involved in cutting edge stuff. I can still do the maths though!
It remains the Holy Grail of online connectivity for twitch games. There is a reason that companies with vast resources still rely on instanced game sessions - even for MMOs.
The Planetside games, which are twitch based and tout the largest number of clients in a session, still lagged - badly - when more than 32 clients were in the general vicinity. And when they went for the GBWR record for the most clients connected to a single session, it was unplayable. The record was about connectivity - not playability.
Eve Online - which isn't twitch based - literally invented a mass of custom software to host their game. And even so, when an area is heavily populated, they use time-dilated updates to keep everyone in sync.
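For those unfamiliar with time dilation, the gist is simple even if CCP's implementation isn't. This is a sketch of the general idea only, not their actual code: when a node can't finish its simulation tick within budget, you slow the in-game clock for everyone on that node instead of dropping updates.

# Rough illustration of the time dilation concept (not CCP's actual code):
# when a node can't finish its tick in the time budget, slow the in-game
# clock for everyone on that node rather than dropping updates.

TICK_BUDGET = 1.0   # seconds of real time allotted per simulation tick

def dilation_factor(measured_tick_seconds, floor=0.10):
    # 1.0 means real time; never slow the simulation below 10% speed.
    if measured_tick_seconds <= TICK_BUDGET:
        return 1.0
    return max(floor, TICK_BUDGET / measured_tick_seconds)

print(dilation_factor(0.8))   # 1.0  - node keeping up, no dilation
print(dilation_factor(4.0))   # 0.25 - node overloaded, game runs at quarter speed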
The only time that "1000 client instances" makes sense is if they somehow - automagically - solve the n+1 connectivity problem. Considering the clown shoes involved in the project, that's highly unlikely. Again, we're in year 6 and they haven't progressed beyond the standard networking in the original CryEngine. So there's that.
The thing with cloud servers like AWS & GCE is that you can do all kinds of nifty things with them. But they were never designed for the demands of twitch based games. That's why very few use them for that. Heck, even some of my friends working on games for Microsoft with Azure are finding this out. See the upcoming Crackdown game.
Basically, you can't have "1000 client instances". What you can have is "1000 client sessions" via inter-instance communications. This - which is basically rocket science - looks something like this:
i1 (n = 250)   // instance + client count
i2 (n = 250)
i3 (n = 250)
i4 (n = 250)
Those 4 are Amazon EC2 Dedicated Hosts running on Intel Xeon hardware server clusters. Also see the AMI requirement and what an EC2 instance actually is. You can also use the free tier to test your app before jumping off a cliff and actually doing it.
This is the part where panic mode sets in. See those instance types, bandwidth caps etc? Yeah.
Without getting too technical, in my example above they have to create 4 (or more) instances (copies) of the game.
i1 goes live, then gradually fills up with clients. As it fills up - and because AWS charges for the bandwidth you use (outbound traffic especially) - the more clients, the higher the costs. And it's actually a lot scarier than that.
i2, i3, i4, all go live - same as above.
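To give a feel for the billing side, here is an illustrative sketch. The $/GB rate and update sizes below are placeholders I made up, not quoted AWS prices; the point is the shape of the curve, not the exact bill.

# Illustrative only - placeholder rates and sizes, not real AWS pricing.
# The point is how the egress bill scales as one instance fills up.

EGRESS_USD_PER_GB = 0.09           # placeholder rate
STATE_BYTES_PER_UPDATE = 40        # made-up average update size
TICK_RATE = 30
HOURS_LIVE_PER_MONTH = 400

def monthly_egress_usd(clients):
    bytes_per_second = clients * (clients - 1) * STATE_BYTES_PER_UPDATE * TICK_RATE
    gb_per_month = bytes_per_second * 3600 * HOURS_LIVE_PER_MONTH / 1e9
    return gb_per_month * EGRESS_USD_PER_GB

for n in (50, 150, 250):
    print(f"{n:3d} clients -> roughly ${monthly_egress_usd(n):,.0f}/month in egress for ONE instance")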
Nobody in i1 is going to see or interact with anyone in the other instances. Even if you imagine this as a walled-off garden with i1-client1 parked right on the edge, he will never see i2-client1. They can't see, shoot, or interact with each other. For all intents and purposes, they know nothing about each other.
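In data terms, the walled garden looks something like this minimal sketch (the class and names are made up for illustration): each instance owns its own client list, so nothing in i1 can even reference, let alone see, a client in i2.

# Minimal sketch of the walled garden (made-up names, for illustration only):
# each instance owns its own client list, so a client in i1 has no way to
# even reference a client in i2.

class Instance:
    def __init__(self, name):
        self.name = name
        self.clients = set()

    def can_see(self, viewer, target):
        # Visibility only exists between clients that live in THIS instance.
        return viewer in self.clients and target in self.clients

i1, i2 = Instance("i1"), Instance("i2")
i1.clients.update({"i1-client1", "i1-client2"})
i2.clients.add("i2-client1")

print(i1.can_see("i1-client1", "i1-client2"))   # True  - same instance
print(i1.can_see("i1-client1", "i2-client1"))   # False - different walled gardens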
In order to have "1000 client" instances, you need to have 1000 clients in a single instance - which would mean 1000 clients able to connect to and interact with each other in the setup above. It's IMPOSSIBLE. Period. End of story. And there isn't a single Xeon hardware server on AWS that would somehow automagically spawn an instance configured for 1000 clients in a twitch based game.
If you "stitch" the instances using clever tricks, such that you have 4 instances each with 250 clients, it's no longer "1000 client" instance, but rather a "1000 client" cluster. And in order to give the illusion of 1000 clients in the world, you have to somehow come up with inter- and intra- instance communications such that, using the walled garden example above, all clients within range can somehow see, chat, engage each other.
Well, guess what? Now you're in alchemy territory. You have a situation whereby i1-client1 fires a missile at i2-client1, and that missile travels through the i1 instance, reaches the boundary where it is destroyed, and is re-created in i2 at the location of i2-client1 <---- that fool has probably already buggered off, died, etc. by the time the server code figures out that i1 just fired a missile at a target in a remote instance which may no longer exist.
It gets better. That missile, along with all the calculations for i1-client1 and i2-client1, has to be processed (God help you if you aren't using server-side arbitration - which, by the way, SC isn't) on the fly and in real time. All the time. Think of the bandwidth.
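And think of the staleness. Illustrative numbers only - the latency and speed here are assumptions, not measurements from any real setup:

# Illustrative only: how far has the target moved by the time the
# cross-instance handoff arrives? Latency and speed are assumptions.

HANDOFF_LATENCY_MS = 120    # i1 -> broker -> i2, plus processing on both ends (assumed)
TARGET_SPEED_M_S = 200      # made-up speed for a fast-moving ship

drift_metres = TARGET_SPEED_M_S * HANDOFF_LATENCY_MS / 1000
print(f"Target has drifted ~{drift_metres:.0f} m before i2 even knows the missile exists")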
Now multiply the horrendous notion above by n+1 for a whole set of clients.
Then plan to be on vacation when the AWS bill shows up for that month.
Here's the hilarious part. Instead of planning to build this from the start, much like Frontier did, they decided to just wing it. And now, six years later, they're still stuck with the basic CryEngine networking layer.
What is even more hilarious is that - right from the start - Chris (it's in the Kickstarter, interviews, etc.) claimed he wasn't making an MMO. Then, out of the blue, he was. Suspiciously, that was after it dawned on them that they would make more money by selling the entire Verse as an MMO through the sale of assets. They would never - ever - have been able to raise this much money for a single-player or session-based game. But the fact is, assuming they deliver both of these games (which imo they won't), the multiplayer is going to remain as it is now: a session-based, instanced game which will need a witch doctor to get it to handle more than 16 (let alone 1000) clients in combat.
Further reading, to see how experts who thought long and hard about this before designing it still ended up with a less-than-stellar solution to a massive problem:
VERY basic guide for ED networking
AWS re:Invent 2015 | (GAM403) From 0 to 60 Million Player Hours in 400B Star Systems
This is why most of us who do this stuff for a living, with decades under our belts, simply can't fathom how they could possibly be making these FALSE statements. Especially when you consider that, when this whole thing collapses and the lawsuits start flying, these are the sort of statements that are going to come back to haunt them.
ps: When it comes to Star Citizen, the claim of "1000 player instances" is pure fiction and rubbish.