Each client needs to send the data for one ship.
Each client needs to receive the data for 1,000 ships/objects.
Using Hanz's figures, that is 288 bits upload and 288,000 bits download (288 bits giving the XYZ coords plus the heading for each of 1,000 ships). I don't think it needs to receive the same data 1,000 times.
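For reference, here is one hypothetical way a per-ship update could come out at exactly 288 bits. Hanz's actual field breakdown isn't given anywhere, so the layout below (full-precision XYZ doubles plus three 32-bit floats for the heading) is purely an assumption for illustration:

```python
import struct

# Purely illustrative layout: 3 x 64-bit doubles for XYZ plus
# 3 x 32-bit floats for heading (yaw/pitch/roll) = 36 bytes = 288 bits.
# This is NOT a published format, just one way to land on 288 bits.
def pack_ship_state(x, y, z, yaw, pitch, roll):
    return struct.pack("<dddfff", x, y, z, yaw, pitch, roll)

record = pack_ship_state(1.0, 2.0, 3.0, 0.1, 0.2, 0.3)
print(len(record) * 8)  # 288 bits per ship
```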
The main load of the system is at the server end, because it will need to send that data to the clients several times a second. But it shouldn't be packaging all 1,000 pieces of info into one big package; it should be sending a small piece of info to each client individually. The server's bandwidth might thus be around 100 MB/s, but that should be easy enough for a data centre to handle, and the client end can make do with a much smaller bandwidth.
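A back-of-envelope sketch of that server-side figure, assuming three updates per second (the post only says "several times a second", so the 3 Hz is an assumption):

```python
# Rough server-side fan-out estimate, using only the figures above:
# each client gets a ~288,000-bit snapshot, there are 1,000 clients,
# and the server pushes it a few times a second (3 Hz assumed here).
bits_per_ship   = 288
ships           = 1_000
clients         = 1_000
updates_per_sec = 3            # assumption: "several times a second"

per_client_bits = bits_per_ship * ships              # 288,000 bits
server_bits_s   = per_client_bits * clients * updates_per_sec
print(server_bits_s / 8 / 1e6)  # ~108 MB/s, i.e. the ~100 MB/s ballpark
```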
So I cannot really follow Hanz's issue here. If the position and heading of each ship can be represented in 288 bits, then each client can receive the positional information for 1,000 vessels in only 288,000 bits. Even if we treble that to account for things such as object identifiers, speed and other info, each client should be able to receive that data three times per second over a 3 Mbit link, not a 300 Mbit one.
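Spelling that arithmetic out, using the same numbers as above and nothing new:

```python
# Client-side downstream, per the paragraph above: 288,000 bits of
# position/heading data, trebled for IDs, speed and other info, sent
# three times per second.
snapshot_bits = 288 * 1_000          # 288,000 bits for 1,000 ships
with_overhead = snapshot_bits * 3    # treble it, as suggested above
per_second    = with_overhead * 3    # three updates per second
print(per_second / 1e6)              # ~2.6 Mbit/s -> fits a 3 Mbit link
```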
But as I said, it's late and I am likely missing something. I'm just not seeing where he is getting a requirement for a 300 Mbit link.
He seems to be envisioning some kind of peer-to-peer setup with no central authority to do the heavy lifting of collecting and distributing data, that's all.
But even so, there's a point in all of that. Sure, the data centre should reasonably have a good enough connection to keep that feed going, but remember, this is just the position data: the most minute and compact data the clients need to know, and it only covers the ships, and only at a horribly low tick rate. Now add in all the supposed damage states and character stats and ship setups and decorations and performance data and projectiles and world effects etc etc etc. The packet for each client balloons very quickly, and then you slap the O(n²) problem on top of that and bring the whole thing up to a tick rate that is acceptable in a real-time action environment.
And that's for one of these supposed fights, using a generic service not meant for that kind of connectivity and without the highly specialised custom setup that, say, CCP had to employ to get their (very low-bandwidth) 1k fights going. Even the server end will easily be choked in that situation, and it's not something you can just throw money and standardised components at to solve: CIG has to actually sit down and solve it on their end, in spite of whatever fancy network layer LY adds to the client. And speaking of throwing money at the problem, consider how much AWS bandwidth and computation would set you back in such a scenario… and again, that's just one fight. I'm guessing that both CIG and the citizens expect there to be more activity than that going on.
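To put rough numbers on the scaling argument above: once richer state and a real-time tick rate come in, the server's aggregate output grows quadratically with the fleet size. The per-entity sizes and tick rates below are hypothetical placeholders, purely to show the shape of the curve:

```python
# Illustrative only: how the server's aggregate send rate scales when
# every one of n clients must hear about all n ships each tick.
# The per-entity sizes and tick rates are made-up placeholders,
# not anything CIG or Hanz has published.
def server_bytes_per_sec(n_clients, bytes_per_entity, tick_hz):
    # each client receives n_clients entity records per tick -> O(n^2)
    return n_clients * n_clients * bytes_per_entity * tick_hz

# 36 bytes/entity (bare position + heading) at a sluggish 3 Hz:
print(server_bytes_per_sec(1_000, 36, 3) / 1e6)    # ~108 MB/s
# 200 bytes/entity (hypothetical richer state) at an action-game 30 Hz:
print(server_bytes_per_sec(1_000, 200, 30) / 1e9)  # ~6 GB/s
```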