He was describing the bandwidth requirements for an instance with 1000 players.
Assuming a client-server setup, the positional and heading data for each ship would take up 288 bits, sent by each client to the server 3 times a second. So client upload bandwidth requirements would be 864 bits every second. Of course, in reality there'd be a good bit more information sent, but Hanz was simply looking at position and heading.
The server, however, would have to update 1000 clients with the information from 1000 ships... so its data size would be 1000 times greater, and sending that to 1000 clients means its bandwidth requirements would be a million times more.
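For concreteness, the arithmetic works out like this (the 288-bit figure for position plus heading is taken from the post above; everything else just follows from it):

```python
# Back-of-envelope bandwidth math for 1000 players at 3 updates/sec.
# The 288 bits per ship (position + heading) is the figure from the post.
BITS_PER_SHIP = 288
UPDATES_PER_SEC = 3
PLAYERS = 1000

# Each client uploads only its own ship's state.
client_upload_bps = BITS_PER_SHIP * UPDATES_PER_SEC  # 864 bits/s

# The server must send every ship's state to every client.
server_per_client_bps = BITS_PER_SHIP * PLAYERS * UPDATES_PER_SEC
server_total_bps = server_per_client_bps * PLAYERS

print(client_upload_bps)   # 864 bits/s per client
print(server_total_bps)    # 864000000 bits/s total out of the server
print(server_total_bps // client_upload_bps)  # 1000000 -- the "million times"
```

So the server's outbound total lands in the hundreds-of-megabits range, while each client's upload stays under a kilobit per second, for this stripped-down data set.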
But even then....chicken feed for a data centre.
So I am not sure why he thinks each client would require a 300 meg link just for positional and heading data, when it appears only the central server might require that type of bandwidth.
Again, that part is because he's envisioning a P2P scenario.
But it's not (just) a matter of bandwidth. It's a matter of actual data processing of an O(n²) problem in a real-time environment. The positional data is just an example of how even the tiniest thing will rapidly run away to the point where even something as simple and basic as sheer bandwidth becomes worrisome.
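To put a number on the O(n²) point, here's a minimal sketch (illustrative only, not anyone's actual code) of the naive per-tick loop where every ship's update potentially has to consider every other ship:

```python
# Minimal sketch of why per-tick work is O(n^2): every ship's update
# potentially has to consider every other ship (proximity, collisions,
# interest management, and so on). Illustrative, not real engine code.

def pairwise_work(n_ships: int) -> int:
    """Count the pair interactions one tick has to examine."""
    interactions = 0
    for i in range(n_ships):
        for j in range(n_ships):
            if i != j:
                interactions += 1  # one "does ship j affect ship i?" check
    return interactions

print(pairwise_work(10))    # 90 checks per tick
print(pairwise_work(1000))  # 999000 checks per tick
```

At 3 ticks a second, 1000 ships means roughly 3 million pair checks per second just to decide who needs to hear about whom, before any actual simulation work happens.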
On the client end, the actual bandwidth requirement may not be all that bad, even with all the extra gunk CIG wants to throw in there, but even there, you have to actually do something with all that data, and that gets messy very quickly. You have to cycle through it, apply everything on every object and… oh no! New packet, let's start over! Or not? Did you go through everything? Do you wait until you can complete the last cycle, and just skip updates in-between? Oh, and this cycle, the roundtrip and timing was different so now you completed in time… I guess we just idle a while and wait for the next pa…what? Two packages at once? Damn you TCP/IP! Do we apply both? I sure hope there's nothing in there that is cumulative because then we always have to filter that out first to maintain a proper “state history”…
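One common way engines cope with the timing mess described above is to buffer incoming snapshots and apply only the newest complete one per client tick, discarding anything stale. A simplified sketch, assuming full-state snapshots with sequence numbers (this is a generic technique, not CIG's actual netcode):

```python
# Sketch of a client-side snapshot buffer: apply only the newest snapshot
# per tick, drop anything older or duplicated. Note this only works when
# snapshots carry full state -- cumulative (delta) data can't simply be
# skipped, which is exactly the "state history" problem mentioned above.
from typing import Optional

class SnapshotBuffer:
    def __init__(self) -> None:
        self.latest_seq = -1
        self.pending: Optional[dict] = None  # newest unapplied snapshot

    def receive(self, seq: int, state: dict) -> None:
        """Called whenever a packet arrives, possibly several per tick."""
        if seq > self.latest_seq:  # ignore stale or duplicate packets
            self.latest_seq = seq
            self.pending = state

    def apply_on_tick(self) -> Optional[dict]:
        """Called once per client tick; returns the state to render, if any."""
        state, self.pending = self.pending, None
        return state

buf = SnapshotBuffer()
buf.receive(1, {"ship_42": (0, 0, 0)})
buf.receive(3, {"ship_42": (2, 0, 0)})  # two packets arrive before one tick
buf.receive(2, {"ship_42": (1, 0, 0)})  # late arrival, silently dropped
print(buf.apply_on_tick())  # {'ship_42': (2, 0, 0)} -- only the newest applies
```

Which is the design trade-off in a nutshell: skipping updates keeps the client's serial work bounded, but it rules out cumulative data in those packets.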
On the server end, you can conceivably parallelise some of that processing to deal with the quadratic increase; no such luck on the client side. And again, the numbers start to look awful just with the most minute basics of data at an abysmally low tick-rate that wouldn't really qualify as real-time.
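On the parallelisation point: within one server tick, the pair checks for different slices of ships are independent, so they can be farmed out across workers. A hedged sketch of the idea (illustrative only; a real server would also use spatial partitioning to avoid most of the pairs entirely):

```python
# Sketch of splitting the per-tick O(n^2) pass across worker threads on
# the server: each worker does the pair checks for a slice of the ships.
# Illustrative only, not any particular engine's architecture.
from concurrent.futures import ThreadPoolExecutor

def process_slice(ships: list, start: int, end: int) -> int:
    """Do the pairwise checks for ships[start:end] against everyone."""
    checks = 0
    for i in range(start, end):
        for j in range(len(ships)):
            if i != j:
                checks += 1
    return checks

ships = list(range(1000))
WORKERS = 4
chunk = len(ships) // WORKERS
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    futures = [pool.submit(process_slice, ships, k * chunk, (k + 1) * chunk)
               for k in range(WORKERS)]
    total = sum(f.result() for f in futures)

print(total)  # 999000 -- same total work, but divisible across cores
```

The client gets no such escape hatch: it receives one serial stream of updates and has to apply them in order on one simulation timeline.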