Star Citizen Discussion Thread v12

The thing that's nuts about 'Static Server Meshing' as proposed is: It doesn't scale. Or at least, it doesn't scale well.

Like initially it opens up Pyro, but then they hit the player cap again. They can't really open up another system, and definitely not 100 systems, with that alone. (Because what happens if everybody goes to Pyro etc...)

So they need to do one of the following:
  A. Block access to any location that has hit the player cap. (So Jump Gates prevent access to solar systems, and/or Nine Tails-style quantum black zones prevent approach to system locations.)
  B. Give in and use instances for locations.
  C. Uncork the miracle that is 'dynamic server meshing'.

C ain't happening. It's either A or B.

Personally, I'd find instancing the best, imperfect, solution there. Getting locked out of a location just because it had hit a player cap would be fairly infuriating. (It's bad enough when you can't park at a station in Elite ;)). And forget having either dynamic or narrative events in a location. You'd just be queueing outside the fun zone instead ;)
 
That'd be a worst case scenario. But as he notes, networking engineers have to prepare for the worst case ;)
Usually we do that kind of planning BEFORE the project starts, so we can build the project around the "core loop" (which includes sending and receiving events), because it creates the whole structure for the project code. CiG only starting to think about it 10 years in means they are so out of their depth that they will just deliver the basic "static meshing", a.k.a. an old-school static cluster (as opposed to an elastic cloud), and be done with it for the next decade, as it's a huge task by itself: they'll have to rewrite most of the server code (again!). Whales had better be prepared for a lot of additional funding.

(edit) I'll expand a bit. Usually before the project starts, we try to determine how we'll do the messaging (i.e. communication between servers and clients), and there are a lot of important decisions there: message size and grouping; which parts we can afford to make "fire and forget" (and then replay if necessary) and which parts need to be "transactional" (keeping track of the message and waiting for an OK answer); what kind of latency we can afford (50 ms? 1 minute? 24 hours?); which kind of solution meets those parameters (e.g. different levels of caching, from none to everything, and how we manage data freshness); and then the infrastructure required to support that solution. Sometimes the answer is "none in the known universe", so we start the whole process over, adjusting parameters, maybe cooling down expectations from stakeholders... and the code itself has not started (apart from some proofs of concept and benchmarks, maybe).

A game has very strict latency limits, for example, so one has to design the system without transactions (which take way too much time) and around the strong possibility of lost messages. That means the state managed by the client has to stay coherent even if a message is skipped, which usually involves some kind of interpolation. Also, in the "managing expectations" part, we would consider having everything move along predictable paths across the interpolation span (say, the time between 3 messages, with one lost). That means LOWER accelerations for entities in the game world, so that spline interpolation stays reasonably accurate in any case. It also means tracking fewer entities if possible; I can't see why CiG went for "slow lasers", as they'll have to track each shot, for example...
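To make the interpolation bit concrete, here's a minimal Python sketch of the idea (the names, timings and 1-D position are my own invention, not anything from CIG's code): the client renders slightly in the past and interpolates between whichever updates actually arrived, so a single dropped message just widens the gap it interpolates across.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sample:
    t: float      # server timestamp of the update (seconds)
    pos: float    # 1-D position for brevity; real code would use a vector

class EntityInterpolator:
    """Client-side view of one entity, rebuilt purely from received updates.

    Updates are fire-and-forget: if one is dropped we never ask for it again.
    We render slightly behind "now" and interpolate between the two samples
    that bracket the render time, so a missing update just widens that gap.
    """

    def __init__(self, render_delay: float = 0.1):
        self.render_delay = render_delay   # how far behind real time we render
        self.samples: List[Sample] = []

    def on_update(self, sample: Sample) -> None:
        # Out-of-order or duplicate packets are simply ignored.
        if self.samples and sample.t <= self.samples[-1].t:
            return
        self.samples.append(sample)
        self.samples = self.samples[-8:]   # keep only a short history window

    def position_at(self, now: float) -> Optional[float]:
        t = now - self.render_delay
        if len(self.samples) < 2:
            return self.samples[-1].pos if self.samples else None
        # Find the two samples bracketing t (they may be further apart than
        # one tick if an update was lost along the way).
        for a, b in zip(self.samples, self.samples[1:]):
            if a.t <= t <= b.t:
                u = (t - a.t) / (b.t - a.t)
                return a.pos + u * (b.pos - a.pos)   # linear here; splines in practice
        # t is newer than our latest sample: extrapolate from the last two.
        a, b = self.samples[-2], self.samples[-1]
        v = (b.pos - a.pos) / (b.t - a.t)
        return b.pos + v * (t - b.t)

# Example: updates every 50 ms, the one at t=0.10 never arrives.
interp = EntityInterpolator()
for t, p in [(0.00, 0.0), (0.05, 1.0), (0.15, 3.0), (0.20, 4.0)]:
    interp.on_update(Sample(t, p))
print(interp.position_at(0.225))  # renders t=0.125, bridging the lost update
```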
 
The thing that's nuts about 'Static Server Meshing' as proposed is: It doesn't scale. Or at least, it doesn't scale well.

Like initially it opens up Pyro, but then they hit the player cap again. They can't really open up another system, and definitely not 100 systems, with that alone. (Because what happens if everybody goes to Pyro etc...)

So they need to do one of the following:
  A. Block access to any location that has hit the player cap. (So Jump Gates prevent access to solar systems, and/or Nine Tails-style quantum black zones prevent approach to system locations.)
  B. Give in and use instances for locations.
  C. Uncork the miracle that is 'dynamic server meshing'.

C ain't happening. It's either A or B.

Personally, I'd find instancing the best, imperfect, solution there. Getting locked out of a location just because it had hit a player cap would be fairly infuriating. (It's bad enough when you can't park at a station in Elite ;)). And forget having either dynamic or narrative events in a location. You'd just be queueing outside the fun zone instead ;)

There is another factor as well for CIG, costs.

At the moment costs can be kept as low as possible: say 50 people per server is the max, then you just cap out there. But once you start adding server meshing along with shards as described, you won't be able to hop between shards without logging out, so there will be groups of servers per shard. You might have 10 people on the Stanton server, 20 people on the Nyx server, and 20 on the Pyro server, all within the same shard. That's 3 times as many resources as before for 3 systems, and it will only grow the more systems they add. And last I read, they didn't sound too optimistic about increasing the shard-wide cap.
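To put rough numbers on that (all figures invented for illustration, nothing to do with CIG's actual caps or hosting prices), the scaling problem is just multiplication:

```python
# Hypothetical figures purely for illustration; CIG's real caps and prices are unknown.
players_online = 3000          # concurrent players to house
shard_cap = 50                 # players allowed in one shard
systems_per_shard = 3          # Stanton, Nyx, Pyro -> one node each under static meshing
cost_per_node_hour = 0.50      # made-up hourly price per game-server node

shards = -(-players_online // shard_cap)          # ceiling division: 60 shards
nodes_single_server = shards * 1                  # old model: one server per shard
nodes_static_mesh = shards * systems_per_shard    # static meshing: one node per system per shard

print(f"old model:      {nodes_single_server} nodes, "
      f"${nodes_single_server * cost_per_node_hour:.2f}/hour")
print(f"static meshing: {nodes_static_mesh} nodes, "
      f"${nodes_static_mesh * cost_per_node_hour:.2f}/hour")
```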
 
And last I read, they didn't sound too optimistic about increasing the shard-wide cap.
It's still limited by their pile-of-fecal-matter networking code, which they have to stick with since they can't hire any competent network engineer (they are offering a junior-level salary, and really low pay for a junior at that). This needs 15+ years of experience with high concurrent user counts, and that kind of wizardry comes at a price that's at least double (almost triple) what CiG are offering.
To put things in perspective, there are messaging solutions that can handle about 2 million clients with sub-20ms latency (for the message alone! add the processing on top! and that's push, not request-reply!) and that are cloud-native. But the whole system has to be designed around that. That's how Twitter and Instagram, for example, manage to stay afloat and still have really good performance.
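For anyone curious what "push, not request-reply" means in practice, here's a toy in-process publish/subscribe sketch (pure Python, all names made up). Real brokers at that scale are dedicated clustered services, not a dict in memory, but the shape is the same: publishers fire messages at a topic and never wait for an answer.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    """Tiny in-process pub/sub broker: push-only, no request-reply."""

    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Fire and forget: the publisher never blocks waiting on consumers.
        for handler in self._subs[topic]:
            handler(message)

broker = Broker()
broker.subscribe("zone.stanton.events", lambda m: print("client A saw:", m))
broker.subscribe("zone.stanton.events", lambda m: print("client B saw:", m))
broker.publish("zone.stanton.events", {"type": "ship_spawn", "id": 42})
```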
 
You lot do realise I have to translate all this complicated dev speak into farming terminology in my head, don't you?

My version of 'server meshing' is leaving the gate open between 2 fields so the cows can walk freely between them rather than me having to open and shut the gate 20 times a day :whistle:

 
Usually we do that kind of planning BEFORE the project starts, so we can build the project around the "core loop" (which includes sending and receiving events), because it creates the whole structure for the project code. CiG only starting to think about it 10 years in means they are so out of their depth that they will just deliver the basic "static meshing", a.k.a. an old-school static cluster (as opposed to an elastic cloud), and be done with it for the next decade, as it's a huge task by itself: they'll have to rewrite most of the server code (again!). Whales had better be prepared for a lot of additional funding.

(edit) I'll expand a bit. Usually before the project starts, we try to determine how we'll do the messaging (i.e. communication between servers and clients), and there are a lot of important decisions there: message size and grouping; which parts we can afford to make "fire and forget" (and then replay if necessary) and which parts need to be "transactional" (keeping track of the message and waiting for an OK answer); what kind of latency we can afford (50 ms? 1 minute? 24 hours?); which kind of solution meets those parameters (e.g. different levels of caching, from none to everything, and how we manage data freshness); and then the infrastructure required to support that solution. Sometimes the answer is "none in the known universe", so we start the whole process over, adjusting parameters, maybe cooling down expectations from stakeholders... and the code itself has not started (apart from some proofs of concept and benchmarks, maybe).

A game has very strict latency limits, for example, so one has to design the system without transactions (which take way too much time) and around the strong possibility of lost messages. That means the state managed by the client has to stay coherent even if a message is skipped, which usually involves some kind of interpolation. Also, in the "managing expectations" part, we would consider having everything move along predictable paths across the interpolation span (say, the time between 3 messages, with one lost). That means LOWER accelerations for entities in the game world, so that spline interpolation stays reasonably accurate in any case. It also means tracking fewer entities if possible; I can't see why CiG went for "slow lasers", as they'll have to track each shot, for example...

Great post, ta :)

Yeah I'm always wondering which gameplay areas may have to shift to accommodate any new networking scenario. And how clunky it must be to do it that way round. Ship speed & weapon type are the kind of simple design issue I can get my head around, with regard to sending/receiving. (And stuff like mission propagation, with regards to 'sharding' etc). But I always feel like there's a larger iceberg out there of stuff which hasn't been, and can't have been, tailored as yet. And will all have to be, or it will just hobble on in a particularly sub-optimal fashion.
 
There is another factor as well for CIG, costs.

At the moment costs can be kept as low as possible: say 50 people per server is the max, then you just cap out there. But once you start adding server meshing along with shards as described, you won't be able to hop between shards without logging out, so there will be groups of servers per shard. You might have 10 people on the Stanton server, 20 people on the Nyx server, and 20 on the Pyro server, all within the same shard. That's 3 times as many resources as before for 3 systems, and it will only grow the more systems they add. And last I read, they didn't sound too optimistic about increasing the shard-wide cap.

Yeah they hinted as much in the Q&A:

...it’s possible that large ships such as a Javelin could have their own dedicated server assigned to run the authoritative simulation for that ship and everything on it. However, we’re trying to avoid having inflexible rules about how entities get assigned to processing resources, so that might not always be the case. It comes down to efficiency in terms of both processing speed and server costs. If we had a hard rule that each Javelin and everything in it gets its own server, then it wouldn’t be very cost-efficient when a Javelin only has a handful of players on it.
...because this is a static mesh and everything is fixed in advance, having more server nodes per shard also increases running costs. But we need to start somewhere, so the plan for the first version of Static Server Meshing is to start with as few server nodes per shard as we can while still testing that the tech actually works.
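Reading between the lines, that "no inflexible rules" assignment decision might look something like the sketch below. To be clear, the threshold, the cap and the names are pure guesswork on my part, not anything CIG has shown; it's just to illustrate weighing player load against the cost of spinning up a node.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    name: str
    player_load: int = 0

@dataclass
class Container:
    """An entity container such as a capital ship and everything aboard it."""
    name: str
    players: int

def assign(container: Container, shared_nodes: List[Node],
           dedicated_threshold: int = 20, node_cap: int = 50) -> Node:
    """Cost-aware assignment instead of a hard 'every Javelin gets a server' rule.

    All thresholds are invented for illustration; the point is only that the
    decision weighs player load against the cost of spinning up a node.
    """
    if container.players >= dedicated_threshold:
        # Busy enough to justify its own node.
        return Node(f"dedicated:{container.name}", container.players)
    # Otherwise pack it onto the least-loaded shared node with room to spare.
    candidate = min(shared_nodes, key=lambda n: n.player_load)
    if candidate.player_load + container.players <= node_cap:
        candidate.player_load += container.players
        return candidate
    return Node(f"overflow:{container.name}", container.players)

shared = [Node("stanton-1", 30), Node("stanton-2", 12)]
print(assign(Container("Javelin-A", 3), shared).name)    # packed onto stanton-2
print(assign(Container("Javelin-B", 35), shared).name)   # gets its own node
```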

EDIT:

And of course there are the potential issues around their bandwidth use being very inefficient / costly etc.
 
It's still limited by their pile-of-fecal-matter networking code, which they have to stick with since they can't hire any competent network engineer (they are offering a junior-level salary, and really low pay for a junior at that). This needs 15+ years of experience with high concurrent user counts, and that kind of wizardry comes at a price that's at least double (almost triple) what CiG are offering.
To put things in perspective, there are messaging solutions that can handle about 2 million clients with sub-20ms latency (for the message alone! add the processing on top! and that's push, not request-reply!) and that are cloud-native. But the whole system has to be designed around that. That's how Twitter and Instagram, for example, manage to stay afloat and still have really good performance.
Yeah, even if salary were out of the equation, the main issue may basically come down to whether they can accept being told no.
 
Yeah they hinted as much in the Q&A:
Get your BDSSE Pass for a mere $20 and enjoy the perks!!! Like being allocated your own “private server” within a shard of your choosing; the server comes with a blade (in-game item) that you can plug into your Idris' blade cabinet to improve the odds of coming across other Idris owners!* Invite your friends to your “private server” and enjoy multiplayer galore

*conditions and other limitations may apply. BDSSE Pass is not refundable
 
You lot do realise I have to translate all this complicated dev speak into farming terminology in my head, don't you?

My version of 'server meshing' is leaving the gate open between 2 fields so the cows can walk freely between them rather than me having to open and shut the gate 20 times a day :whistle:


Ok, think of it like this. Think of network packets as being cows. To get as many cows through the farm gate as quickly as possible, ideally you want them as thin as possible. What CIG have done is feed those cows to the max, and now they're trying to shove them through the gate all at once.
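And to stretch the analogy into numbers (bandwidth budget and update sizes entirely made up), here's why fat cows are a problem: the gate only passes so many bytes per tick, so the fatter each update, the fewer of them get through.

```python
# Entirely invented figures, just to show the relationship.
bandwidth_per_client = 256_000      # bytes/second the server budgets per client
tick_rate = 30                      # simulation/network ticks per second
bytes_per_tick = bandwidth_per_client / tick_rate

for label, update_size in [("lean update (delta-compressed)", 24),
                           ("fat update (full entity state)", 400)]:
    per_tick = bytes_per_tick // update_size
    print(f"{label:32s}: ~{int(per_tick)} entity updates per tick")
```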
 
The thing that's nuts about 'Static Server Meshing' as proposed is: It doesn't scale. Or at least, it doesn't scale well.

Like initially it opens up Pyro, but then they hit the player cap again. They can't really open up another system, and definitely not 100 systems, with that alone. (Because what happens if everybody goes to Pyro etc...)

So they need to do one of the following:
  A. Block access to any location that has hit the player cap. (So Jump Gates prevent access to solar systems, and/or Nine Tails-style quantum black zones prevent approach to system locations.)
  B. Give in and use instances for locations.
  C. Uncork the miracle that is 'dynamic server meshing'.

C ain't happening. It's either A or B.

Personally, I'd find instancing the best, imperfect, solution there. Getting locked out of a location just because it had hit a player cap would be fairly infuriating. (It's bad enough when you can't park at a station in Elite ;)). And forget having either dynamic or narrative events in a location. You'd just be queueing outside the fun zone instead ;)
CIG already showed and talked about what they're going to do, during the CitCon presentation:

[slide from the CitizenCon 2951 Server Meshing presentation]


They've chosen your option B "give in and use instances for locations". CIG just decided to call instances "shards" instead, to avoid the fallout of angry backers crying that the game would now be instanced like almost every other MMO game.
 
CIG already showed and talked about what they're going to do, during the CitCon presentation:

[slide from the CitizenCon 2951 Server Meshing presentation]


They've chosen your option B "give in and use instances for locations". CIG just decided to call instances "shards" instead, to avoid the fallout of angry backers crying that the game would now be instanced like almost every other MMO game.

Nah, that's just the first of many compromises ;)

They've got another problem which they know full well could be solved by instancing. (The server overload if everyone goes to one location etc). It's the obvious solution to this stuff:

Without mechanics to prevent every single player going to the same location, a large mega shard will be very hard to achieve... For example, there could be a mechanic to temporarily close jump points to crowded locations, or create new layers for certain locations.

They just don't want to say it outright, for fear of scaring the horses. (And possibly the cows too ;))

EDIT:

Alternative to instancing locations:

The other option I haven't mentioned is just breaking a location down into ever smaller hosted regions. (I.e. a server per N km² or whatever. You'd get even more performance out of your server. But also: more issues with how to handle transitions between servers; increased likelihood of combat trying to happen across those boundaries; and comparatively greater server costs, potentially, as you can't mothball the server running an empty location, it has to stay up, etc.)
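In its crudest form, 'a server per N km²' is just snapping positions to a grid. A rough sketch (grid size and names invented by me), where the painful part is exactly the moment an entity crosses a cell boundary and authority has to hand over:

```python
from typing import Tuple

REGION_SIZE_KM = 100.0   # invented: each server owns a 100 km x 100 km square

def region_for(x_km: float, y_km: float) -> Tuple[int, int]:
    """Snap a world position to the grid cell (and thus server) that owns it."""
    return (int(x_km // REGION_SIZE_KM), int(y_km // REGION_SIZE_KM))

def server_name(cell: Tuple[int, int]) -> str:
    return f"region-{cell[0]}-{cell[1]}"

ship = (99.0, 40.0)
print(server_name(region_for(*ship)))   # region-0-0
ship = (101.0, 40.0)                    # drifted 2 km east, crossed a boundary
print(server_name(region_for(*ship)))   # region-1-0: authority must hand over
```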

Both that and instancing involve a loss of 'seamlessness' etc and other negatives. So maybe CIG will just go for the big 'physicalised' seam of in-game explanations for why you can't go places you want to. (Closed jump gate / message regarding quantum lock down etc). But I can only imagine that salt would start to build up pretty mightily at those walls. (Assuming there are significant numbers playing ;)). There's gonna come a point where 'just instance and have a working, accessible game' will start to look pretty damn tempting ;)
 
Nah, that's just the first of many compromises ;)

They've got another problem which they know full well could be solved by instancing. (The server overload if everyone goes to one location etc). It's the obvious solution to this stuff:



They just don't want to say it outright, for fear of scaring the horses. (And possibly the cows too ;))

EDIT:

Alternative to instancing locations:

The other option I haven't mentioned is just breaking a location down into ever smaller hosted regions. (I.e. a server per N km² or whatever. You'd get even more performance out of your server. But also: more issues with how to handle transitions between servers; increased likelihood of combat trying to happen across those boundaries; and comparatively greater server costs, potentially, as you can't mothball the server running an empty location, it has to stay up, etc.)

Both that and instancing involve a loss of 'seamlessness' etc and other negatives. So maybe CIG will just go for the big 'physicalised' seam of in-game explanations for why you can't go places you want to. (Closed jump gate / message regarding quantum lock down etc). But I can only imagine the salt would start to build up pretty mightily at those walls. (Assuming there are significant numbers playing ;)). There's gonna come a point where 'just instance and have a working, accessible game' will start to look pretty damn tempting ;)
Here is the complete yt-video: CitizenCon 2951: Server Meshing & The State Of Persistence
 

Yep cheers mate. Watched it all at the time ;)

(Summarised the follow-up Q&A too. And the view of an MMO dev.)

I'm pretty happy with my amateur assessment of where they're at:

They're boned ;)

At least as far as delivering everything they've claimed goes: A single shard, server-meshed system hosting 100s of solar systems, 1000s of players in single locations, and giant capital ship battles. They can't do all of those things in concert, or anything close to it. And they know it.

But it will be interesting to see which compromises, and which pre-sold gameplay features, they opt to support in the end...
 
You have made mutant cows which require a hectare of grass a day. You can now only fit two cows into every field. Why did you do this again?

(Thankfully you have tasked your son with inventing Grass 3.0. He says it should be ready next year…)
But according to Tony Z, there will be a live economy between grass, alfalfa, and molasses-sprayed feed. The fields will dynamically translate hectares to acres to the metric system of the future! You will be able to have two cows (or tucows, if you're dropping in from the 90s) see two other bovines in another field, unless they are on massive multibovine carriers - in which case those trailers may be instances unto themselves. Why would there even need to be instancing if players are building stone fences or using natural tree lines to partition events anyway? Besides, a Golden Guernsey vs. a standard heifer may seem like P2W, but it's really rather black and white according to your skill.

Have you fallen asleep yet? Then Tony Z is earning his paycheck.

Edit: I grew up on a dairy farm
 
But according to Tony Z, there will be a live economy between grass, alfalfa, and molasses-sprayed feed. The fields will dynamically translate hectares to acres to the metric system of the future! You will be able to have two cows (or tucows, if you're dropping in from the 90s) see two other bovines in another field, unless they are on massive multibovine carriers - in which case those trailers may be instances unto themselves. Why would there even need to be instancing if players are building stone fences or using natural tree lines to partition events anyway? Besides, a Golden Guernsey vs. a standard heifer may seem like P2W, but it's really rather black and white according to your skill.

Have you fallen asleep yet? Then Tony Z is earning his paycheck.

Edit: I grew up on a dairy farm

Poor old Tony Z. Will this be the year he gets to sow his seeds?

Or will it just be another year of farming symposiums :/
 
You lot do realise I have to translate all this complicated dev speak into farming terminology in my head, don't you?

My version of 'server meshing' is leaving the gate open between 2 fields so the cows can walk freely between them rather than me having to open and shut the gate 20 times a day :whistle:

More like dividing each cow field into multiple sub-fields that can each be managed by separate farmers, who coordinate by talking to each other with tin cans and string.
Or something, I mean it's not like CIG actually has any idea how to do it.
 