Given this example, I can see that there's a lot going on, but nowhere near as much as in my example
It may sound like less, but it involves a similar number of distinct reference frames:
- the player (and therefore the viewpoint for display) is on board the ship, which is rotating and moving within ...
- the station interior, which is rotating relative to ...
- the station instance, which is moving and rotating around ...
- the planet, which is moving through the star system
So if the player looks from their ship through the mailslot, their orientation relative to the background starfield - and therefore which bit of it they can see and which way up it is - is a combination of four separate reference frames.
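To make that concrete, here's a minimal sketch of the frame-stacking in Python, flattened to 2D to keep it short. Everything in it - the Frame class, the numbers, the chain of frames - is invented for illustration; it's the composition idea that matters, not the details.

```python
import math

class Frame:
    """A 2D reference frame: an offset and rotation relative to its parent.
    (Real games use 3D transforms/quaternions, but composition works the same.)"""
    def __init__(self, x, y, angle):
        self.x, self.y, self.angle = x, y, angle

    def to_parent(self, px, py, pangle):
        """Convert a pose expressed in this frame into the parent frame."""
        c, s = math.cos(self.angle), math.sin(self.angle)
        return (self.x + c * px - s * py,
                self.y + s * px + c * py,
                self.angle + pangle)

# Hypothetical chain: ship -> station interior -> station instance -> planet -> system
ship     = Frame(5.0, 0.0, math.radians(10))   # ship parked in the station interior
interior = Frame(0.0, 0.0, math.radians(30))   # interior rotating within the instance
instance = Frame(1e4, 2e4, math.radians(5))    # instance orbiting the planet
planet   = Frame(1e8, 3e8, math.radians(45))   # planet moving through the system

# Player's pose on the ship's deck, composed out through all four frames
x, y, a = 0.0, 2.0, 0.0
for frame in (ship, interior, instance, planet):
    x, y, a = frame.to_parent(x, y, a)
print(f"player in system frame: ({x:.1f}, {y:.1f}), facing {math.degrees(a):.1f} deg")
```

Each frame only knows its own offset and rotation relative to its parent; the player-to-starfield orientation falls out of chaining the four conversions.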
Your example has the motions happening at higher speed and less predictably, but I'll try to explain below why that doesn't matter.
What I'm hearing is that this nested approach somehow leads to a scenario where the coordinate changes of the free roaming avatars are essentially walled off compared to the coordinate changes of the ship moving at 55c, hence you stating that they are irrelevant. Am I understanding that correctly?
Yes.
You're walking about the deck of the ship at ship coordinates (0,0,2), and you move to coordinates (0,1,3)
The ship meanwhile moves from system coordinates [three extremely big numbers] to coordinates [three extremely big numbers]
Your final position (relative to the system) is then the "sum" of those two coordinates.
What a game developer could do instead (but very definitely shouldn't!) is say that you and the ship are both at slightly different [extremely big numbers]. Then, when you try to move within the ship, that gives you a very slightly different velocity to the ship, so your final coordinates are [extremely big numbers] and the ship's final coordinates are [extremely big numbers] ... and at that point, yes, the calculation errors introduced by imprecision probably lead to you taking a single step and being catapulted several hundred kilometres outside the ship.
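To see why, here's a toy demonstration using single-precision floats (numpy's float32), which makes the effect obvious at smaller magnitudes - doubles just push the same failure further out. The distances are made up:

```python
import numpy as np

STEP = np.float32(0.5)             # a half-metre step within the ship

# Nested approach: the step happens in ship-local coordinates, near zero
local = np.float32(2.0)
ship_in_system = np.float32(3e10)  # ship ~30 million km from the system origin
nested = float(ship_in_system) + float(local + STEP)

# Flat approach: the player's own coordinate carries the huge magnitude
flat = np.float32(3e10) + np.float32(2.0)  # player's absolute position
flat = flat + STEP                         # take the same half-metre step

print("nested:", nested)           # offset and step both preserved
print("flat:  ", float(flat))      # both swallowed by rounding
print("lost:  ", nested - float(flat), "metres")
```

At that magnitude the gap between adjacent representable float32 values is about two kilometres, so the half-metre step in the flat scheme simply vanishes; in the nested scheme the step happens near zero, where precision is plentiful, and only the final "sum" touches the big numbers.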
Which then raises the question: given the real-time synchrony required for everything to happen, how is that achieved? Is it calculated via stacked interrupts or something of that nature?
I have no idea how Frontier does it, so I can only say how I'd do it, of course ... but it shouldn't be all that difficult in principle. The game doesn't need to run in continuous time - it just needs to simulate quantised steps at least as fast as the display hardware can show them.
So a basic program flow could look like this (a rough sketch in code follows the list):
- determine objects relevant to the current instance, and what frame of reference "owns" those objects
- run their individual routines [1] to calculate their move for the next 1/60th of a second, relative to their owning frame of reference. Because the time step is so small, you can do this in series, one after the other, and the errors this introduces with collision detection or "I wouldn't have done that if I'd known you were going to do that first" situations should be so small as to be undetectable.
- usually there won't be any, but there might be a need to hand off an object between two different frames of reference; if so, calculate its current coordinates in the new reference, add it to that reference and delete it from the old one. (Say, when a player ship moves from "system space" to "planet space" in supercruise)
- determine the position and current frame of reference of the camera
- calculate every object's position relative to the camera frame of reference
- run the display routines to show the picture
- back to the top for another run
(There are plenty of refinements possible here and Frontier probably has done at least some of them)
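Here's that loop sketched in Python, stripped down to one dimension with invented names throughout - a sketch of the principle, not a claim about Frontier's actual code:

```python
from dataclasses import dataclass

DT = 1.0 / 60.0   # one quantised simulation step per display frame

@dataclass
class Obj:
    name: str
    pos: float    # position within the owning frame (1D here; 3D in reality)
    vel: float    # set each step by AI / orbit rule / controller input [1]

class Frame:
    """A reference frame that "owns" objects; its origin moves in system coords."""
    def __init__(self, name, origin, velocity):
        self.name, self.origin, self.velocity = name, origin, velocity
        self.objects = []

def step(frames, camera_frame):
    # 1-2. Each frame and each object moves within its own coordinates.
    for f in frames:
        f.origin += f.velocity * DT       # the frame's own routine (orbit etc.)
        for obj in f.objects:
            obj.pos += obj.vel * DT       # "was here, is now here"

    # 3. Hand-off (toy rule): an object straying >100 m from its frame's
    # origin has its coordinates converted and is re-owned by system space.
    system = frames[0]
    for f in frames[1:]:
        for obj in list(f.objects):
            if abs(obj.pos) > 100.0:
                obj.pos = (f.origin + obj.pos) - system.origin
                f.objects.remove(obj)
                system.objects.append(obj)

    # 4-6. Express everything relative to the camera's frame and "draw" it.
    for f in frames:
        for obj in f.objects:
            rel = (f.origin + obj.pos) - camera_frame.origin
            print(f"{obj.name}: {rel:+.3f} m from camera origin")

# Hypothetical setup: system space, plus a ship in supercruise at 55c
system = Frame("system", origin=0.0, velocity=0.0)
ship = Frame("ship", origin=1e9, velocity=55 * 299_792_458)
ship.objects.append(Obj("player", pos=2.0, vel=1.0))  # walking the deck at 1 m/s
for _ in range(3):
    step([system, ship], camera_frame=ship)
```

Note that the ship covering roughly a quarter of a million kilometres per step never touches the player's numbers: the player's camera-relative position just ticks along at 1m/s, because no object ever does arithmetic against the big system-level coordinates during its own move.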
The key point here is that it doesn't really matter how fast or unpredictably something is moving - it's the same "was here, is now here" calculation at every 1/60 second step. Whether something is moving at 1m/s on a predictable ballistic trajectory, or at 55c being controlled by a player trying to break an interdiction, it's still just adding two numbers together in response to input conditions. If the player turns a completely different way in the next frame ... well, the game adds on those different numbers and carries on.
[1] For an NPC, determined by their AI. For a planet or asteroid, probably determined by an extremely simple "AI" which tells it to rotate and move on its orbit. For a player, determined by active controller inputs. For objects "owned" by another computer - e.g. another player - determined by an interpolation routine that receives updates from that other computer and tries to smooth out and guess trajectories (so, rubberbanding if it gets it wrong - but rubberbanding only within the current reference frame for that object). The instance itself might well have an "AI" which tells it to orbit the nearby planet.
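For the "owned by another computer" case, that interpolation routine might look something like this sketch, where each network update carries a frame-local position and velocity; the smoothing rule and all names are invented:

```python
class RemoteObject:
    """An object owned by another computer, smoothed between network updates.
    All positions are in the object's owning frame, so any rubberbanding
    correction stays local to that frame."""
    def __init__(self):
        self.pos = 0.0        # displayed position (frame-local, 1D for brevity)
        self.net_pos = 0.0    # last position reported by the owning computer
        self.net_vel = 0.0    # last reported velocity
        self.net_age = 0.0    # seconds since that report arrived

    def on_network_update(self, pos, vel):
        self.net_pos, self.net_vel, self.net_age = pos, vel, 0.0

    def update(self, dt):
        self.net_age += dt
        # Dead-reckon where the owner probably is by now...
        guess = self.net_pos + self.net_vel * self.net_age
        # ...and ease the displayed position toward that guess. When a late
        # update proves the guess wrong, the visible snap (rubberbanding)
        # happens here - and only in frame-local coordinates.
        self.pos += (guess - self.pos) * min(1.0, 10.0 * dt)
```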
For the network inputs, you might have a separate thread just listening for those and adding them to the data set for each object as they're received. Then the next movement+display calculation done with that object has those network inputs ready and can make use of them. There's some incredible complexity in getting this to work properly and avoiding obvious visual artefacts (which is well outside my expertise on the details!) but that's true even if everything is taking place in a single frame of reference at relatively low speeds.
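As a sketch of that separate-thread idea (again with invented names, building on the RemoteObject above): a thread-safe queue makes a clean hand-over point, with the listener thread blocking on the network and the simulation loop draining the queue just before each affected object's update.

```python
import queue
import threading
import time

inbox = queue.Queue()      # one inbox per remotely-owned object, say

def network_listener(recv):
    """Runs on its own thread: blocks on the network, never on the simulation."""
    while True:
        inbox.put(recv())  # e.g. a (pos, vel) update from the owning computer

def apply_pending_updates(remote_obj):
    """Called from the simulation loop just before remote_obj.update(dt)."""
    while True:
        try:
            pos, vel = inbox.get_nowait()
        except queue.Empty:
            return
        remote_obj.on_network_update(pos, vel)

def fake_recv():           # stand-in for a real blocking network read
    time.sleep(0.1)
    return (5.0, 1.0)

threading.Thread(target=network_listener, args=(fake_recv,), daemon=True).start()
```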