Back during the alpha, Dr Ross said there was a "bug" in the occlusion system, and they turned it off.
The implication was that the game engine was brute-force rendering EVERYTHING.
I don't know who said it, but someone made the point that procedural generation gives you endless content, yet you still have to decide which triangles actually get rendered to the screen.
To date, every iteration of Odyssey has brought improvements to LODing and shader optimization.
And if you don't know what LOD is: Level of Detail.
When game assets are created, they come with duplicates built at varying triangle counts (or polygons, if you're old-fashioned).
When you have a game that renders everything from a footprint in the dust on a planet's surface to the whole Galaxy, things can get a little complex.
The issue then becomes: when to dial the graphics up and down.
As we can see in the picture, there's the "popping" effect of the node list of things LODing into view.
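To make that popping concrete, here's a minimal sketch of distance-based LOD selection. It's purely my illustration, not Frontier's code; the asset, names, and switch distances are invented. Every hard switch between duplicates is a potential pop:

```cpp
// Minimal sketch of distance-based LOD selection (invented asset and thresholds).
#include <array>
#include <cstdio>

struct LodMesh {
    const char* name;
    int         triangleCount;
    float       maxDistance;   // switch to the next (coarser) duplicate beyond this
};

// One asset, authored as several duplicates with decreasing triangle counts.
constexpr std::array<LodMesh, 4> kRockLods{{
    {"rock_LOD3", 20000, 50.0f},     // full detail, close up
    {"rock_LOD2", 5000, 200.0f},
    {"rock_LOD1", 1200, 1000.0f},
    {"rock_LOD0", 150, 1e9f},        // coarsest stand-in for everything beyond 1 km
}};

// Each hard switch between entries is exactly where a visible "pop" can happen.
const LodMesh& selectLod(float distanceToCamera) {
    for (const LodMesh& lod : kRockLods)
        if (distanceToCamera <= lod.maxDistance)
            return lod;
    return kRockLods.back();
}

int main() {
    const float distances[] = {10.0f, 150.0f, 5000.0f};
    for (float d : distances) {
        const LodMesh& lod = selectLod(d);
        std::printf("%6.0f m -> %s (%d triangles)\n", d, lod.name, lod.triangleCount);
    }
}
```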
Getting a seamless transition from background art to foreground 3D asset has been at the forefront of rendering problems for as long as we've had games.
Regardless of the technical solution, it doesn't matter whether it's an image-based sky-map, an "endless city" tessellation tech demo, the procedural landscapes of Minecraft, or the planets of Elite Dangerous.
If you want it "seamless", you need to know that there is a mountain on the horizon occluding the sun even when your back is turned to it, so you can "fake" the complexity of that render for the moment the player turns around to face the sun.
Problem 1)
So your renderer needs an omnipotent overview of everything, so it can decide what should be kept in memory, what gets rendered on screen, and at what level of detail.
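Conceptually, something like this (made-up structures and thresholds): a global pass over everything the scene knows about, deciding residency and detail from how big each thing would appear on screen:

```cpp
// Conceptual sketch only: decide, per object, whether it stays in memory and at what detail.
#include <algorithm>
#include <cstdio>
#include <vector>

enum class Residency { Unloaded, CoarseProxy, FullDetail };

struct SceneObject {
    const char* name;
    float       distance;        // metres from the camera
    float       boundingRadius;  // metres
};

// Crude "importance" metric: apparent size on screen.
float screenCoverage(const SceneObject& o) {
    return o.boundingRadius / std::max(o.distance, 1.0f);
}

Residency decideResidency(const SceneObject& o) {
    float c = screenCoverage(o);
    if (c > 0.05f)  return Residency::FullDetail;   // big on screen: full asset in memory
    if (c > 0.001f) return Residency::CoarseProxy;  // distant: keep only a proxy/occluder
    return Residency::Unloaded;                     // too small to matter right now
}

int main() {
    std::vector<SceneObject> scene{
        {"footprint", 2.0f, 0.3f}, {"settlement", 8000.0f, 120.0f}, {"mountain", 40000.0f, 1500.0f}};
    for (const auto& o : scene)
        std::printf("%-10s -> residency %d\n", o.name, static_cast<int>(decideResidency(o)));
}
```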
Problem 2)
However, games are MORE than just the graphics seen from the player's perspective.
Take AI, for example. If you have AI running around a settlement, those AI have paths to follow and locations to be in.
When you fly by, those agents need their own private LOD so they know where they can and cannot walk.
If we stuck to the player's LOD, then far away (LOD_0) the AI would have a BOX with no FLOORS or foliage; only when the player comes closer would the AI get access to LOD_4, and suddenly it has a panic attack as its path is obstructed by a tree that just popped into existence.
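In other words, the AI's walkability data has to be decoupled from the visual LOD. A toy sketch of that separation (invented names, nothing to do with Frontier's actual internals):

```cpp
// The AI's walkability data lives apart from the renderer's LOD, so an NPC
// far from the camera still "knows" about the tree that isn't being drawn yet.
#include <cstdio>
#include <functional>
#include <unordered_set>

struct Cell { int x, y; };
inline bool operator==(const Cell& a, const Cell& b) { return a.x == b.x && a.y == b.y; }

struct CellHash {
    std::size_t operator()(const Cell& c) const {
        unsigned long long key =
            (static_cast<unsigned long long>(static_cast<unsigned>(c.x)) << 32) |
            static_cast<unsigned>(c.y);
        return std::hash<unsigned long long>()(key);
    }
};

struct SettlementNav {
    // Populated from the full-detail layout at generation time,
    // independent of whatever mesh LOD the renderer happens to be showing.
    std::unordered_set<Cell, CellHash> blocked;   // trees, crates, walls...

    bool walkable(const Cell& c) const { return blocked.count(c) == 0; }
};

int main() {
    SettlementNav nav;
    nav.blocked.insert({4, 7});   // the tree exists for the AI...
    int visualLod = 0;            // ...even while the player only sees LOD_0 boxes
    std::printf("player sees LOD_%d, cell (4,7) walkable for AI: %s\n",
                visualLod, nav.walkable({4, 7}) ? "yes" : "no");
}
```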
Problem 3)
Assets are jigsawed together from component parts. The "generation" system must be "aware" of how to design settlements, settlement buildings, and station interiors, and it does that on "the fly". So when does "that" occur? In orbit? When you enter the system?
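One common answer, and this is a hypothetical sketch rather than how Frontier actually does it, is to make generation deterministic from a seed, so it doesn't matter when it runs; the same jigsaw of parts comes out in orbit, on system entry, or on final approach:

```cpp
// Hypothetical seeded generation: same seed in, same layout out, whenever it runs.
#include <cstdint>
#include <cstdio>
#include <vector>

// Cheap deterministic hash of the site's identity (not Frontier's scheme).
std::uint64_t siteSeed(std::uint64_t systemId, std::uint32_t bodyId, std::uint32_t siteIndex) {
    std::uint64_t h = systemId * 0x9E3779B97F4A7C15ull;
    h ^= (static_cast<std::uint64_t>(bodyId) << 32) | siteIndex;
    h ^= h >> 33; h *= 0xFF51AFD7ED558CCDull; h ^= h >> 33;
    return h;
}

// Picks which prefab goes in each building slot, purely from the seed.
std::vector<int> generateLayout(std::uint64_t seed, int buildingCount) {
    std::vector<int> prefabIds;
    for (int i = 0; i < buildingCount; ++i) {
        seed = seed * 6364136223846793005ull + 1442695040888963407ull;  // LCG step
        prefabIds.push_back(static_cast<int>(seed >> 59));              // 1 of 32 prefabs
    }
    return prefabIds;
}

int main() {
    auto inOrbit    = generateLayout(siteSeed(42, 3, 0), 5);  // generated "in orbit"
    auto onApproach = generateLayout(siteSeed(42, 3, 0), 5);  // regenerated "on approach"
    std::printf("same layout both times: %s\n", inOrbit == onApproach ? "yes" : "no");
}
```

Done that way, the "when" becomes a streaming and memory question rather than a correctness question.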
And if you have environmental (cube) mapping that's generated on the fly, and areas that are going to be "pressurized" sitting next to areas in vacuum, then you have to enclose them with boundaries.
And we've already seen a bounding / flood-fill error with the AI, where the pathfinding algorithm that lets the AI navigate around rocks was path-finding far beyond where it needed to (was it the whole planet, or just a silly amount outside the bounds of the settlement?).
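Which is why the bounding part matters so much. A toy version (mine, not theirs): clamp the flood fill to the settlement's bounds, or it will happily march off across the planet:

```cpp
// Flood fill over walkable cells, clamped to the settlement bounds.
#include <queue>
#include <set>
#include <utility>

using Cell = std::pair<int, int>;   // (x, y) on a walkability grid

std::set<Cell> floodFillWalkable(Cell start, const std::set<Cell>& blocked,
                                 int minX, int minY, int maxX, int maxY) {
    std::set<Cell> reached;
    std::queue<Cell> frontier;
    frontier.push(start);
    while (!frontier.empty()) {
        Cell c = frontier.front();
        frontier.pop();
        if (c.first < minX || c.first > maxX ||
            c.second < minY || c.second > maxY) continue;   // the crucial clamp
        if (blocked.count(c) || reached.count(c)) continue;
        reached.insert(c);
        frontier.push({c.first + 1, c.second});
        frontier.push({c.first - 1, c.second});
        frontier.push({c.first, c.second + 1});
        frontier.push({c.first, c.second - 1});
    }
    return reached;
}

int main() {
    std::set<Cell> blocked{{2, 2}, {2, 3}};                       // a couple of rocks
    auto area = floodFillWalkable({0, 0}, blocked, 0, 0, 9, 9);   // 10x10 settlement bounds
    return area.size() == 98 ? 0 : 1;   // 100 cells minus 2 rocks, and not a step further
}
```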
Another fix was to do with rendering the players in full detail in their ships as they flew over.
And my brain has frozen, as this is just the tip of the iceberg.
I'd love to know how the "optimizations" are going if the "occlusion" system is still "bugged",
and I'd love for Frontier to talk about the development process: how does this even work?
I'm hoping this rubber-duck debugging may help.
I've had years of web development with mad debates about "when" to background-load assets and how to sleight-of-hand the transitions (because packets can be lost on the way and couldn't be loaded on the fly, as internet speeds were not fast enough, so effects such as roll-overs and pop-ups needed to be pre-loaded before you needed them).
And I just can't shake from my mind the "bugs" I encountered that sucker-punched every developer on the project.
The standard system code for Enable / Disable is not the same as the standard system code for Visible / Invisible, yet semantically they can be "interpreted" by an (overworked and exhausted) developer as logically equivalent. One dev says something needs to be visible; the other dev makes it so, but via the wrong method.
The issue gets compounded when there is also a developer-created alternative to Enable / Disable and Visible / Invisible.
That developer code does a laundry list of whatever was needed at the "time", but not for every situation, and there is some under-the-hood switching of layers on/off that messes everything up for everyone else at a later date. But unless you (painfully) step through the code and read up on the properties, you might never know.
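A contrived example of that trap (all names invented, no particular engine):

```cpp
// "Active" and "visible" are different flags with different side effects,
// but they read as synonyms at 2am.
#include <cstdio>

struct Entity {
    bool active  = true;   // participates in updates, physics, AI ticks
    bool visible = true;   // gets submitted to the renderer

    void setActive(bool a)  { active = a; }           // engine-standard switch
    void setVisible(bool v) { visible = v; }          // engine-standard switch

    // A well-meant, developer-created "alternative" that quietly does more:
    void hide() { visible = false; active = false; }  // also stops updates!
};

int main() {
    Entity door;
    door.hide();             // dev A: "just make it invisible for the cutscene"
    door.setVisible(true);   // dev B later: "show the door again"
    // The door is drawn again, but it's still inactive: nothing collides with it.
    std::printf("visible=%d active=%d\n", door.visible, door.active);
}
```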
Another one was OS-specific naming of layers, filenames, and the scripts that found them: upper case vs lower case, incorrectly non-UTF-8-encoded strings or files, or just a plain old misunderstanding over a naming convention (a minus instead of an underscore).
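And the filename one, sketched with hypothetical paths: the same lookup "works" on a case-insensitive filesystem like NTFS and silently finds nothing on a case-sensitive one:

```cpp
// The asset on disk is "Textures/Rock_Albedo.png", but a script asks for the
// lowercase name. Windows shrugs and finds it; Linux silently does not.
#include <filesystem>
#include <iostream>

int main() {
    std::filesystem::path requested = "Textures/rock_albedo.png";

    // The missing texture only shows up much later, as a visual bug.
    std::cout << (std::filesystem::exists(requested)
                      ? "texture found\n"
                      : "texture missing on a case-sensitive filesystem\n");
}
```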
And when the problems all combine, assets get loaded in with the wrong bounding boxes enabled/disabled and are visible to the wrong parties at the wrong time, and every time you think you've "fixed" the issue, it comes back to haunt you, because testing against a moving codebase and moving assets is never easy.