Clickbait title, I know. Please don't take anything I say here as concrete fact. Please fact check me. I'm just trying to get more discussion on this issue and I'm hoping for people with real knowledge and experience to hop in and share some info. No, it's not an open letter, and I won't be pinging Sally, I promise.
Performance varies from patch to patch, and while things have largely improved thanks to asset optimization, occlusion culling, and the like, performance is still disproportionately poor, even for high-end machines. Some might say especially for high-end machines. The same can't be said for PCs at or below the minimum specs, which started with poor performance and have seemingly only improved. Meanwhile, PCs that have no issue running other new games at 2K or 4K, Ultra settings, at high refresh rates get disappointing results in Odyssey, which quite frankly should not be as tough to run as it is.
Why?
Utilization.
Let me share a comment that really interested me in this:
The current level of "pretty" (graphical fidelity) should perform way better than it does right now. It's just very poorly optimized.
I've run some experiments on where the problem lies, and there is a lot of thread waiting going on somewhere in the pipeline. It's not a case of the graphics being too heavy, because even on planet surfaces when the framerate drops, CPU and GPU usage actually go down. It's not a simple graphics overload, otherwise you'd see the GPU pegged at 100% utilization, but it actually goes down. What is probably happening is that one or more threads are taking too long or have too much piled onto them, but they're calculating results that other things need before they can begin working. So the rest of the threads, as well as the graphics rendering pipeline, get stuck in a "hurry up and wait" mode with nothing to actually do until that thread completes its tasks and hands the other threads the info they need to begin their calculations.
TLDR: The hangup doesn't seem to be the rendering work itself being too heavy, but rather that the info needed to start rendering takes too long to arrive.
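To picture what that "hurry up and wait" pattern looks like, here's a toy simulation I put together. This is my own sketch of the shape of the problem, not anything from Frontier's code: eight worker threads that all block on one slow "scene prep" thread.

```python
import threading
import time

# Toy model of the "hurry up and wait" pattern described above.
# One "scene prep" thread produces data every worker needs, so the
# workers (and, by extension, the GPU) sit idle until it finishes.
# The timings are invented; only the dependency structure matters.

scene_ready = threading.Event()

def scene_prep():
    # Pretend this thread is doing one big serial job (culling,
    # building draw lists, etc.) that everyone else waits on.
    time.sleep(0.050)          # 50 ms of "work" on a single thread
    scene_ready.set()

def worker(name):
    start = time.perf_counter()
    scene_ready.wait()         # blocked: zero useful utilization here
    waited = time.perf_counter() - start
    time.sleep(0.005)          # 5 ms of actual work once unblocked
    print(f"{name}: waited {waited*1000:.0f} ms, worked 5 ms")

workers = [threading.Thread(target=worker, args=(f"worker-{i}",)) for i in range(8)]
prep = threading.Thread(target=scene_prep)

t0 = time.perf_counter()
prep.start()
for t in workers:
    t.start()
for t in [prep, *workers]:
    t.join()
print(f"frame took {(time.perf_counter() - t0)*1000:.0f} ms total")
```

Eight workers, but the "frame" still takes about 55 ms, because everything queues behind the one slow thread. A utilization graph would show most of the CPU idle, which matches what the quote describes.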
Some background: CPUs have cores, and on chips with SMT (Hyper-Threading), each core can run two threads. These cores are capable of handling different tasks simultaneously. The CPU is responsible for many things: taking player input, NPCs, AI, animations, and passing instructions to the GPU. If the CPU is unable to send instructions fast enough, the GPU sits partly idle, meaning fewer frames get rendered.
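To put rough numbers on that relationship (these are made up for illustration, not measured from Odyssey), here's a quick back-of-envelope calculation:

```python
# Back-of-envelope frame pacing with invented numbers.
# If the GPU can't start until the CPU finishes submitting a frame,
# frame time is gated by the slower side, and a fast GPU can't
# compensate for a slow CPU submit.

cpu_submit_ms = 20.0   # hypothetical time the main thread needs per frame
gpu_render_ms = 4.0    # hypothetical time a high-end GPU needs per frame

# Ideal case: CPU and GPU work overlaps (pipelined),
# so frame time equals the slower stage.
pipelined_fps = 1000.0 / max(cpu_submit_ms, gpu_render_ms)

# Worst case: fully serialized, the GPU idles while the CPU prepares.
serialized_fps = 1000.0 / (cpu_submit_ms + gpu_render_ms)

gpu_utilization = gpu_render_ms / (cpu_submit_ms + gpu_render_ms)

print(f"pipelined:  {pipelined_fps:.0f} fps")
print(f"serialized: {serialized_fps:.0f} fps")
print(f"GPU busy only {gpu_utilization:.0%} of the frame when serialized")
```

With those made-up numbers, a GPU capable of 250 fps on its own delivers roughly 42 fps and spends most of each frame waiting, which lines up with the low GPU utilization people report.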
Again, why would this happen? Why would a modern high-end CPU be the bottleneck here and struggle to send out instructions fast enough?
Let me share another quote:
However, the game only has 1-2 primary game/render threads no matter how many worker threads it spawns, and these are where the bottleneck in EDO is.
So say, for example, you have an 8-core CPU with SMT. That's 16 threads. In a heavy scene, such as a settlement, there are many things to process, load, and unload: all the objects, lights, NPCs, AI, sounds, and animations, not to mention the usual background info that Elite already sends and receives. This is comparable to many other modern games, most of which run like you'd expect on a modern CPU. With Odyssey, we see something different.
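If that quote is right and 1-2 primary threads carry most of the frame, Amdahl's law says extra cores barely help. Here's a quick sketch; the 90% serial fraction is invented to show the shape of the curve, not a measurement of Odyssey:

```python
# Amdahl's law: speedup from N cores when a fraction `serial`
# of the frame's work can only run on one thread.
# The serial fraction below is hypothetical, chosen for illustration.

def speedup(n_cores: int, serial: float) -> float:
    parallel = 1.0 - serial
    return 1.0 / (serial + parallel / n_cores)

serial_fraction = 0.9   # hypothetical: most of the frame lives on 1-2 threads
for cores in (2, 4, 8, 16):
    print(f"{cores:>2} cores -> {speedup(cores, serial_fraction):.2f}x speedup")
```

With those numbers, going from 2 cores to 16 buys you about 5% more performance. That's how a 16-thread CPU can sit at low overall utilization and still be the bottleneck.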


This could explain why players with newer PCs aren't reporting the same performance gains that lower-end machines are. A higher-end PC could already brute-force the less optimized elements Odyssey had early on, whereas older machines are benefiting now that things like glass shaders, lighting, and shadows are becoming less performance-heavy.
So I understand why optimizations like these are being worked on, why we now have two different upscaling options, and why the next patch promises even more optimization and fixes. But I haven't seen the threading issue mentioned as much, and to me everything else looks like a band-aid fix. We seem to be cleaning our gutters while our foundation is cracked.
Is this just a nothingburger and I'm misunderstanding something? Is it worse than I've stated? I'd like to know.
TL;DR: The game may not be utilizing your CPU's resources fully. One or two overloaded main threads can leave the rest of the CPU, and the GPU, waiting, resulting in the poor performance you may be familiar with.