Low GPU load in Odyssey in complicated scenes

Greetings. I am experiencing low GPU load at settlements (around 40% according to Windows Task Manager), together with extremely low FPS. The CPU load is heavy (60-80%), but not 100%. Changing settings and lowering the resolution does not affect anything (I play at 1080p). Changing drivers does nothing. Using any version of FSR does nothing. I understand my PC specs are obsolete (i5 3350 / RX 570 / 16 GB), but as far as I can tell, the game simply does not use all the available resources. Does anyone have similar issues? Any ideas why this would happen? I have created an issue on the issue tracker: https://issues.frontierstore.net/issue-detail/42302
 
You're not alone by far. The working theory for cases like yours is that the CPU or maybe memory access is the bottleneck. The reason you wouldn't see the CPU pegged at 100% is that, over however many milliseconds it takes to build a frame, the CPU is 100% busy for a fraction of that, but then it is sitting idle waiting for data to move from main memory to the GPU, and again while the GPU does the rendering. Ideally of course the CPU could keep going and work ahead on other tasks, but that isn't always possible, and the current level of optimization may not take advantage of every chance to do so. But without special tools you won't be able to see those periods where the CPU (or one core or whatever) is saturated because standard resource monitors average over longer periods than the time taken to build a frame.
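As a rough sketch of what that averaging does (made-up numbers, nothing measured from the game): a core that is saturated for a few milliseconds of every frame and then sits waiting still shows up as modest average load at a 1-second polling interval.

```python
# Toy illustration (assumed, made-up numbers): a CPU thread that is 100% busy
# for the first 6 ms of every ~16.7 ms frame, then idles waiting on the
# GPU/memory. A task-manager-style monitor averaging over 1000 ms reports
# ~36% load, even though that thread is the bottleneck during every frame.

FRAME_TIME_MS = 16.7        # ~60 fps frame budget
BUSY_MS_PER_FRAME = 6.0     # assumed CPU time spent building each frame
POLL_INTERVAL_MS = 1000.0   # typical resource-monitor polling interval

frames_per_poll = POLL_INTERVAL_MS / FRAME_TIME_MS
busy_ms_per_poll = frames_per_poll * BUSY_MS_PER_FRAME

reported_load = busy_ms_per_poll / POLL_INTERVAL_MS * 100
print(f"Monitor reports ~{reported_load:.0f}% average CPU load")
print("Instantaneous load during the busy part of each frame: 100%")
```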
 
Given that this kind of CPU behaviour is not an issue in most other games, it comes down to being purely an optimization issue that even the world's fastest consumer CPU can't brute force through.
 
For me it is exactly the other way around: CPU load around 30-40%, GPU load at 100%.
i7 8700K and 5700 XT. And performance is abysmal too.

That's just Odyssey, and the fact that some intelligent people decided to stick with DX11 rather than switch to Vulkan.
 
I've observed some similar things on an i7 8700K and 2080 Ti. Neither the CPU nor the GPU is maxed out, hardly getting past 60% on either, but FPS is still lower than you would expect in a settlement and can fluctuate a lot depending on where I'm looking.

My guess is that there are threading issues that leave most of your CPU unused, and no amount of brute force can overcome that. I suspect that in settlements, for example, FPS is effectively locked to <= 60 by how the game is coded; you could overclock your GPU, CPU and memory and likely change almost nothing.
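As a back-of-the-envelope illustration (the millisecond figures are pure assumptions, nothing known about the engine): if a chunk of per-frame work can only run on one thread, the frame rate is capped near 60 no matter how many cores sit idle, and total CPU utilisation looks low at the same time.

```python
# Assumed numbers: 16 ms of per-frame work that only the main/render thread
# can do, plus 4 ms of work that does spread across cores. More cores barely
# move the frame rate, while the average utilisation across all cores drops.

serial_ms_per_frame = 16.0    # assumed un-parallelisable main-thread work
parallel_ms_per_frame = 4.0   # assumed work that scales with core count

for cores in (2, 4, 8, 16):
    frame_ms = serial_ms_per_frame + parallel_ms_per_frame / cores
    fps = 1000 / frame_ms
    total_util = (serial_ms_per_frame + parallel_ms_per_frame) / (cores * frame_ms) * 100
    print(f"{cores:>2} cores -> {fps:5.1f} fps, average CPU load ~{total_util:4.1f}%")
```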
 
A notable difference is that most other games don't tend to be generating the geometry for entire planets algorithmically while they're doing other things, which would no doubt use a fair bit of CPU alongside the GPU. Having said that, my main frame rate drops are in station hangars, which is one of the few places where you definitely don't need to be seeing a planet. So, actually, I've no idea!
 
Lately I've not noticed persistent FPS problems; mine tend to happen as I move around. The game seems to avoid prefetching things that are pretty close to me that I just happen not to have looked at yet. Once I've looked everywhere around the settlement, it's consistent. That seems an obvious area for improvement, as it's pretty damn likely I'm going to turn around while walking around a settlement. If you have the CPU RAM and GPU RAM, as well as CPU and GPU headroom, the game should be able to do some of this work before I turn around, so I don't drop to 20 fps and then back to ~50 after a few seconds.
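Something like the following is the kind of distance-based prefetching I mean (purely hypothetical names and numbers, nothing to do with how the engine actually works): queue nearby assets for streaming regardless of whether I'm currently looking at them.

```python
# Hypothetical sketch: load assets based on distance to the player rather than
# only on whether they are inside the current view, so turning around doesn't
# cause a stall while everything behind you streams in at once.
from dataclasses import dataclass
import math

@dataclass
class Asset:
    name: str
    x: float
    y: float
    loaded: bool = False

def prefetch_nearby(assets, player_pos, radius, budget_per_frame=2):
    """Load up to budget_per_frame unloaded assets within radius of the
    player, nearest first, regardless of view direction."""
    px, py = player_pos
    candidates = [a for a in assets
                  if not a.loaded and math.hypot(a.x - px, a.y - py) <= radius]
    candidates.sort(key=lambda a: math.hypot(a.x - px, a.y - py))
    for asset in candidates[:budget_per_frame]:
        asset.loaded = True          # stand-in for an async streaming request
        print(f"prefetching {asset.name}")

# Example: buildings behind the player get queued before they are looked at.
settlement = [Asset("hab_block", 10, -5), Asset("reactor", 40, 60),
              Asset("storage_shed", -8, 12)]
prefetch_nearby(settlement, player_pos=(0, 0), radius=50)
```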
 

Smoke and glass create framerate drops. Go inside the power building, stand in front of the reactor and look at it while it's being shut down.
 
Ah, I've not looked at that specifically. I tend to get shot well before then (but that's another set of questions)! Outside at settlements, I've mostly observed that once I've looked in most directions and walked around, I stop getting FPS drops into the 20s and it stays pretty stable at 50-60, but almost never gets above that (regardless of settings).
 

I did some testing in a highly repeatable area of the suit tutorial that tends to suffer from reduced GPU utilization, where the highest reported per-core CPU utilization at a 1000ms polling rate was also well below 100% (though a 100ms polling rate did reveal transient spikes to 100%).

Potentially relevant system specs:
Ryzen 7 5800X
RAM
SK Hynix P31 NVMe (game SSD)
Radeon RX 6800 XT @ ~2600MHz core, 2088MHz memory

I started with a fixed all-core CPU clock of 4.6GHz (where I noticed the apparent CPU limitation/reduced GPU load in this particular scene at 1440p ultra) and recorded the frame rate I saw at 400MHz intervals, down to 3GHz:

4.6GHz = ~109 fps
4.2GHz = ~105 fps
3.8GHz = ~98 fps
3.4GHz = ~93 fps
3.0GHz = ~85 fps

GPU utilization fell from the mid 90s percent at 4.6GHz to the 70s around 3.4GHz, before creeping back up because the GPU wasn't loaded heavily enough to boost all the way.

Despite reported per-core CPU load generally being well below 100% most of the time, there was clearly a real CPU limitation, though it wasn't linear: despite a 53% increase in CPU clock, there was only a 28% increase in frame rate. This correlates well with the CPU load being jittery... increasing CPU performance only shortens the time spent at full utilization, not the lower-utilization periods between the spikes.
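Plugging the numbers above into a simple "fixed cost plus clock-scaled work" model (an assumption about the shape of the curve on my part, not something measured) reproduces the non-linear scaling quite closely:

```python
# Frame rate vs CPU clock, taken from the figures listed above.
data = {4.6: 109, 4.2: 105, 3.8: 98, 3.4: 93, 3.0: 85}

clock_gain = 4.6 / 3.0 - 1          # ~0.53 -> 53% more clock
fps_gain = 109 / 85 - 1             # ~0.28 -> 28% more fps
print(f"clock +{clock_gain:.0%}, fps +{fps_gain:.0%}")

# Assumed model: frame time = fixed cost that doesn't scale with clock
# + work that scales inversely with clock. Solve using the 3.0 and 4.6 GHz points.
t_lo, t_hi = 1000 / 85, 1000 / 109               # frame times in ms
work = (t_lo - t_hi) / (1 / 3.0 - 1 / 4.6)       # clock-scaled portion (GHz*ms)
fixed = t_lo - work / 3.0                        # clock-independent portion (ms)

for ghz, fps in data.items():
    predicted = 1000 / (fixed + work / ghz)
    print(f"{ghz} GHz: measured {fps} fps, model {predicted:.0f} fps")
```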
 
Do you think the CPU load is jittery when you are just looking at things you've already looked at, or as you look at new things? I get the feeling they could be prepping resources in the background for things you might look at soon to make it a lot smoother (and therefore increase the average FPS, if not the max or a constant FPS).
 
A good parallel is No Man's Sky, as it generates planets the same way Elite does: algorithmically, via a seed. This micro-spiking phenomenon is not present in NMS (I own it and have tested it). I'm certain there are other algorithmically-generated games out there that could back this up as well.

Besides, if Horizons was able to generate its planets algorithmically without this micro-spiking of the CPU, there isn't really a justifiable reason Odyssey can't do so as well. Additionally, it isn't dynamically generating the planet from the seed while you are on it; that is generally done well before you ever get there (especially since you can view planets from the system map without even being in the same star system). Once you land, the terrain has been generated, and all it really needs to do after that is render and swap out LOD models/textures.
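As a minimal illustration of the seed idea (toy code, obviously not Frontier's actual generator): the same seed always reproduces the same heightmap, so terrain can be generated ahead of arrival rather than recomputed while you walk on it.

```python
# Seed-based procedural generation in miniature: a deterministic RNG means the
# same seed gives the same terrain every time, on any machine, at any moment.
import random

def heightmap_patch(seed: int, size: int = 4):
    rng = random.Random(seed)                 # deterministic per-seed RNG
    return [[round(rng.uniform(0, 100), 1) for _ in range(size)]
            for _ in range(size)]

assert heightmap_patch(1337) == heightmap_patch(1337)   # reproducible
print(heightmap_patch(1337)[0])
```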

I've also seen some claim that the poor performance is "normal" because the game is "rendering an entire planet at once," which is wrong, since any self-respecting developer knows that frustum and occlusion culling were designed specifically so that you don't have to render everything at once. If EDO is rendering the entire planet constantly, even when you aren't looking at it or can't see it, then the engine is broken.
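For anyone unfamiliar, frustum culling boils down to something like this (a bare-bones 2D sketch, nothing to do with the Cobra engine's actual implementation):

```python
# Bare-bones 2D view-frustum (really view-cone) culling: only objects whose
# direction from the camera falls inside the field of view get submitted for
# rendering, so "the whole planet" never needs to be drawn at once.
import math

def in_view(camera_pos, camera_dir_deg, fov_deg, point):
    """Return True if point lies within the camera's horizontal FOV."""
    dx, dy = point[0] - camera_pos[0], point[1] - camera_pos[1]
    angle_to_point = math.degrees(math.atan2(dy, dx))
    delta = (angle_to_point - camera_dir_deg + 180) % 360 - 180  # wrap to [-180, 180)
    return abs(delta) <= fov_deg / 2

objects = {"reactor": (10, 1), "hab_block": (-5, 8), "distant_ridge": (0, -50)}
camera = (0, 0)
visible = [name for name, pos in objects.items()
           if in_view(camera, camera_dir_deg=0, fov_deg=90, point=pos)]
print("submitted to renderer:", visible)   # only what's in front of the camera
```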
 
There'd be no way to generate an entire planet's geometry in one go -- that would be terabytes of data -- so it's calculating things at the level of detail you're currently looking at. If you look at the highest-resolution public-domain NASA height data for Earth, it's (from memory) 86400x43200 pixels. The UK is still a smudgy, indistinct thing at that level of detail, with Ben Nevis represented by a handful of pixels. And it's not just a case of adding more noise to make up the difference -- Odyssey has complex detail at small scales. I'm guessing this was done almost entirely at a shader level in Horizons, but there's now a bit more CPU thinking going on.
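Rough arithmetic on that (assuming 2 bytes per height sample, which is an assumption about storage, not a known figure):

```python
# Back-of-the-envelope sizes for pre-baking a whole Earth-sized planet,
# assuming 2 bytes per height sample (an assumption, not a known format).
EARTH_SURFACE_KM2 = 510_000_000          # ~5.1e8 km^2
BYTES_PER_SAMPLE = 2

# The NASA-style global heightmap mentioned above: 86400 x 43200 samples.
global_map_samples = 86_400 * 43_200
print(f"86400x43200 map: {global_map_samples * BYTES_PER_SAMPLE / 1e9:.1f} GB, "
      f"roughly {40_075 / 86_400 * 1000:.0f} m per sample at the equator")

# Pre-baking the entire planet at on-foot levels of detail instead:
for res_m in (10, 1):
    samples = EARTH_SURFACE_KM2 * 1_000_000 / (res_m ** 2)
    print(f"{res_m:>2} m grid for the whole planet: "
          f"{samples * BYTES_PER_SAMPLE / 1e12:,.0f} TB")
```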

I say all this with the possibility I'm very wrong!

But I don't think the planet stuff is causing the frame rate drops anyway!
 
In my experience, it's either particles or several light sources (light sources only sometimes).
Mining settlements are the doom of frame rates. Those ground-breaking and ear-shattering lasers? Particles. Everywhere. FPS go cry.
Settlements on fire sometimes too. My guess, if it's particle-related, is that it's the particles generated by the fires. Either that or the fires themselves (that's what you get for making a better fire than COD Ghosts on PC).
 
Yeah, I agree on that. Wherever the issue is, I don't think it has anything to do with the algorithmic planet generation, as framerates are usually plenty stable when I'm in the middle of nowhere on a planet. I can get 75fps there even at native resolution.

It's settlements that are the issue, and I can't quite figure out what exactly the CPU is doing there, considering there's not really all THAT much going on besides some basic AI pathing (assuming the settlement isn't on high alert), and you aren't covering all that much ground for the terrain generation to be taking up that many cycles. It's just... mystifying.
 