It's hard to spot because the load is highly bursty, distributed across several worker threads, and can rapidly alternate between logical cores within a given physical core on SMT-enabled parts. Monitoring software without a very rapid polling interval averages out the load and rounds off the peaks. The effect of manipulating clock speed on FPS is also not very linear unless the number of worker threads is significantly reduced, which obscures its detection via this method. There may be a dynamic element to this as well, which I mention below.
To get a better look at per-core CPU utilization:
- Use a third-party utility (like MSI Afterburner), set the polling interval to 100 ms or less, and have it graph the output so intermittent spikes can be identified (see the sketch after this list for a scripted equivalent).
- Either disable SMT/HT, or set the EliteDangerous64.exe process's affinity to every other logical core, so each physical core runs at most one game thread.
- Optionally, edit the game's AppConfiguration.xml and reduce the number of worker threads; setting this to "1" will prevent the game from loading and "2" will seriously harm performance, so "3" or "4" is about as low as one can go to concentrate load before inducing a new bottleneck.
- Even more optionally and uncertainly, disable "PerformanceScaling" (set it to "0") in the same file. I'm not entirely sure what this does, but it may be dynamically scaling the number of worker threads, which could further obscure things.
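If you'd rather script the polling and affinity steps than click through a GUI, here's a minimal sketch using Python's psutil (my choice of tool, not anything the game requires). It assumes logical cores 0/1, 2/3, etc. are SMT siblings, which is the usual Windows enumeration but worth verifying on your own system:

```python
import psutil

# Find the game process by name (EliteDangerous64.exe, per the list above).
game = next(p for p in psutil.process_iter(["name"])
            if p.info["name"] == "EliteDangerous64.exe")

# Pin it to every other logical core: one logical core per physical core,
# which approximates disabling SMT/HT for this process only, no BIOS trip.
game.cpu_affinity(list(range(0, psutil.cpu_count(), 2)))

# Poll per-core utilization at 100 ms. Coarser intervals average the
# bursty load away, which is exactly what hides the bottleneck.
while True:
    loads = psutil.cpu_percent(interval=0.1, percpu=True)
    spikes = [(core, pct) for core, pct in enumerate(loads) if pct > 90.0]
    if spikes:
        print(spikes)  # brief near-100% excursions are the telltale sign
```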
Previously I had hypothesized that a GPU-side rendering stall was causing both CPU and GPU utilization to be reported low. While I cannot rule out that possibility, on further investigation it seems more likely that low reported GPU load means the GPU is waiting on the CPU (or another bottleneck), even when per-core CPU utilization is also reported low.
Unfortunately, there seem to be rapidly diminishing returns when trying to overcome this limitation with a faster CPU. The game is not particularly well threaded, and loading tons of workers doesn't help performance. Likewise, whatever is going on has so much overhead and/or produces such intermittent peak loads that there are also rapidly diminishing returns from increased per-core performance. My standard test of knocking a GHz off core clocks and looking for a decrease in frame rate didn't work very well here: performance did drop, but nowhere near proportionally.
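To put numbers on "nowhere near proportionally" (the clocks below are illustrative, not my exact setup):

```python
# If the bottleneck were purely bound by core clock, FPS should scale
# with frequency. Example figures only; substitute your own clocks.
base_ghz, cut_ghz = 4.7, 3.7   # knocking a GHz off, per the test above
base_fps = 80                  # hypothetical CPU-limited baseline
clock_bound_fps = base_fps * (cut_ghz / base_ghz)
print(f"expected if purely clock-bound: {clock_bound_fps:.0f} FPS")  # ~63
# Observing something well above that (dropping only a few FPS) is what
# diminishing returns from per-core performance looks like in practice.
```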
With the same CPU at the same clocks, I see a CPU-limited frame rate of about 80 in an abandoned surface settlement, but ~240 on the surface away from settlements. It's not AI, it's not physics, and it's not anything actually being drawn by the GPU, because the GPU does seem to have surplus performance that can be spent in various ways without falling below the CPU-side limit.
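Converting those frame rates to frame times shows how much fixed CPU-side cost the settlement adds each frame:

```python
# Frame-time view of the 80 vs ~240 FPS figures above.
def ms_per_frame(fps: float) -> float:
    return 1000.0 / fps

settlement = ms_per_frame(80)     # 12.5 ms per frame
open_terrain = ms_per_frame(240)  # ~4.2 ms per frame
print(f"extra CPU-side work: {settlement - open_terrain:.1f} ms/frame")
# ~8.3 ms of additional CPU work every frame for an abandoned settlement,
# despite no AI, no physics of note, and a GPU with headroom to spare.
```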
Something is clearly wrong at the game engine level, but I'm not sure if the various phenomena reported are a cause or an effect. Culling issues, for example, could go either way.
Note that much of this assumes no overt bottlenecks elsewhere and a system with a relatively balanced CPU and GPU. If you are hitting a VRAM limit, or have an exceptionally weak CPU for your GPU or vice versa, you will probably notice much more significant improvements before diminishing returns kick in.