Alright, I was half-wrong; there does appear to be a CPU-side limitation in settlements...

It's hard to spot as it's highly bursty, distributed across several worker threads, and can rapidly alternate between logical cores within a given physical core on SMT-enabled parts. Monitoring software without a very rapid polling interval averages out the loads and rounds off the peaks. The effect of manipulating clock speed on FPS is also not very linear unless the number of worker threads is significantly reduced, which obscures its detection via this method. There may be a dynamic element to this as well, which I mention below.

To get a better look at per-core CPU utilization (a script-based sketch of the first two steps follows the list):

  • Use a third-party utility (like MSI AB), set the polling interval to 100ms or less, and have it graph the output so intermittent spikes can be identified.
  • Either disable SMT/HT, or force the EliteDangerous64.exe process to an affinity of every other logical core.
  • Optionally, edit the game's AppConfiguration.xml and reduce the number of worker threads; setting this to "1" will prevent the game from loading and "2" will seriously harm performance, so "3" or "4" is about as low as one can go to concentrate load before inducing a new bottleneck.
  • Even more optionally and uncertainly, disable "PerformanceScaling" (set it to "0") in the same file. I'm not entirely sure what this is doing, but it may be dynamically scaling number of worker threads, which could further obscure things.
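For those who'd rather script the first two steps, here's a rough sketch assuming the psutil package (the even/odd logical-core numbering is also an assumption; verify it against your own topology):

```python
# Rough sketch, assuming psutil. Pins EliteDangerous64.exe to every
# other logical core, then samples per-core load every 100 ms to catch
# the spikes that slower pollers average away.
import psutil

# Affinity: even-numbered logical cores, i.e. one per physical core
# on a typical SMT layout (verify against your own topology).
for proc in psutil.process_iter(['name']):
    if proc.info['name'] == 'EliteDangerous64.exe':
        proc.cpu_affinity(list(range(0, psutil.cpu_count(), 2)))

# Sampling: flag any core spiking past 90% within a 100 ms window.
while True:
    loads = psutil.cpu_percent(interval=0.1, percpu=True)
    spikes = [(core, pct) for core, pct in enumerate(loads) if pct > 90]
    if spikes:
        print(spikes)
```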

Previously I had hypothesized that it was a GPU-side rendering stall that was causing both CPU and GPU utilization to be reported low, and while I cannot rule out that possibility, on further investigation it seems more likely that low reported GPU load means the GPU is waiting on the CPU (or another bottleneck), even if per-core CPU utilization is also reported low.

Unfortunately, there seems to be rapidly diminishing returns for overcoming this limitation with a faster CPU. The game is not particularly well threaded, and loading tons of workers doesn't help performance. Likewise, whatever is going on has so much overhead and/or produces such intermittent peak loads that there are also rapidly diminishing returns with increased per-core performance. My standard test of knocking a GHz off core clocks and looking for a decrease in frame rate didn't work very well here...performance did drop, but nowhere near proportionally.
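One way to rationalize that result: if each frame is a fixed stall plus clock-bound work, cutting clocks only shrinks the second term. A toy model with made-up numbers:

```python
# Toy frame-time model; the numbers are made up for illustration.
stall_ms = 8.0          # fixed per-frame overhead that doesn't scale with clocks
work_at_1ghz_ms = 20.0  # clock-bound work, expressed as ms at 1 GHz

for clock_ghz in (4.0, 3.0):   # "knock a GHz off core clocks"
    frame_ms = stall_ms + work_at_1ghz_ms / clock_ghz
    print(f"{clock_ghz} GHz -> {1000 / frame_ms:.0f} fps")
# 4 GHz -> ~77 fps, 3 GHz -> ~68 fps: a 25% clock cut costs only ~11%
# of the frame rate, because the fixed stall term is clock-independent.
```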

Same CPU, same clocks, I have a CPU-limited frame rate of about 80 in an abandoned surface settlement, but ~240 on the surface away from settlements. It's not AI, it's not physics, and it's not anything actually being drawn by the GPU, because the GPU does actually seem to have surplus performance that can be spent in various ways without falling below the CPU-side limit.
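In frame-time terms, the arithmetic on those two numbers is stark:

```python
# Frame-time arithmetic on the numbers above.
ms_settlement = 1000 / 80     # 12.5 ms per frame in the settlement
ms_open_terrain = 1000 / 240  # ~4.2 ms per frame away from it
print(f"settlement adds ~{ms_settlement - ms_open_terrain:.1f} ms of CPU time per frame")
# ~8.3 ms -- two-thirds of the entire frame budget at 80 fps
```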

Something is clearly wrong at the game engine level, but I'm not sure if the various phenomena reported are a cause or an effect. Culling issues, for example, could go either way.

Note that much of this assumes no overt bottlenecks elsewhere and a system with a relatively balanced CPU and GPU. If you are hitting a VRAM limit, or have an exceptionally weak CPU for your GPU or vice versa, you will probably notice much more significant improvements before diminishing returns kick in.
 
Morbad, I have a feeling you have more fun trying to analyze why Odyssey performs poorly than actually playing the game.

:)

I wish I could contribute to your thread, but I am utterly clueless about computers.

P.S. I am playing Borderlands 3 right now and the difference in immersion and fps gameplay as compared to Odyssey is incredibly painful. The guns feel really good and even have stats; the AI are super fun and wacko. The deluxe version is on sale on Steam. I think you might like it.

[EDIT: and there is hardly any grinding! ]
 
It's hard to spot as it's highly bursty, distributed across several worker threads, and can rapidly alternate between logical cores within a given physical core on SMT-enabled parts. Monitoring software without a very rapid polling interval averages out the loads and rounds off the peaks. The effect of manipulating clock speed on FPS is also not very linear unless the number of worker threads is significantly reduced, which obscures its detection via this method. There may be a dynamic element to this as well, which I mention below.
…..
Note that much of this assumes no overt bottlenecks elsewhere and a system with a relatively balanced CPU and GPU. If you are hitting a VRAM limit, or have an exceptionally weak CPU for your GPU or vice versa, you will probably notice much more significant improvements before diminishing returns kick in.
The analysis of the graphics algo over on Reddit revealed that the client makes a lot of function calls and small transfers for each frame, which is a lot of overhead and not a happy time for the CPU.

For reference:
Odyssey renderer is broken - details : EliteDangerous (reddit.com)
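As a loose analogy for the cost of many small operations versus one big one (not the game's actual rendering path, just the general principle that analysis points at):

```python
# Loose analogy: move the same 64 MiB payload as thousands of tiny
# copies vs. one large copy. Per-call overhead dominates the former.
import time

total = 64 * 1024 * 1024
chunk = 4 * 1024
src, dst = bytearray(total), bytearray(total)

t0 = time.perf_counter()
for off in range(0, total, chunk):          # many small "transfers"
    dst[off:off + chunk] = src[off:off + chunk]
t1 = time.perf_counter()
dst[:] = src                                # one large transfer
t2 = time.perf_counter()
print(f"small copies: {t1 - t0:.4f} s   one copy: {t2 - t1:.4f} s")
```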
 
Yeah, it's not the CPU; on my i7 4770 only 35% is used. It seems like a memory leak to me, and/or un-interrupted threads. Like something keeps working in the background and its results are ignored. Such a thing can happen when you send a thread an interrupt signal but don't join it, so it only stops some time later, maybe.
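As a minimal illustration of that failure mode (not the game's actual code):

```python
# Illustration only: a worker signalled to stop, but never joined,
# keeps consuming CPU in the background until it re-checks the flag.
import threading

stop = threading.Event()

def worker():
    while not stop.is_set():
        sum(i * i for i in range(100_000))   # busy work, results ignored

t = threading.Thread(target=worker, daemon=True)
t.start()
stop.set()   # "interrupt" sent...
# ...but without t.join(), main races ahead while the worker may still
# be mid-iteration, doing work whose results nobody will read.
print("main moved on; worker exits whenever it next checks the flag")
```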
 
It's hard to spot as it's highly bursty, distributed across several worker threads, and can rapidly alternate between logical cores within a given physical core on SMT-enabled parts. Monitoring software without a very rapid polling interval averages out the loads and rounds off the peaks. The effect of manipulating clock speed on FPS is also not very linear unless the number of worker threads is significantly reduced, which obscures its detection via this method. There may be a dynamic element to this as well, which I mention below.
…..
Note that much of this assumes no overt bottlenecks elsewhere and a system with a relatively balanced CPU and GPU. If you are hitting a VRAM limit, or have an exceptionally weak CPU for your GPU or vice versa, you will probably notice much more significant improvements before diminishing returns kick in.
Interesting, and this brings up something that has been an issue on my system for a year or so due to audio production software I use on this machine.

I use Steinberg's Cubase 10.5 Pro as my primary DAW (digital audio workstation) along with a handful of other VSTs. Cubase creates a VERY sensitive signal pathway for DSP and does not like interruptions in its internal ASIO processing stack. Microsoft, Intel, and some others have messed with the OS-level multitasking (hyperthreading) scheduler, which has caused some chipset issues to do with power-management P-states, SpeedStep, etc.

Cubase experiences clicks and pops in its audio stream when CPU realtime power management is enabled along with hyperthreading. When those pops/clicks occur, actual system CPU usage is low/very low, with only 1 or 2 cores in use, yet the internal CPU meter inside Cubase shows MASSIVE SPIKES, up to 100%.

If I disable HT (all my virtual cores) in my system's BIOS, Cubase is happy and the audio glitches go away.
If I ENABLE those cores, but disable CPU power management features, the audio glitches also go away.
Steinberg claims that "multimedia threading limits imposed by Microsoft" are causing their software to be unable to create enough threads to handle timely processing.

Right now, my ED:O client is seeing good FPS, after the patch 2 nights ago. At the same time, I see others continuing to complain of low FPS inside buildings, even after the patch. I was confused about how that might be, but then I saw this and remembered -- my BIOS is modified to disable CPU power management, and my CPU is locked at 4ghz. Could this be why, right now, I have good performance (60FPS at 4k, ultra, ultraforcapture) while others are still complaining about this???
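One way to check whether clocks are actually the variable here is to log them during a session. A rough sketch, assuming the psutil package (frequency reporting can be coarse or unreliable on Windows):

```python
# Rough clock logger, assuming psutil. If the reported frequency dips
# below your locked clock during stutters, p-states are implicated.
import time
import psutil

while True:
    freq = psutil.cpu_freq()
    print(f"{time.strftime('%H:%M:%S')}  {freq.current:.0f} MHz")
    time.sleep(0.5)
```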
 
Although I'm not 100% certain, I believe there's a memory leak somewhere, possibly tied to the "UltraForCapture" graphics setting. I had it enabled in-game and after 2 to 3 hours of gaming, Elite starts dropping frames like a maniac. It jumps from the high 80s to the low 20s in an instant and does not stop stuttering, even in space. Reverting back to ultra seems to have fixed the issue and I no longer get insane frame drops and stuttering after long gaming sessions. It was incredibly strange; I checked my task manager to make sure it wasn't something else causing the frame drops and stuttering, but both my 3700X and RTX 2080 Ti were running as they should.
 

Hm. I don't do anything more complex than SQL and a bit of half-baked Python for fun, but my default approach (call it efficiency/laziness) to fixing poorly optimised scripts is to rewrite them from scratch. I'm not sure I want to contemplate F-Dev having to unpick all of these issues - especially not with another imposed deadline of August, or whenever the console release was meant to be.
 
At least one seemingly very experienced graphics engineer has analysed EDO's rendering methodology and found it extremely wanting (as someone above has linked to). And when I run EDO on Linux, using mangohud to display what the GPU and CPU are doing, I've found that my FPS tanks when the GPU is stuck at 100% utilization (running ED on Linux with DXVK seems to amplify any underlying problems with ED/EDH/EDO). So if what you are positing is true, the situation with EDO is a combination of GPU and CPU shenanigans.

If I were an FDEV developer, I'd have looked at the rendering analysis first and perhaps my primary objective would be to sort out the apparently b0rked rendering. That would then perhaps leave the CPU oddities you highlight in your OP.

I realise that running ED/EDH/EDO on Linux isn't officially supported, but as I said above, it generally tends to amplify Things Not Going Right in the game, and what I've found is that VRAM tends to fill up very quickly - especially when calling up the system map. It's almost as if rendering the EDO system map (whose presentation FDEV changed early in the Alphas because it was taking too long to come up) is doing something not according to plan: it quickly fills up VRAM, for whatever reason spikes my GPU (RTX 2070 Super) to 100%, and eventually causes my FPS to plunge to 12 FPS in a concourse.
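For anyone who wants to catch that system-map VRAM spike in the act, a rough logger (NVIDIA only; assumes nvidia-smi is on the PATH):

```python
# Rough VRAM/GPU logger (NVIDIA only; assumes nvidia-smi is on PATH).
# Useful for catching the system-map VRAM spike described above.
import subprocess
import time

while True:
    out = subprocess.run(
        ['nvidia-smi',
         '--query-gpu=memory.used,memory.total,utilization.gpu',
         '--format=csv,noheader'],
        capture_output=True, text=True).stdout.strip()
    print(time.strftime('%H:%M:%S'), out)
    time.sleep(1.0)
```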
 
Although I'm not 100% certain, I believe there's a memory leak somewhere, possibly tied to the "UltraForCapture" graphics setting. I had it enabled in-game and after 2 to 3 hours of gaming, Elite starts dropping frames like a maniac. It jumps from the high 80s to the low 20s in an instant and does not stop stuttering, even in space. Reverting back to ultra seems to have fixed the issue and I no longer get insane frame drops and stuttering after long gaming sessions. It was incredibly strange; I checked my task manager to make sure it wasn't something else causing the frame drops and stuttering, but both my 3700X and RTX 2080 Ti were running as they should.
Can confirm -- last night I played for 4-5 hours. Actually finished some missions, started to learn the on-foot mechanics a bit. FPS was locked at 60 the entire time - in station while in ships, and in buildings while on foot. Nice and smooth. I use the UltraForCapture setting, ultra, 4k.

Then, suddenly, I landed and got out at some random station and my FPS was back down to like 30-40, like it was before the patch. About 10 minutes later, it went away and I was back to 60 FPS. I use a 3080 FTW3 and an i7-7800X (6c/12t) @ 4GHz.
 
Can confirm -- last night I played for 4-5 hours. Actually finished some missions, started to learn the on-foot mechanics a bit. FPS was locked at 60 the entire time - in station while in ships, and in buildings while on foot. Nice and smooth. I use the UltraForCapture setting, ultra, 4k.

Then, suddenly, I landed and got out at some random station and my FPS was back down to like 30-40, like it was before the patch. About 10 minutes later, it went away and I was back to 60 FPS. I use a 3080 FTW3 and an i7-7800X (6c/12t) @ 4GHz.
Yes, I also play at 4k. Perhaps "UltraForCapture" fills the GPU's VRAM, causing it to spill into system memory and produce the frame drops and stutters.
 
Yeah, it's not the CPU; on my i7 4770 only 35% is used. It seems like a memory leak to me, and/or un-interrupted threads. Like something keeps working in the background and its results are ignored. Such a thing can happen when you send a thread an interrupt signal but don't join it, so it only stops some time later, maybe.
Average CPU utilization won't answer a lot of questions if you don't know how well (or poorly) the game threads or if the load is spiky.
35% CPU could mean silky smooth frames or it could mean an unplayable mess that's stuttering 1/3 of the time.
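To make that concrete, a toy demonstration of how the same bursty load reads at different polling intervals:

```python
# Toy demo: one second of CPU load in 10 ms slices, pegged 35% of the time.
samples = [100.0] * 35 + [0.0] * 65

avg_1s = sum(samples) / len(samples)
windows_100ms = [sum(samples[i:i + 10]) / 10 for i in range(0, 100, 10)]
print(f"1 s average: {avg_1s:.0f}%")        # 35% -- looks harmless
print(f"100 ms windows: {windows_100ms}")   # several windows pegged at 100%
```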
 
I have a suspicion that "take screenshot - F10" causes a memory leak... the thing is, all my games use F10 as a mouse-bound key; don't ask why, it's been that way for 10 years. So I set weapon switching for ships, and shields on/off on foot, to F10 too... and meanwhile Elite takes a screenshot. However, in Horizons I bound the screenshot folder to /dev/null (I play on Linux), which means every file is trashed as soon as Elite finishes saving it.
So in Horizons it was fine: the weapon switched, and the screenshot was taken and trashed.
Now, in Odyssey, once I start toggling shields on/off, the FPS drops happen. This may mean that screenshotting (pressing F10) is the cause.
 
Yes, I also play at 4k. Perhaps "UltraForCapture" fills the GPU's VRAM, causing it to spill into system memory and produce the frame drops and stutters.
I briefly thought this too, but I monitor VRAM usage alongside DRAM, and when I have big stutters and fps drops, my VRAM is only maybe 75% used. Sometimes less.
 
It's hard to spot as it's highly bursty, distributed across several worker threads, and can rapidly alternate between logical cores within a given physical core on SMT-enabled parts. Monitoring software without a very rapid polling interval averages out the loads and rounds off the peaks. The effect of manipulating clock speed on FPS is also not very linear unless the number of worker threads is significantly reduced, which obscures its detection via this method. There may be a dynamic element to this as well, which I mention below.
…..
Note that much of this assumes no overt bottlenecks elsewhere and a system with a relatively balanced CPU and GPU. If you are hitting a VRAM limit, or have an exceptionally weak CPU for your GPU or vice versa, you will probably notice much more significant improvements before diminishing returns kick in.
The analysis of the graphics algo over on Reddit revealed that the client makes a lot of function calls and small transfers for each frame, which is a lot of overhead and not a happy time for the CPU.
This would match my own experiences, although I did not do such in-depth testing. My CPU seems to be almost constantly at 100% on planetary bases and in the concourse, while my graphics card is barely ever running at full capacity.
 
Regarding UltraForCapture, I haven't been using it much as it causes serious terrain generation issues on my main system. It usually looks fine from the ship, but on foot it produces severe terrain morphing at close range.

The analysis of the graphics algo over on Reddit revealed that the client makes a lot of function calls and small transfers for each frame, which is a lot of overhead and not a happy time for the CPU.

I've seen it. I just had difficulty pinning down an actual CPU limitation on my systems and came to the conclusion that, whatever the game was doing, it was stalling out the GPU first, which does not always seem to be the case, even when reported CPU utilization is low.

For example:
Source: https://www.youtube.com/watch?v=4ABLP-Wg9QI


That's at the beginning of the suit tutorial mission, with GPU (and FB) utilization and all logical core utilization being reported at a 500ms interval. Neither any core nor the GPU appeared to be maxed out, and I wasn't certain where the bottleneck was (it sure wasn't I/O, and almost certainly wasn't memory). I had suspected that it was a GPU-side geometry/vertex bottleneck from all that unoccluded settlement nearby, causing the GPU to stall and report unused cycles while waiting on delays from that internal bottleneck.

However, looking at things more closely, I did find spikes of CPU utilization well in excess of what's reported here, which were simply hidden by an insufficiently fast polling rate and rapid changes to where some threads were being scheduled.

Yeah, it's not the CPU; on my i7 4770 only 35% is used. It seems like a memory leak to me, and/or un-interrupted threads. Like something keeps working in the background and its results are ignored. Such a thing can happen when you send a thread an interrupt signal but don't join it, so it only stops some time later, maybe.

The system I was able to isolate a CPU limitation on was reporting ~16% average utilization and didn't have any individual cores past 70-80%, according to Task Manager, Process Explorer, etc. The latter was a reporting anomaly caused by peaks being averaged out across the polling interval.

I still haven't seen any clear evidence for a memory leak. Surely the game uses more than it should, probably for the same reasons that are the source of other performance issues, but it doesn't seem to grow out of control, or have problems releasing memory that's been allocated.
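For anyone who wants to test the leak hypothesis directly, a rough sketch (assuming the psutil package) that logs the game's resident memory over a long session; a real leak shows as monotonic growth that never comes back down:

```python
# Rough leak check, assuming psutil: log the game's resident memory
# over a long session. A true leak shows as monotonic growth that
# never comes back down; heavy-but-stable usage does not.
import time
import psutil

# Raises StopIteration if the game isn't running.
game = next(p for p in psutil.process_iter(['name'])
            if p.info['name'] == 'EliteDangerous64.exe')

while True:
    rss_mib = game.memory_info().rss / (1024 ** 2)
    print(f"{time.strftime('%H:%M:%S')}  {rss_mib:.0f} MiB")
    time.sleep(30)
```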

This would match my own experiences, although I did not do such in-depth testing. My CPU seems to be almost constantly at 100% on planetary bases and in the concourse, while my graphics card is barely ever running at full capacity.

The CPU as a whole, or any logical core, being pegged near 100% is an obvious CPU limitation.

Right now, my ED:O client is seeing good FPS, after the patch 2 nights ago. At the same time, I see others continuing to complain of low FPS inside buildings, even after the patch. I was confused about how that might be, but then I saw this and remembered -- my BIOS is modified to disable CPU power management, and my CPU is locked at 4ghz. Could this be why, right now, I have good performance (60FPS at 4k, ultra, ultraforcapture) while others are still complaining about this???

I haven't seen either SMT or CPU power management features, at least where something wasn't clearly misconfigured, significantly impact performance of the game. I couldn't guess what's going on with those citing major performance drops with similarly high-end hardware.
 
I am playing Borderlands 3 right now and the difference in immersion and fps gameplay as compared to Odyssey is incredibly painful.

Whoever made the decision to bolt FPS onto ED is . . . not FD's MVP.

Pre-launch, when people like myself were predicting EDO was going to be a low-tier FPS with some minor bits dangling off so it could claim it's not just an FPS, we were met by scoffs that FD was holding back content.

"There's gonna be so much more content! It's not an FPS so it doesn't matter the FPS content is low tier, it doesn't have to compete as an FPS!"

Here we are, with the FPS that is EDO looking like it came straight out of the 90s and getting panned. But at least we have a handful of static cacti to stare at. Totally not just a poor FPS. 😉👌

And nobody saw it coming . . .
 
I haven't seen either SMT or CPU power management features, at least where something wasn't clearly misconfigured, significantly impact performance of the game. I couldn't guess what's going on with those citing major performance drops with similarly high-end hardware.
Actually, after a few more hours of playtime last night, I started seeing a larger pattern. The FPS issues do come back periodically on my system. It's 80% better than before, but they do indeed return eventually.

I mentioned the hyperthreading / scheduler / multimedia threading issue because, for a moment, it appeared that this one configuration change was setting my system apart from other systems still experiencing the lower FPS. It is indeed possible for interference to occur in sensitive or overloaded/inefficient rendering pipelines by way of poor OS and/or hardware CPU power-state implementations, GPU driver changes, WDDM changes via Windows updates, etc.
 
However, looking at things more closely, I did find spikes of CPU utilization well in excess of what's reported here, which were simply hidden by an insufficiently fast polling rate and rapid changes to where some threads were being scheduled.
I was pretty sure that CPU utilization was/is being under-reported on my system. I was using the built-in Windows performance monitor, so I wasn't convinced that it was polling fast enough.

After upgrading the CPU and motherboard, the GPU utilization went through the roof, yet I saw very little improvement in performance.

Battlefield was showing similar issues, but there the upgrade brought a nice rise in performance. I stopped looking for the reason because, until Frontier fixes it, there's nothing I can do about it.

Nice find, Morbad.
 