RTX 5090: still poor performance on planets and in stations, looking for settings assistance

Yes, I know, Yet Another VR Best Settings thread... pretty sure we should come up with a catchy acronym for these.

So I'll start off by apologising for creating another one of these, but I cannot for the life of me work out why my performance is still so poor. I've tried the various settings and recommendations from all of the numerous other threads on this subject, and I've tried multiple different ways of getting the image from my PC to my headset, with no real joy. There's obviously something I'm missing, or some setting I've changed that I shouldn't have; I just haven't been able to find it.

Specs:
Headset: Quest 3
GPU: RTX 5090
CPU: AMD Ryzen 9 5900X
RAM: 32GB of 3200 MT/s DDR4
Install SSD: 4TB PCIe 4.0 Lexar NM790
Windows 11 Pro 24H2
ASW off permanently as I hate it with a passion

I have tried Air Link over Wi-Fi and via cable, and I've also tried Virtual Desktop over Wi-Fi and wired via a USB-C Ethernet adapter.
I get the best framerates from Air Link using the Oculus runtimes only, but the image quality is way worse than using AV1 with Virtual Desktop.
I have also re-installed and reset all settings multiple times, and I've tried installing both the Frontier version and the Steam version with no difference in performance that I can see.

Oddly, I get worse performance with VDXR and OpenComposite with Virtual Desktop than with SteamVR with Virtual Desktop, totally contrary to most of the reports I've read on here.

In-game settings have all been adjusted in every way possible. With a 5090 I should be able to have everything on Ultra, but that just isn't possible unless I stay in space.

Current Settings:
Model Draw Distance: Maxed all the way
Texture Quality: High
Texture Filter Quality: Anisotropic X8
Directional Shadow Quality: Ultra (I've tried this with Medium with no difference)
Spot Shadow Quality: Ultra (I've tried this with Medium with no difference)
Bloom: Off because I'm not a monster...
Blur: As above
Anti-Aliasing: Off (set to X8 in the Nvidia Control Panel)
Supersampling: X1.0
Upscaling: Off/Normal
Ambient Occlusion: High
Environment: Ultra
FX Quality: High
Particle Effects: Ultra
Depth of Field: Off
Material Quality: Ultra
HMD Image Quality: x1.25
Galaxy Map: High
Terrain Quality: Ultra+ (I have also tried this as low as Medium with no effect)
Terrain LOD Blending: Ultra (I have also tried this as low as Medium with no effect)
Terrain Work: Maxed all the way to the right
Terrain Material Quality: Ultra (also tried Medium)
Terrain Sampler Quality: Ultra (also tried Medium)
Terrain Checkerboard Rendering: Off (also tried with On)
Jet Cone Quality: Ultra
Volumetric Effects Quality: Ultra (also tried Medium)

With native Oculus I get a solid 90 FPS in space with occasional hitches (not the stutter bug; this is occasional hitching for one or two frames when moving my HOTAS and my head at the same time). I get around 85-90 FPS on planets with more frequent hitches, plus the usual poor draw distances and pop-in, and don't even get me started on the terrain generation while driving around: it's very distracting to see rocks and hillocks grow in front of you... In stations even the Oculus runtime suffers, with around 75-85 FPS and very frequent hitches.
If it wasn't for the extremely soft (some may even say bitrate-starved) visuals with Oculus, I would run with this all of the time, as the performance is just better. I've tried setting the bitrate to 400 Mbit in the Oculus Debug Tool, but that just makes the image unusable; it turns it into a slideshow.

Virtual Desktop is where things get odd... First I will say that it makes no difference whether I am wired via an Ethernet-to-USB-C adapter or on Wi-Fi (dedicated 6 GHz router).
Virtual Desktop Settings:
AV1, Adaptive Quantization on; I have tried 2-pass encoding on and off with no difference.
OpenXR Runtime: Set to VDXR, but it still uses SteamVR. OpenComposite, with or without OpenXR Toolkit, gives much worse performance, to the tune of 10-15 FPS lower than SteamVR.
I have tried lowering the FOV tangents to 95%, again with no difference.

Virtual Desktop is also where I see some glaring differences from a lot of what I've seen reported on these forums. I've seen reports of some users setting the SteamVR render resolution to 400% and having a "perfect" experience; I've had to set mine to 88% to get it even playable (that's with VD set to Godlike, so it shows in the VD Performance Overlay as a 108% render resolution).

With Virtual Desktop I get similar results to the Oculus runtime in space, i.e. a pretty solid 90 FPS, but GPU utilisation is about 15% higher according to Task Manager. On planets I also get a similar 82-90 FPS, with occasional drops as low as 75 FPS, again with approximately 15% higher GPU usage. In stations the performance is way worse, sometimes dropping below 70 FPS and sustaining less than 80 FPS.

The weird thing is that at no time does the GPU go above 80% utilisation according to Task Manager and HWiNFO64, even in a station. CPU usage is also minimal, never going above 25% utilisation (I've also tried using Process Lasso to set the CPU core affinity to CCD1).
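For reference, the Process Lasso experiment boils down to something like this psutil sketch (the assumption that CCD1 is logical CPUs 12-23 on a 5900X is mine; check your own core topology before trusting it):

```python
# Rough psutil equivalent of the Process Lasso experiment: pin the game to
# one CCD. On a 5900X I'm *assuming* CCD1 = logical CPUs 12-23; verify this
# against your own topology first.
import psutil

CCD1 = list(range(12, 24))

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "EliteDangerous64.exe":
        proc.cpu_affinity(CCD1)   # restrict the game to a single CCD
        print(f"Pinned PID {proc.pid} to logical CPUs {CCD1}")
```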

I can't think of anything else to try, so any recommendations would be appreciated. I'm sure I've left out a load of stuff I've tried from this post, which is already way too long anyway.

TL;DR: I have a 5090 and the performance is still not great; any recommendations would be appreciated.
 
Have you done any benchmarking (3DMark) of your PC setup, to see whether your system performs above, below, or about where it should?
 
Have you done any benchmarking (3DMark) of your PC setup, to see whether your system performs above, below, or about where it should?
Yeah, I spent ages doing all sorts of tests and comparisons between my games with the 3080 that the 5090 replaced, I probably spent two days doing various in game and synthetic benchmarks prior to upgrading the card.

In everything except ED the 5090 performs as I would expect, more than doubling the performance of my 3080 in a lot of cases. The uplift with ED is barely 10-15% over the 3080 in VR.

In space the 5090 barely runs at 40% utilisation; that doubles on planets and in stations, but it's never anywhere near 100% utilised. HWiNFO64 shows it never throttles for performance, heat or power during gameplay, whereas in all my other games, where I don't set a cap on the FPS, it hits either power or utilisation limits.
 
Yeah, I spent ages doing all sorts of tests and comparisons between my games with the 3080 that the 5090 replaced, I probably spent two days doing various in game and synthetic benchmarks prior to upgrading the card.

In everything except ED the 5090 performs as I would expect, more than doubling the performance of my 3080 in a lot of cases. The uplift with ED is barely 10-15% over the 3080 in VR.

In space the 5090 barely runs at 40% utilisation; that doubles on planets and in stations, but it's never anywhere near 100% utilised. HWiNFO64 shows it never throttles for performance, heat or power during gameplay, whereas in all my other games, where I don't set a cap on the FPS, it hits either power or utilisation limits.
I'm not surprised the 5090 is nowhere near 100% utilisation; you are probably CPU/RAM bottlenecked, i.e. they can't keep up with the GPU.

From recollection my 4090 is never 100% utilised, and I'm running a Ryzen 9 7950X3D with 6000 MT/s DDR5 RAM.
 
I'm not surprised the 5090 is nowhere near 100% utilisation; you are probably CPU/RAM bottlenecked, i.e. they can't keep up with the GPU.

From recollection my 4090 is never 100% utilised, and I'm running a Ryzen 9 7950X3D with 6000 MT/s DDR5 RAM.
I was thinking about possibly the RAM not keeping up; the CPU, however, never gets over 25% utilised. I'm checking that on individual cores, not just the Task Manager reported usage, so it's not one or two cores being pinned; none of them go over 25%.

I'm interested to see the settings of these people who claim to run SteamVR's render resolution at 400% with no issues. I'm sure there's either something I'm missing in what they're saying, or the SteamVR setting is some sort of sliding scale; as I mentioned, I have to put mine at 88% to be able to run ED at all. I'd like to see exactly what their VD settings and driver settings are (I checked all the settings with Nvidia Profile Inspector to be sure I hadn't set something stupid), to compare apples to apples instead of the apples to onions I think I currently am.
 
I've never used Virtual Desktop, have very limited experience with VDXR or OpenComposite, and my ED VR experience is mostly limited to my old WMR headset with the SteamVR runtimes (installed without Steam). That said, I think I can draw some more general performance inferences here.

I was thinking about possibly the RAM not keeping up; the CPU, however, never gets over 25% utilised. I'm checking that on individual cores, not just the Task Manager reported usage, so it's not one or two cores being pinned; none of them go over 25%.

There is no way your system is not profoundly CPU and memory subsystem bottlenecked doing what you're doing. My 5800X (a single eight-core CCX, which is better for this game than two hex-core CCXes), with much faster memory than what you're using, produces literally half the frame rate, or less, of my more modern systems in many non-GPU-limited scenes (and if GPU utilization is not pegged in the upper-90s percentages, it's not the limiting factor). Aggregate CPU utilization says almost nothing, as most of the game runs on two threads. Its loads are also highly transient, so spikes that are reflective of a bottleneck get averaged out by anything that isn't tracking utilization with a very rapid polling rate. On top of all this, all these VR runtimes and applications have overhead of their own, which is surely not helping.
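If you want to see those transients without going full ETW, even a quick psutil loop polling far faster than Task Manager will surface some of them. A rough sketch (the 50 ms interval and the 90% threshold are arbitrary picks, not magic numbers):

```python
# Fast per-core sampler: catches single-core spikes that a 1-2 s polling
# window averages away. Sketch only; much below ~50 ms, psutil's own
# overhead starts to pollute the numbers.
import psutil

for _ in range(200):                    # ~10 seconds of samples
    per_core = psutil.cpu_percent(interval=0.05, percpu=True)
    if max(per_core) > 90:              # one core near saturation
        print(f"spike: {per_core}")
```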

Anti-Aliasing: Off (set to X8 in the Nvidia Control Panel)

This does nothing in most remotely modern titles, including Elite: Dangerous. That said, it's not the cause of any of your issues, nor are any of your graphics settings, in all probability.

Virtual Desktop is where things get odd... First I will say that it makes no difference if I am Wired via an ethernet to USB C adapter or via WiFi (6GHz dedicated Router).
Virtual Desktop Settings:
AV1, Adaptive Quantization On, I have tried with 2 Pass encoding on and off with no difference.
OpenXR Runtime: Set to VDXR but it still uses Steam VR, Opencomposite with or without OpenXR Toolkit gives much worse performance, to the tune of being 10-15FPS lower than SteamVR
I have tried lowering the FOV Tangents to 95% again with no difference.

I'm assuming the AV1 encoding is done via NVENC (your 5090) here?

Using nvidiaProfileInspector to disable the CUDA P2 state might help performance very slightly, but as you don't seem to be GPU bound, it's unlikely to be noticeable.

I've seen reports of some users setting the SteamVR render resolution to 400% and having a "perfect" experience; I've had to set mine to 88% to get it even playable (that's with VD set to Godlike, so it shows in the VD Performance Overlay as a 108% render resolution).
I'm interested to see the settings of these people who claim to run SteamVR's render resolution at 400% with no issues

Those reports are almost certainly in regard to headsets with much lower hardware resolution and/or heavy use of upscaling, or possibly just in the least demanding portions of the game (open space can easily produce ten times the frame rate of the most demanding on-foot scenes). 108% final render resolution on a Quest 3 is roughly 5k equivalent...which is about where an RTX 5090 can be expected to maintain 90 fps minimums at ultra settings in this game, in more graphically intensive scenes.
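As a rough back-of-the-envelope check (the per-eye figure is an assumed value for VD's Godlike preset on a Quest 3, not something I've pulled from VD's documentation, and I'm treating the overlay's 108% as an area scale):

```python
# Rough pixel-count comparison; the per-eye resolution is an assumption.
per_eye = 3072 * 3216              # assumed VD "Godlike" target on Quest 3
stereo = 2 * per_eye               # both eyes, per frame (~19.8 MP)
scaled = stereo * 1.08             # the overlay's 108%, taken as area scale
five_k = 5120 * 2880               # a "5K" flat frame (~14.7 MP)

print(f"stereo frame: {scaled / 1e6:.1f} MP, 5K flat frame: {five_k / 1e6:.1f} MP")
# ~21 MP vs ~14.7 MP: the same broad workload class as a 5K flat screen, and then some.
```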
 
For what it's worth, I am also running a 5900X with 32 GB @ 3600 and a 3080 Ti driving a Reverb G2 @ 90 Hz, and with my given settings I am mostly bottlenecked more by the CPU than by the GPU, at least if I believe my G2's indicator. Also, I don't have the additional transcoding overhead, as the G2 is driven over DisplayPort.

A stronger GPU might allow me to push my resolution further, but ED is only more or less mediocre on this CPU. I don't know if disabling one CCX, and possibly allowing the remaining one to draw more power, might help; I've not tried it yet.

I bought the 5900X more as a general-purpose workstation CPU than a gaming CPU, before the X3D came out. As a workstation CPU it is great; as a gaming CPU it's probably only so-so.
 
I do think the 5900X and DDR4 are limiting / not helping ED performance, particularly if you push game quality settings to Ultra/High and HMD image quality to x1.25.
I had similar issues with my 5950X.
But we also know ED is generally not the most optimised game; it seems only a high-end CPU/RAM & GPU combo has the horsepower to drive high frame rates, particularly with the demands of VR. Just having a high-end GPU doesn't always get the results you expect.
 
Thanks everyone for your insights, it's food for thought.

I bought the 5900X on launch day (didn't want to spend the extra £200ish for the 5950X), so the X3D wasn't even a twinkle in the eye then :D

I'm still at a loss as to how it's CPU bottlenecked when none of the cores gets over 25% (confirmed with HWiNFO64 left running after the game has loaded; the max any of the cores hit is 25%), but, as I think we all agree, the Cobra engine is extremely poorly optimised.

Unfortunately we've just been informed we're losing our jobs (thanks Wes Streeting you gormless <unprintable>) so there's no upgrade on the cards just yet...
 
Thanks everyone for your insights, it's food for thought.

I bought the 5900X on launch day (didn't want to spend the extra £200ish for the 5950X), so the X3D wasn't even a twinkle in the eye then :D

I'm still at a loss as to how it's CPU bottlenecked when none of the cores gets over 25% (confirmed with HWiNFO64 left running after the game has loaded; the max any of the cores hit is 25%), but, as I think we all agree, the Cobra engine is extremely poorly optimised.

Unfortunately we've just been informed we're losing our jobs (thanks Wes Streeting you gormless <unprintable>) so there's no upgrade on the cards just yet...
Same for me, the X3D didn't exist when I got my 5900X.

@Morbad is more the expert here, but make sure your monitoring frequency is high enough to actually catch load spikes. From what I have observed, CPU loads are very dynamic and do jump from core to core / thread to thread quite a bit on my 5900X. So you might not be seeing the full picture.
 
It's probably more to do with the AM4 CPU/RAM architecture throughput than with CPU load.
I think AM5 has the RAM memory controller on the RAM (stand to be corrected here) as well as being faster overall.
Probably worth considering an AM4 X3D CPU upgrade to keep costs down.
 
I know this is kind of apples to oranges, but just as a data point, and to demonstrate how easily Elite can be CPU limited on the 5900X, I've just done a little test. This is just in 2D at 1080p, but with my VR settings, so it's kind of easy on the GPU.

I am sitting in a Coriolis with vsync and the frame rate limiter disabled, and my frame rate is limited to around 150 to 165 fps; it fluctuates a bit. The GPU is boring itself at roughly 50 to 60% load, so it could probably push nearly twice the frames every second. Yet the CPU can't. My overall CPU load shows around 20%, and HWiNFO shows none of the cores fully loaded, yet it is clearly a CPU-limited scenario.

I've checked: the polling frequency in HWiNFO is set to 1000 ms, and I don't think I ever changed that, so that might be the default. This is way too long to give an accurate picture of the CPU load.

Now again, this is in 2D at 1080p. Now imagine the additional overhead of the VR pipeline, various API transitions, rendering two viewports at a higher resolution, and transcoding and streaming it to the Quest (is this done on the CPU or the GPU?). Taking all that into account, 150 fps CPU limited is scarily close to the 90 fps I would need in VR.
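Put into frame-time terms (trivial arithmetic; the conclusion about headroom is illustrative, since I haven't measured the actual VR overhead):

```python
# Frame-time budget arithmetic for the numbers above.
flat_fps = 150                        # CPU-limited 2D result in the Coriolis
vr_fps = 90                           # what I'd need in VR

cpu_frame_ms = 1000 / flat_fps        # ~6.7 ms of CPU time per frame now
vr_budget_ms = 1000 / vr_fps          # ~11.1 ms per frame at 90 fps

print(f"headroom: {vr_budget_ms - cpu_frame_ms:.1f} ms")
# ~4.4 ms left for the VR runtime, the second viewport, and the
# encode/stream path: uncomfortably tight, hence "scarily close".
```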

Edit to add: After leaving the station and jumping to supercruise, I am pushing almost 400 frames per second. /Edit

TL;DR: Elite is really tough on the CPU. Much more so in populated settlements - this is where it totally tanks for me.
 
I'm still at a loss as to how it's CPU bottlenecked when none of the cores gets over 25% (confirmed with HWiNFO64 left running after the game has loaded; the max any of the cores hit is 25%), but, as I think we all agree, the Cobra engine is extremely poorly optimised.

What polling rate is HWiNFO set to?

Utilization figures are the percentage of non-idle cycles over a given polling period. If the game uses every cycle it's given for ten milliseconds, then sits there waiting for memory accesses, or another thread to do what it needs to do, for another ten ms, this is going to show up as 50% utilization. More CPU performance would still reduce frame times/increase frame rate, though perhaps not as much as better memory subsystem performance.
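A toy illustration of that arithmetic, with made-up numbers, showing how a long polling window flattens a hard stall pattern into an innocuous-looking average:

```python
# Synthetic trace: the core alternates between 10 ms flat-out and 10 ms
# waiting. Over a 1000 ms window it reads as a harmless 50%, even though
# the game is stalled half the time.
slice_ms = 10
trace = [100, 0] * 50                  # 100 slices = 1000 ms

print(f"1000 ms window: {sum(trace) / len(trace):.0f}% average utilization")
print(f"{slice_ms} ms samples: min {min(trace)}%, max {max(trace)}%")
```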

The transient nature of EDO's CPU load significantly obfuscates its performance issues, and I ended up having to use ETW tools like Xperf/GPUView to get sufficiently fine-grained polling data to illustrate my experiences across different CPUs/platforms.

Source: https://www.youtube.com/watch?v=dA17aO4tv_s


There are clearly periods where logical cores are being fully loaded by EliteDangerous64.exe (12:00 or thereabouts is a good example), but they are so intermittent that average utilization over any significant polling period is low.

Some of my earlier investigations with more basic tools:

Same for me, the X3D didn't exist when I got my 5900X.

@Morbad is more the expert here, but make sure your monitoring frequency is high enough to actually catch load spikes. From what I have observed, CPU loads are very dynamic and do jump from core to core / thread to thread quite a bit on my 5900X. So you might not be seeing the full picture.

Since the Odyssey Alpha my primary system has gone through a 3900X, a 5800X, a 5800X3D, and a 9800X3D (with a few-month gap where I was playing semi-frequently on a 7800X3D system I built for my brother). Every CPU swap brought a major performance uplift. The game wastes a lot of time waiting for memory and waiting for worker threads, so more cache, better locality (fewer CCXes), and faster RAM all help a ton. Pure CPU core performance also helps, by minimizing the time the game actually spends doing things so it can go back to waiting around for something else sooner.

I think AM5 has the RAM memory controller on the RAM (stand to be corrected here) as well as being faster overall.

AM4 and AM5 platforms have similar memory subsystem topologies. AM5 simply has a newer, faster memory standard and a slightly faster Fabric interconnect. The memory controller is on-package on both (on the monolithic die of the APU parts, on the IOD of the standard/non-G parts).

I've checked: the polling frequency in HWiNFO is set to 1000 ms, and I don't think I ever changed that, so that might be the default. This is way too long to give an accurate picture of the CPU load.

HWiNFO defaults to 2000 ms and has too much polling overhead to be useful much below 250 ms or so. ED's spikes are very hard to pick out even at the lower value.

This is why I resorted to Xperf...it taps into low-level Windows event tracing, has almost no overhead (well, it burns through memory pretty quickly when logging many parameters), and offers sub-millisecond granularity.
 
I am almost toying with the idea of getting a 5800X3D for comparison. Amazon still has them in stock for less than 140 bucks, which almost sounds like a limited risk... but it's still 140 bucks. Not sure it's worth it for an experiment on a platform that is definitely end of life.
 
Oh, and for added transparency: I am running my 5900X with 95 W values for PBO2 right now, because it's just too darn hot at the moment. It's possible you could squeeze some better performance out of it by optimizing this.
 
I am almost toying with the idea of getting a 5800X3D for comparison. Amazon still has them in stock for less than 140 bucks, which almost sounds like a limited risk... but it's still 140 bucks. Not sure it's worth it for an experiment on a platform that is definitely end of life.
It's probably worth it; the 5800X3D has a good reputation.
Amazon's return policy could be exploited if you don't like it.
 
I am almost toying with the idea of getting a 5800X3D for comparison. Amazon still has them in stock for less than 140 bucks

Do you have a link for that? The 5800X3D is in limited supply and typically much more expensive at this point. ~140 sounds like a sale price on a non-X3D.

Oh, and for added transparency: I am running my 5900X with 95 W values for PBO2 right now, because it's just too darn hot at the moment. It's possible you could squeeze some better performance out of it by optimizing this.

There are a lot of little software optimizations (one's power plan/scheduling, tuning AppConfig.xml, enabling large memory pages, etc.) and some more impactful firmware ones (e.g. undervolting, tuning memory) that can be done almost irrespective of what hardware one is using, but the big gains will need new hardware, preferably a more modern platform.
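To give one concrete example from that list, the power plan switch is easy to script; a minimal sketch using the stock High Performance plan's well-known GUID (assuming the built-in plan hasn't been deleted from your install):

```python
# Minimal sketch: activate Windows' built-in "High performance" power plan.
# The GUID below is the stock one Windows ships with; if the plan has been
# deleted or customized, look up your own with `powercfg /list`.
import subprocess

HIGH_PERFORMANCE = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"
subprocess.run(["powercfg", "/setactive", HIGH_PERFORMANCE], check=True)
```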

Frontier really should have pushed for an engine overhaul for Odyssey, one that addressed the game's performance and IQ issues, allowed them to retain EDO on consoles, and provided more robust VR support. The game is built on a very shaky technical foundation.
 
Do you have a link for that? The 5800X3D is in limited supply and typically much more expensive at this point. ~140 sounds like a sale price on a non-X3D.
Yes, you're right. I got confused, good thing you made me check twice. Sorry. That experiment is off now :D.

There are a lot of little software optimizations (one's power plan/scheduling, tuning AppConfig.xml, enabling large memory pages, etc.) and some more impactful firmware ones (e.g. undervolting, tuning memory) that can be done almost irrespective of what hardware one is using, but the big gains will need new hardware, preferably a more modern platform.
Yeah you can fiddle 'til the cows come home. I usually don't do that, I'm more the "that'll do" type.

Frontier really should have pushed for an engine overhaul for Odyssey, one that addressed the game's performance and IQ issues, allowed them to retain EDO on consoles, and provided more robust VR support. The game is built on a very shaky technical foundation.
Who knows why they didn't.
 