RTX 5090 still poor performance on planets and in stations, looking for settings assistance

I know this is kind of apples to oranges, but just as a data point and to demonstrate how easy Elite can be CPU limited on the 5900X: I've just done a little test. This is just in 2D at 1080p, but with my VR settings, so it's kind of easy on the GPU.

I am sitting in a Coriolis with vsync and the frame rate limiter disabled, and my frame rate tops out at around 150 to 165 fps - it fluctuates a bit. The GPU is loafing along at roughly 50 to 60% load, so it could probably push nearly twice the frames every second. Yet the CPU can't. My overall CPU load shows around 20%, and HWiNFO shows none of the cores fully loaded, yet it is clearly a CPU-limited scenario.

I've checked, the polling frequency in HWiNFO is set to 1000ms, and I don't think I ever changed that, so that might be the default. This is way too long to give an accurate picture of the CPU load.
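To illustrate why a 1000ms window is too coarse: game CPU load comes in short per-frame bursts, and a long polling window averages them away. A minimal sketch of the arithmetic, with purely hypothetical numbers (the 4ms busy time per frame is an assumption for illustration, not a measurement from Elite):

```python
# Sketch: why a 1000 ms polling window hides bursty, frame-paced CPU load.
# All numbers here are illustrative assumptions, not measurements.

frame_time_ms = 11.1   # one frame at ~90 fps
busy_ms = 4.0          # hypothetical: a thread is busy 4 ms per frame
window_ms = 1000.0     # HWiNFO polling interval

frames_in_window = window_ms / frame_time_ms
busy_in_window = frames_in_window * busy_ms

# The monitor reports the *average* over the whole window...
reported_load = busy_in_window / window_ms * 100
print(f"reported core load over 1 s window: {reported_load:.0f}%")

# ...even though during each 4 ms burst the core is pegged at 100%,
# and any frame whose burst overruns the 11.1 ms budget is a stutter
# that the 1 s average can never show.
print("instantaneous load during each burst: 100%")
```

So a core can look ~36% busy in the monitor while still being the thing that decides whether each individual frame makes its deadline.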

Now again, this is in 2D at 1080p. Now imagine the additional overhead of the VR pipeline, various API transitions, rendering two viewports at a higher resolution, and transcoding and streaming it to the Quest (is this done on the CPU or the GPU?). Taking all that into account, 150 fps CPU-limited is scarily close to the 90 fps I would need in VR.

Edit to add: After leaving the station and jumping to supercruise, I am pushing almost 400 frames per second. /Edit

TL;DR: Elite is really tough on the CPU. Much more so in populated settlements - this is where it totally tanks for me.
Actually that kind of makes sense, there is obviously something in stations and on planets that is smashing the IO somewhere.
What polling rate is HWiNFO set to?
HWiNFO is set to 1000ms. Now that I think about it, that is rather high for this sort of monitoring; I'd never really thought about it before.
Since the Odyssey Alpha my primary system has gone through a 3900X, 5800X, 5800X3D, and a 9800X3D (with a few-month gap where I was playing semi-frequently on a 7800X3D system I built for my brother). Every CPU swap brought a major performance uplift. The game spends a lot of time waiting for memory and waiting for worker threads, so more cache, better locality (fewer CCXes), and faster RAM all help a ton. Pure CPU core performance also helps, by minimizing the time the game actually spends doing things so it can go back to waiting around for something else faster.
Now you're just making me want to spend money I really can't spare at the moment :D mind you, that sweet redundancy money may need to be put to good use :p

Every day is a learning day
Indeed, and I like the follow-on to that: any day you learn something is a day not wasted.
Frontier really should have pushed for an engine overhaul for Odyssey that would have addressed the game's performance and IQ issues, as well as allowed them to retain EDO on consoles, plus more robust VR support. The game is built on a very shaky technical foundation.
That's been fairly obvious for some time now. I'll be honest, I'd rather they reworked the whole game and made it a new paid DLC that brings the engine up to modern standards than keep adding stuff to an engine that's already creaking at the seams.
 
Actually that kind of makes sense, there is obviously something in stations and on planets that is smashing the IO somewhere.
I don't really know what it is planetside, but NPCs are really a pain. My framerate and experience practically immediately recover and improve the moment the last NPC at a settlement.... disappears.
 
I'm still at a loss to say how it's CPU bottlenecked when none of the cores gets over 25% (confirmed with HWiNFO64 running after the game has loaded; the max any core hits is 25%), but as I think we all agree, the Cobra engine is extremely poorly optimised.
The simplest way to answer the question of "how can my CPU be the bottleneck when no cores are showing as maxed out?" is probably to ask you to think about the case where an app has just a single thread, which is running constantly and hops from one core to another at fairly high frequency (faster than the monitoring software window, see @Morbad's posts above). On a 4-core machine (without HT), each individual core would show as being only 25% busy (the CPU as a whole would of course also show as only 25% busy), and yet the speed of that thread is absolutely bottlenecked by the CPU (by definition, because it's running constantly).
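The migrating-thread scenario above can be sketched numerically. This toy simulation (slice length and core count are illustrative assumptions, not real scheduler values) runs one always-busy thread round-robin across 4 cores and shows the utilisation a monitor would report:

```python
# Sketch: one thread that is busy 100% of the time, migrating round-robin
# across 4 cores every scheduler slice. Numbers are illustrative only.

n_cores = 4
slice_ms = 1       # hypothetical scheduler time slice
total_ms = 1000    # one monitoring window

core_busy = [0] * n_cores
for t in range(0, total_ms, slice_ms):
    core = (t // slice_ms) % n_cores   # the thread hops to the next core
    core_busy[core] += slice_ms

per_core_load = [busy / total_ms * 100 for busy in core_busy]
overall_load = sum(core_busy) / (n_cores * total_ms) * 100

print("per-core load:", per_core_load)      # each core shows ~25%
print(f"overall CPU load: {overall_load:.0f}%")
# Yet the thread itself never waited once: it is 100% CPU-bound.
```

Every per-core reading comes out at 25%, and so does the overall figure, even though the single thread was CPU-bound for the entire window.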
The transient nature of EDO's CPU load significantly obfuscates its performance issues, and I ended up having to use ETW tools like Xperf/GPUView to get sufficiently fine-grained polling data to illustrate my experiences across different CPUs/platforms.
I've barely ever touched the ETW stuff - fairly complicated to say the least :) Would you recommend it for this kind of investigation? I seem to recall I ran out of steam last time while only part-way up the learning curve, lol.

My own investigations into gaming bottlenecks (ED and others) have generally been clearly answered by one of the "more basic tools" :) you mentioned - Process Explorer, and checking the thread utilisations. Simply put, if any thread consistently has utilisation at or above 85-90% of the maximum it can reach (e.g. 12.5% is the max for an 8-core CPU, so anything from the high 10% range upwards), then I'm inclined to infer that the thread is likely CPU-bottlenecked or on the verge of being so.
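That heuristic is simple enough to express as a few lines of code. A sketch, where the thread names and utilisation figures are made-up examples (not real Elite Dangerous threads), and the 85% threshold is the rule of thumb from above:

```python
# Sketch of the heuristic above: flag any thread whose whole-CPU
# utilisation sits at >= ~85% of the single-core ceiling (100% / cores).
# Thread names and sample figures are hypothetical.

def near_core_ceiling(thread_pct, n_cores, threshold=0.85):
    """True if a thread's whole-CPU % is close to one core's share."""
    ceiling = 100.0 / n_cores        # e.g. 12.5% on an 8-core CPU
    return thread_pct >= threshold * ceiling

samples = {"MainThread": 11.8, "RenderThread": 7.2, "Worker_3": 2.1}
n_cores = 8

for name, pct in samples.items():
    verdict = "likely CPU-bound" if near_core_ceiling(pct, n_cores) else "ok"
    print(f"{name}: {pct:.1f}% -> {verdict}")
```

With these example numbers only the 11.8% thread gets flagged: it is sitting just under the 12.5% ceiling one core can provide, which is exactly the pattern Process Explorer's per-thread view reveals and whole-CPU graphs hide.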
 
I don't really know what it is planetside, but NPCs are really a pain. My framerate and experience practically immediately recovers and improves the moment the last NPC at a settlement.... disappears.

It's not as extreme on faster setups, but the moment a CZ battle is won, frame time spikes disappear and frame rates go up modestly.

This is a test I recorded almost exactly two years ago on a 7800X3D:
Source: https://youtu.be/QgWNOU5Kusg?t=560


I've barely ever touched the ETW stuff - fairly complicated to say the least Would you recommend it for this kind of investigation? I seem to recall I ran out of steam last time while only part-way up the learning curve, lol.

My own investigations to gaming bottlenecks (ED and others) have generally been clearly answered by one of the "more basic tools" you mentioned - Process Explorer, and checking the thread utilisations. Simply put, if any thread consistently has utilisation at or above 85-90% of the maximum it can reach (e.g. 12.5% is the max for an 8-core CPU, so anything from the high 10% upwards) then I'm inclined to infer that the thread is likely CPU-bottlenecked or on the verge of being so.

In general, basic process of elimination will suffice. If the GPU isn't fully loaded and isn't running out of VRAM, the bottleneck is probably somewhere else. 'Somewhere else', in a game that has negligible disk or network I/O, means the CPU or memory subsystem. Separating a CPU bottleneck from a memory bottleneck is harder, but not seeing any core approach full load strongly suggests a memory bottleneck (without ruling out a CPU one).
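The elimination process above can be condensed into a tiny decision helper. The thresholds here are rough rules of thumb of my own choosing, not measured cutoffs, and the function is only a sketch of the reasoning, not a diagnostic tool:

```python
# Sketch of the process-of-elimination logic above.
# Thresholds are rough, assumed rules of thumb.

def likely_bottleneck(gpu_load_pct, vram_used_gb, vram_total_gb,
                      max_core_load_pct):
    """Classify the probable bottleneck from coarse monitoring numbers."""
    if gpu_load_pct >= 95:
        return "GPU"
    if vram_used_gb >= 0.95 * vram_total_gb:
        return "VRAM"
    # GPU has headroom and VRAM fits: in a game with negligible disk or
    # network I/O, that leaves the CPU or the memory subsystem.
    if max_core_load_pct >= 90:
        return "CPU core"
    # No core near full load strongly suggests memory/cache latency,
    # without ruling out a migrating CPU-bound thread.
    return "memory subsystem (or hidden CPU)"

# Hypothetical readings: GPU at 80%, 14.5 of 24 GB VRAM, max core 25%.
print(likely_bottleneck(80, 14.5, 24, 25))
```

With the example readings it lands on the memory-subsystem case, which matches the situation described in this thread: GPU with headroom, no core maxed, yet the frame rate is capped.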

Confirmation can be had via more detailed tools, including, but not limited to, ETW analysis tools like Xperf from the Windows SDK, NVIDIA Nsight, and/or AMD's RGA.

Regardless, I would strongly recommend including CapFrameX in one's 'basic' tools.
 
I would second that your CPU is not helping: I am on an RTX 4080S, Ryzen 7 9800X3D, and 64GB RAM, and I can run higher supersampling and even higher HMD image quality.
 
Steam resolution 4312x5104 (100%), in-game SS 1.0, HMD 1.0
Pimax OpenXR @ 120Hz
AM5 7800X3D CPU with a 5080 and 64GB DDR5 @ 6000MT/s
fpsVR, which floats right under my mouth with the headset on, shows my fps, which in space is 120.
As I drop onto a planet it's 45 to 60fps. In space stations roughly the same.
But it's silky smooth.
My CPU load is approx. 20 to 30%.
GPU load, apart from open space, is 90%+.
In-game settings almost the same as the OP's, except directional shadows is on High and AA is SMAA.
Now here's the weird bit. In space, VRAM usage is 9 to 11GB. In planetary areas or space stations it spikes as high as 14.3 to 15+GB. Net result: jitter for about a minute, just once per session; after that the VRAM doesn't spike anymore and there's no more jitter.
Occasionally it crashes to desktop which I think is temperature related.
All this information is right in front of me, so it's accurate. Unless fpsVR isn't reporting it right?
I also have HDR and autoHDR on. Monitor set to 144hz.
I also read something about not setting the power plan to high performance, so that's on default.
As for the foveated rendering yup that's all on. Smooth off.
Hope this shines a light!
Just my opinion but I think the game is still an unoptimised mess, especially in VR.

Oh I forgot the cores!
16 cores all working, but the ones on the left in the fpsVR depiction are always higher than the ones on the right, at 50 to 65% - basically a sliding scale.
 
I'm getting very good performance in stations and the SRV with a Quest 3 running at 1.4x render resolution and 90Hz (every graphics setting maxed to the highest level) with my 4090, 13th gen Intel Core i9 (liquid cooled), 64GB RAM (5200MHz) and 2TB M.2 PCIe NVMe SSD. I am no tech expert, but I'll offer my approach. I'm using a USB-C 3.2 Gen 2x2 cable (100W, 20Gbps) with the Oculus software. Steam VR runs slower. The version of the USB-C cable matters, as well as your PC's USB-C port. As long as your PC port is at least 10Gbps I think you should be good. My Quest 3 USB-C transfer rate is around 6Gbps.

My main settings:
(1) I set the Nvidia 3D global settings for VR on the PC to pre-render 4 frames. This helps a lot in all my PCVR games. I also set Nvidia to a max of 90fps for VR.
(2) I have my Oculus debug encode bitrate set to 850Mbps (this helps the most). I believe you won't get good performance at 450Mbps no matter your PC's GPU or power. If you can't go higher than 450, pursue WiFi 6 or 7 with Virtual Desktop. These settings are very stable for me in iRacing, ACC, AMS2 and Elite Dangerous. I get occasional black screens if I bump the encode bitrate up to 900 or 950Mbps. This setting depends on your PC and cable speed.
(3) Set the Oculus Debug FOV-Tangent Multiplier to 0.8 and 0.8. This also helped a lot, since it reduces the FOV. I don't see any reduction in my visible FOV with this. If I go to 0.7 I can see a slight black cut-off on the bottom or sides. Whether you see the black cut-off may depend on how close your Quest 3 is to your eyes.
(4) Within Oculus Debug, disable Asynchronous Spacewarp. If it activates, it will drop your fps to 45.
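A quick back-of-the-envelope on why the FOV-Tangent multiplier in step (3) helps so much: scaling both tangents by 0.8 roughly scales the render target in both dimensions, so the pixel count drops to about 0.8 × 0.8 = 64% of the original. The per-eye resolution below is a hypothetical example, not my actual render target:

```python
# Sketch: approximate pixel savings from an FOV-Tangent multiplier of
# 0.8 x 0.8. The per-eye resolution is a hypothetical example; the
# tangent-to-resolution relationship is treated as roughly linear.

w, h = 3072, 3216          # hypothetical per-eye render target
mult_h, mult_v = 0.8, 0.8  # tangent multipliers from Oculus Debug Tool

pixels_before = w * h
pixels_after = int(w * mult_h) * int(h * mult_v)
saving = 1 - pixels_after / pixels_before

print(f"pixels per eye: {pixels_before} -> {pixels_after}")
print(f"~{saving:.0%} fewer pixels to render and encode")
```

Roughly a third fewer pixels per eye to render, supersample, and push through the encoder, which is why it buys back so much headroom at the cost of a slightly narrower FOV.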

I hope one of these suggestions helps you. I should mention I also get smooth performance in SRV but I also see rock detail forming in the distance. I haven't found a way to increase detail drawing distance in Elite. Good luck commander.
 
So, a little follow-on from the tests and changes I've made after the really useful info in this thread - and I'm also not afraid to admit when I was wrong.

I can confirm that the issue is most definitely that I'm CPU bound.

I stopped playing around with settings and tried a dirty little CPU overclock; I was previously running 100% stock with AMD's PBO turned off. I used to do a lot of video encoding on this machine and considered the risk of any overclock on my processor to be too great.

After a conservative -5 undervolt on all cores and a +50MHz OC on all cores, I got an instant boost to framerates - still rock solid 90 in space of course, but I'm now getting a consistent 90 on most planets with only the occasional dip into the mid-80s.

Stations are still not great, with mid-80s dipping to 77 FPS at points, but they are still better than before.

I think I'm now going to spend some time getting the best stable OC I can (unfortunately I lost the silicon lottery with my RAM - it will only do its rated 3200MT/s, can't OC it at all); that will tide me over till I can save the money for the 9800X3D and all the associated gubbins I will need for it...
 
I can confirm that the issue is most definitely that I'm CPU bound.
I'm not convinced you're wrong. If there's a problem somewhere that's being mitigated by overclocking your CPU, that doesn't de facto make your CPU 'the' problem.

The only reason I say this is, before my Pico 4U broke, I was happily playing at SteamVR 400%, with similar settings to you ( except driver pre-rendered frames at '1' not '4' because '4' causes a lot more latency for me ) and SS and HMD both at x1.0. Now the Pico is gone and I've just been using my G2 till I can be bothered to setup my Q3, the performance is rubbish. I'm on a 12900K, 5090, 64GB DDR4 ( all stock ), so your system should be faster than mine anyway.

So I know that EDO can be smooth ( at least to my eyes ) at 400% SteamVR, ergo whatever is wrong for me, is software. I know the Blackwell drivers are :poop: SteamVR gives me the most consistent behaviour but, obviously, is the most clunky to use with all the Steam faff to start the game. OpenComposite is the most performant but doesn't like Win 11 ( at least for me ) so I went back to Win 10 ( which has half the bloatware anyway ).

For me VD and PicoConnect trade blows for 'performance', regardless of settings, except that the PicoConnect is always better when it comes to wireless. More recently I would've said PicoConnect has well and truly caught up with VD overall and was preferable. It even has a secret setting for OpenXR support.

However, I also note that performance levels which I am comfortable with, seem to be far lower than for other people, no idea why - maybe because I've been using VR since its beginnings when 100ms latency was pretty darn good and nausea was part of the immersion.

QED, my 'perception' might make my comparison irrelevant. That being said I'm actually going to setup the Q3 and give it a go. Can't believe how bad the G2's 'toilet tube' vision is compared with newer headsets, even if the sweet spot is every bit as sharp as for my Pico.
 
Yes I know Yet Another VR Best Settings thread... pretty sure we should come up with a catchy acronym for these.

So I will start off by apologising for creating another one of these, but I cannot for the life of me work out why my performance is still so poor. I've tried the various settings and recommendations from all of the numerous other threads on this subject, and tried multiple different ways of getting the image from my PC to my headset, with no real joy. There's obviously something I'm missing, or some setting that I've changed that I shouldn't have; I just haven't been able to find it.

Specs:
Headset: Quest 3
GPU: RTX 5090
CPU: AMD Ryzen 9 5900X
RAM: 32GB of 3200MT/s DDR4
Install SSD: 4TB PCIE4 Lexar NM790
Windows 11 Pro 24H2
ASW off permanently as I hate it with a passion

I have tried Air Link over Wifi and via cable, I've also tried Virtual Desktop over WiFi and also via a USB C ethernet adapter wired.
I get the best framerates from Air Link using the Oculus Runtimes only but the image quality is way worse than using AV1 with Virtual Desktop.
I have also re-installed and reset all settings multiple times, I've also tried installing both the Frontier version and the Steam version with no difference in performance that I can see.

Oddly I get worse performance with VDXR and opencomposite with Virtual Desktop compared to Steam VR with Virtual Desktop, totally contrary to most of the reports I've read on here.

In-game settings have all been adjusted in every way possible; with a 5090 I should be able to have everything on Ultra, but that just isn't possible except in open space.

Current Settings:
Model Draw Distance: Maxed all the way
Texture Quality: High
Texture Filter Quality: Anisotropic X8
Directional Shadow Quality: Ultra (I've tried this with Medium with no difference)
Spot Shadow Quality: Ultra (I've tried this with Medium with no difference)
Bloom: Off because I'm not a monster...
Blur: As above
Anti-Aliasing: Off (set to X8 in the Nvidia Control Panel)
Supersampling: X1.0
Upscaling: Off/Normal
Ambient Occlusion: High
Environment: Ultra
FX Quality: High
Particle Effects: Ultra
Depth of Field: Off
Material Quality: Ultra
HMD Image Quality: x1.25
Galaxy Map: High
Terrain Quality: Ultra+ (I have also tried this as low as Medium with no effect)
Terrain LOD Blending: Ultra (I have also tried this as low as Medium with no effect)
Terrain Work: Maxed all the way to the right
Terrain Material Quality: Ultra (also tried Medium)
Terrain Sampler Quality: Ultra (also tried Medium)
Terrain Checkerboard Rendering: Off (also tried with On)
Jet Cone Quality: Ultra
Volumetric Effects Quality: Ultra (also tried Medium)

With native Oculus I get a solid 90 FPS in space with occasional hitches (not the stutter bug - this is occasional hitching for one or two frames when moving my HOTAS and my head at the same time). I get around 85-90 FPS on planets with more frequent hitches, plus the usual poor draw distances and pop-in; and don't even get me started on the terrain generation as I'm driving around - it's very distracting to see rocks and hillocks grow in front of you... In stations even the Oculus runtime suffers, with around 75-85 FPS and very frequent hitches.
If it wasn't for the extremely soft (some may even say bit-starved) visuals with Oculus, I would run with this all of the time, as the performance is just better. I've tried setting the bitrate to 400Mbit in the Oculus Debug Tool, but that just makes the image unusable - it turns it into a slideshow.

Virtual Desktop is where things get odd... First I will say that it makes no difference whether I am wired via an ethernet-to-USB-C adapter or on WiFi (dedicated 6GHz router).
Virtual Desktop Settings:
AV1, Adaptive Quantization On, I have tried with 2 Pass encoding on and off with no difference.
OpenXR Runtime: Set to VDXR, but it still uses Steam VR; OpenComposite with or without OpenXR Toolkit gives much worse performance, to the tune of 10-15FPS lower than SteamVR.
I have tried lowering the FOV Tangents to 95% again with no difference.

Virtual Desktop is also where I see some glaring differences with a lot of what I've seen reported on these forums. I've seen reports of some users setting the Steam VR render resolution to 400% and having a "perfect" experience, I've had to set mine to 88% to get it even playable (that's with VD set to Godlike, so it shows in the VD Performance Overlay as a 108% render resolution).

With Virtual Desktop I get similar results to the Oculus runtime in space, i.e. a pretty solid 90FPS, however the GPU utilisation is about 15% higher according to Task Manager. On planets I also get a similar 82-90 FPS with occasional drops as low as 75 FPS, again with approximately 15% higher GPU usage. In stations the performance is way worse, sometimes dropping below 70FPS, with sustained rates under 80 FPS.

The weird thing is that at no time does the GPU go above 80% utilisation according to Task Manager and HWiNFO64, even in a station. CPU usage is also minimal, never going above 25% utilisation (I've also tried using Process Lasso to set the CPU core affinity to CCD1).

I can't think of anything else to try so any recommendations would be appreciated, I'm sure I've missed a load of stuff I've tried off this post, which is already way too long anyway.

TL;DR: I have a 5090 and the performance is still not great; any recommendations would be appreciated.
Totally feel your frustration here; I've got a similar setup (though on a 4080) and ran into the same performance inconsistencies with Virtual Desktop. Strangely, I also got worse results with VDXR vs SteamVR, even though many swear it's better. For me, reducing supersampling slightly and capping the framerate manually helped reduce the spikes a bit. Not perfect, but more stable. That said, it's wild how variable results can be with such high-end gear. Hope someone in here finds that magic setting we're all missing.
 