CPU load in VR?


Slight sidetrack but I'm curious, does anyone have any thoughts (or considered test results) re: setting SS to less than 1.0 (in this case 0.65) in order to then be able to increase HMD quality (in this case 2.0).

I used to have 0.75/1.5 but now I've got 1.0/1.25 because it struck me that surely it's madness to ask the game to deliberately render a lower-res image and then ask the Oculus software to take that image and upscale it again ("garbage in/garbage out" and all that) - especially since it finally has to be downscaled again at the end to fit the hardware resolution of the HMD.
 

There is no doubt in my mind that HMD Quality setting is the magic key for VR. AA is rubbish for VR. Turn it off. HMD-Q handles aliasing way better. Based on visual quality I would say put SS at 0.65 (for some reason 0.5 makes visuals very bad), and put as much as you can in HMD-Q.

It seems to me that many of these traditional tools for "enhancing" rendering work great in 2D, but create more problems than they solve in 3D/VR.
 

But you see what I'm saying? I get that reducing SS below 1.0 in favour of increasing HMD-Q is a popular thing to do (and fair enough, if it works it works) but it doesn't make sense, does it? I get that resizing an image up and then down again can help with aliasing, but I'm struggling to understand how downscaling the original image before you start can do anything but reduce the final image quality (you can't put those lost pixels back again, only replace them with approximations). Incidentally, I'm kind of assuming that the game starts by rendering at the native resolution of the headset (in some sense) and that an SS of 0.65 is therefore asking it to render at 65% of the required resolution to start with.
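For what it's worth, the arithmetic can be sketched under the (assumed, not confirmed) model that SS and HMD Quality each scale the render target's linear dimensions, using the Rift CV1's 1080x1200 per-eye panel as the example:

```python
# Sketch of how SS and HMD Quality multipliers might compose, assuming both
# scale the render target's linear dimensions. This is an assumption about
# ED's pipeline, not documented behaviour.

PANEL_W, PANEL_H = 1080, 1200  # Rift CV1 per-eye panel resolution

def render_target(ss: float, hmd_q: float) -> tuple[int, int]:
    """Per-eye render target dimensions for the given multipliers."""
    scale = ss * hmd_q
    return round(PANEL_W * scale), round(PANEL_H * scale)

# 0.65 SS with 2.0 HMD-Q -> combined 1.3x linear scale
print(render_target(0.65, 2.0))   # (1404, 1560)
# 1.0 SS with 1.25 HMD-Q -> combined 1.25x linear scale
print(render_target(1.0, 1.25))   # (1350, 1500)
```

Under that multiplicative model the two setups end up at nearly the same final pixel count; the debate in this thread is really about whether the intermediate downscale/upscale steps cost quality along the way.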
 

I see what you're saying, and have absolutely no clue how it all works. I've just found that the HMD-Quality scaler actually works for VR (built for VR), and that the traditional 2D based tools are more or less rubbish for VR.
I suppose downscaling SS just leaves a bit of oomph for the HMD-Q. I would love to be able to keep SS at 1, but my current rig simply can't pull it. :( and higher HMD-Q works better than higher SS for VR (in my humble opinion).
 
I've fired up the monitoring tool within the Oculus SDK while in RES sites, also using a 6600 Skylake; I believe I'm at 4.4... At any rate, I can definitely see throttling at RES sites; by all accounts the 6700K actually does make a significant difference in VR. There's a recent Reddit post saying as much as well.
 
The i5 does seem to struggle a lot more than an i7 from other posts I've seen.

Ok so I did a fairly detailed test on this last weekend...

VR Ultra preset, but SS 1.0 / HMD 1.25, ambient occlusion low, FXAA, and high shadows (I think)

i5 4670k @4.3Ghz
16Gb Corsair DDR3 @ 1600Mhz
EVGA GTX 1080 FTW
Asus Z97-Pro Gamer

ASW switched off

Docked in station hangar
ED CPU: 60 to ~84%
CPU overall 72% - 100%
Ram 6.1/15.9

FPS mostly 87-90, very rare drops to 54

Fuel scooping
ED CPU: ~51%
Oculus service CPU: ~8 %
CPU overall ~63%
Ram 6.2/15.9

FPS 90

High RES
ED CPU: ~65%
Oculus service CPU: ~8 %
CPU overall ~77%
Ram 6.3/15.9

FPS 90

Sat on outpost landing pad
ED CPU: ~51%
Oculus service CPU: ~8 %
CPU overall ~64%
FPS 90

Morgan Depot Landing Pad 5
CPU up to 93%
Ram 6.8/15.9
FPS 61 – 90 (normally ~80)
FPS very variable especially low when looking at Starport towers.

Conflict Zone High
(Difficult to get much because I kept being attacked)
CPU pegged at 100%, ED 90%+
FPS 90 much of the time but also had moments hovering around 80

VR MARK

Orange room: 9071
Avg FPS: 197.5

Blue room: 2400
Avg FPS: 52.31

3DMARK Fire Strike (for high-performance PCs)
Overall score : 15955
Graphics Score: 23250
Physics score: 8603
Combined score: 7704
Graphics test 1 FPS: 112.86
Graphics test 2 FPS: 91.54
 
Hmm, my FX-8350 was sitting at 50% across 8 cores, but it's only an R9 390 so maybe that's acting as a brake. Can Task Manager be trusted here?

Run at 1.2 SS set in the Steam beta SS slider, VR Medium preset and 1.25 HMD quality.
Ryzen and Vega for me at the end of the year. Need them 8 cores 8 threads!
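On the Task Manager question above: an overall CPU figure averages across all cores, so a game whose main thread pegs one core can still show a modest number. A toy illustration with made-up per-core loads:

```python
# Why "overall CPU %" can mislead: a game whose main thread pegs one core
# still shows a modest average across 8 cores. Illustrative numbers only,
# not measurements from any real system.

def overall_load(per_core: list[float]) -> float:
    """Average utilisation across all cores, as Task Manager reports it."""
    return sum(per_core) / len(per_core)

# One core pegged by the main/render thread, the rest lightly loaded:
cores = [100, 55, 50, 45, 40, 40, 35, 35]
print(overall_load(cores))  # 50.0 -- looks fine, but one core is the bottleneck
```

So a reading of "50% across 8 cores" doesn't rule out a single saturated thread being the actual brake.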
 
I've related this story before, since it dates from before I was using VR and before I built myself a new rig.

Old rig, no VR: Core 2 Quad Q6600, GTX 650 Ti Boost. Elite prior to Horizons ran max settings (1280x1060), and I got 30 fps in stations and 70+ in space. Very playable. Horizons came, and egads, performance took a hit in stations and during planet orbits/landings; I had to dial back settings quite a bit. CPU load before and after Horizons was typically 60-75%, averaged across all four cores.

New rig: Core i5 6600, GTX 1060. Runs Ultra with 1.25 super sampling at the same resolution; I get 45+ fps in stations and 300 in space. CPU load is the same as before.

New rig with VR: not sure about the fps on the DK2, but seems very good. CPU load is similar (60-75% most of the time) but I see a fair number of high CPU spikes now where it shoots to 90-100% for 1-3 seconds, usually during hyperspace transitions.

Only real impact I've seen in VR: opening the gal map or the station store front the first time on starting the game takes WAY longer than in 2D. Later openings are quick. Transitions to / from supercruise, warp, or planetary glide often take longer--but I've also seen a lot of complaints about server issues since I got the VR kit a few weeks ago so that may be completely unrelated to VR.
 
The big difference that VR makes is that it imposes the need for the entire game to run at 90FPS, with a small amount of extra overhead for the CPU.

With a monitor, you have a much broader range of tolerable framerates - even down to 30 is still playable. In VR, anything less than 45 is totally unacceptable, and anything less than 90 is highly undesirable.
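The tight budget this implies is plain arithmetic (nothing headset-specific here):

```python
# Frame-time budget at common refresh targets: the time the CPU and GPU
# together have to simulate and render each frame.

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at the given frame rate."""
    return 1000.0 / fps

print(f"90 FPS -> {frame_budget_ms(90):.2f} ms per frame")  # ~11.11 ms
print(f"45 FPS -> {frame_budget_ms(45):.2f} ms per frame")  # ~22.22 ms
print(f"30 FPS -> {frame_budget_ms(30):.2f} ms per frame")  # ~33.33 ms
```

Going from a 30 fps monitor target to a 90 fps VR target cuts the per-frame budget to a third, which is why scenes that were fine in 2D suddenly expose the CPU.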
 
I have an i7 7700K, reverted back to stock 4.2 while my heatsink waits to be replaced. I have a 1080, and my RAM is clocked at 3200.

I run SteamVR SS at 1.5 and these are the in-game settings:
http://i.imgur.com/dEYMSRb.png


CPU will hit 90% +/- in bases.
 
What are you all using to measure FPS and CPU load reliably? I have an i7-6850K at 3.6GHz with a 970 GTX, and I'm curious as how it compares...

I have ED set at similar settings to the OP's and have a smooth view in the station (sitting outside on the pad), but I'm completely unaware of FPS.
 

Ctrl + F for FPS. HWiNFO64 for monitoring CPU.
 
On another note, I saw a VR guru on YouTube who advised setting in-game SS to 0.65 and HMD quality to 2; it worked like magic for my setup.
 
You can display the framerate on the monitor (but not the HMD) by pressing "Ctrl + F."

This can be a little misleading, though, since it appears to report a rolling average over the last second or so, and will thus report numbers between 45 and 90, which never actually happens. It's either 90, 45, or less than 45 (in which case you should get a faster PC or stop playing in VR).
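A toy sketch of why a rolling average shows rates that never occur: mix 90 FPS frames and 45 FPS frames over roughly a second and the average lands in between. (The exact window and method ED's counter uses are assumptions here.)

```python
# A rolling average over a window of mixed 90 FPS (11.11 ms) and 45 FPS
# (22.22 ms) frames reports a number no individual frame ever ran at.
# How ED's counter actually averages is an assumption.

def avg_fps(frame_times_ms):
    """Average FPS over a list of per-frame times in milliseconds."""
    total_s = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_s

# ~0.5 s at 90 FPS followed by ~0.49 s at 45 FPS:
frames = [1000 / 90] * 45 + [1000 / 45] * 22
print(round(avg_fps(frames)))  # 68 -- a rate that never actually occurred
```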
 

The reason you only see 90 or 45 (or less) is because ASW is switched on. If the Oculus software detects you can't maintain 90 it switches to 45 until it decides you can get 90 again. While at 45 it inserts a predicted frame in between, which is why it looks just as smooth. If you switch ASW off with Ctrl + Numpad 1 or Ctrl + Numpad 4 (one switches it off, the other on; I can never remember which way round) you will see all sorts of frame rates. The CPU use drops dramatically at 45 because there aren't as many draw calls to the GPU.
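The lock-to-45 behaviour described above can be modelled very roughly as follows. This is only the halving logic; the Oculus runtime's real heuristics for engaging ASW are more involved and not public in this detail.

```python
# Simplified model of the ASW behaviour described above: if the app misses
# the 90 Hz frame budget, the runtime halves to 45 Hz and synthesizes every
# other frame. A sketch, not the actual runtime logic.

REFRESH_HZ = 90
BUDGET_MS = 1000 / REFRESH_HZ  # ~11.11 ms per frame at 90 Hz

def displayed_fps(render_time_ms: float, asw_enabled: bool = True) -> float:
    """Effective displayed frame rate for a given per-frame render time."""
    if render_time_ms <= BUDGET_MS:
        return REFRESH_HZ                       # keeping up: full 90
    if asw_enabled:
        return REFRESH_HZ / 2                   # locked to 45, ASW fills gaps
    return min(REFRESH_HZ, 1000 / render_time_ms)  # ASW off: raw, variable rate

print(displayed_fps(10))          # 90
print(displayed_fps(15))          # 45 (ASW on)
print(displayed_fps(15, False))   # ~66.7 (ASW off: "all sorts of frame rates")
```

This also matches the CPU observation: at a locked 45 the game only submits half as many frames, so draw-call work on the CPU drops accordingly.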
 

I recommend Process Explorer. https://technet.microsoft.com/en-us/sysinternals/processexplorer.aspx
HWiNFO64 is good too.

In my case both programs show less CPU load than the Windows Task Manager does.

- - - Updated - - -


Thanks for reporting. I expected the 7700K to perform better. I was thinking of swapping my 6600K for a 7700K in the future, but this makes me reconsider.

- - - Updated - - -

I've fired up the monitoring tool within the Oculus SDK while in RES sites, also using a 6600 Skylake; I believe I'm at 4.4... At any rate, I can definitely see throttling at RES sites; by all accounts the 6700K actually does make a significant difference in VR. There's a recent Reddit post saying as much as well.

This, on the other hand, states much better performance for the i7 over the i5.
 
I have to update my post. Last night/today I nuked my system and started reinstalling Windows 10, new drivers for everything and so on. Something I do at least yearly, and have wanted to do since upgrading my CPU/MB/RAM. I also haven't done this since using the upgrade option from Windows 7 to Windows 10. Also, I did a fresh ED install.

Anyway, I juuust had time to briefly run it before coming to work. I believe my CPU load was significantly lower than previously reported. After work I'm going to run a few tests but my initial quick play had my CPU not rising above approximately 65% utilization.

I need to give it a proper test, check my settings and all that but I believe my system was generally running like crud.
 
...this on the other hand states a much better performance for i7 over i5

Unless we're all using the same primary graphics settings (ss: 0.65, hmd-q: 2.0, AA: off, draw distance: max) and monitoring tools (HWiNFO64), I can't see how we can compare CPU loads.

If we could all agree to install HWiNFO64 and use the initial settings stated in my OP and make a station to station flight, then we could actually compare notes.

Anywho, as my average loads show around 60% I guess all is well. Those spikes do worry me though.
 
Agreed. I posted my settings for that reason, or at least so that whatever results I gave can be weighed against my settings. I'm happy to duplicate settings and run some comparisons, if it helps. I use HWMonitor [by CPUID] for info; using that instead of HWiNFO64 shouldn't make any difference, though. Beyond just SS/HMD settings, things like shadows and similar effects are also a pretty big deal to some people. Personally I wouldn't remove them just to up my SteamVR SS to 2.0, for example, where others are happy to. Apart from FPS and CPU load, we also have to consider visual fidelity and what the CMDR thinks is acceptable to lose/trade off.

As I just posted, too, the general state of your system (software) can have an impact. With the same settings I posted before, and the same physical system, I've dropped about 30% just by freshening up [albeit this is just an initial result].

 