Comparison of Frame Rates between VR and Flat Screen

I just tried VR in Odyssey Update 6 and it's not what I hoped (I know, surprise surprise). Let me explain why I'm confused:

I have a 2K screen, so in flat-screen mode I can normally manage the following resolution with OK(ish) frame rates:
- 2560 x 1440 = 3.7m pixels, but I use 1.25 SS, so that pushes it up to 4.6m pixels.
With this resolution I get around 90-100 FPS in space and 50 in stations (in the cockpit), less in settlements.

My Oculus Rift CV1 should require 1080 x 1200 x 2 resolution, but I use 1.25 SS there as well, so that's about 3.2m pixels total, less than the 4.6m in flat-screen mode above.
I used exactly the same graphics settings in VR as in flat screen; I just selected 3D mode for VR.
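
To spell out that pixel arithmetic (treating the 1.25 SS setting as a multiplier on the total pixel count, as above - if it actually scales each axis instead, both numbers would be higher):

```python
# Back-of-the-envelope pixel counts, with 1.25 SS treated as a multiplier
# on total pixels (matching the arithmetic above; if SS scales each axis
# instead, multiply by 1.25**2 = 1.5625 rather than 1.25).
flat = 2560 * 1440 * 1.25        # ~4.6 million pixels
vr   = 1080 * 1200 * 2 * 1.25    # ~3.2 million pixels (both eyes)

print(f"flat screen: {flat / 1e6:.1f} Mpx")   # flat screen: 4.6 Mpx
print(f"VR (CV1):    {vr / 1e6:.1f} Mpx")     # VR (CV1):    3.2 Mpx
```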

So why do I get at least 25% less FPS in VR? (It's still very stuttery in stations, and settlements will be a no-go.) There is the extra load from the secondary projection of the VR view onto the monitor, but that's fairly low-res.

Any comments from graphics experts welcome :)

(i7 4790K OC with a GTX 1080, by the way)
 
You are very close to minimum spec, if I remember correctly.

I can imagine one of your displays requires more CPU than the other.
 
Well, min specs are as below, and CPU load was around 50%.
Do you mean that in VR mode, the secondary low-res projection of the VR view onto the flat screen is causing a CPU bottleneck?
Maybe there are occasional spikes in CPU load...


  • OS: Windows 7/8/10 64-bit
  • Processor: Intel Core i7-3770K Quad Core CPU or better / AMD FX 4350 Quad Core CPU or better
  • Memory: 16 GB RAM
  • Graphics: Nvidia GTX 980 with 4GB or better
  • Network: Broadband Internet Connection
  • Hard Drive: 25 GB available space
 
Just did the same comparison between flat screen and VR in Horizons, while sat in a Fleet Carrier:

Flat screen:
CPU 65%, GPU 97%, FPS 170 (so maybe approaching a CPU bottleneck)
CPU 40%, GPU 54% when limited to 90 FPS (to simulate the Rift's VR frame rate, which is fixed at 90)

VR:
CPU 45%, GPU 55%, FPS 90 (limited)

Conclusions: the difference in CPU and GPU load between 90 FPS flat screen and 90 FPS VR is minimal. I would have hoped for the CPU/GPU load to be lower in VR, as I'm pushing fewer pixels (3.2m instead of 4.6m), but it's still 'reasonable'.
So... Odyssey seems to have accentuated the difference even more.
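
To put rough numbers on that expectation - a naive model, assuming GPU cost scales only with pixels per second (which the measurements above suggest it doesn't):

```python
# Naive fill-rate comparison at a matched 90 FPS cap. This assumes GPU
# cost is proportional to pixels per second only, ignoring the per-eye
# geometry and draw-call overhead of stereo rendering.
flat_px = 2560 * 1440 * 1.25        # ~4.6 Mpx per frame (1.25 SS as pixel multiplier)
vr_px   = 1080 * 1200 * 2 * 1.25    # ~3.2 Mpx per frame (both eyes)

for name, px in [("flat", flat_px), ("VR", vr_px)]:
    print(f"{name}: {px * 90 / 1e6:.0f} Mpx/s at 90 FPS")
# flat: 415 Mpx/s, VR: 292 Mpx/s -> ~30% less fill work in VR, yet the
# measured GPU load was nearly identical, so fill rate alone can't be
# what's limiting the VR case.
```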

As an aside, it does show how my 'min spec' system easily manages VR in Horizons, and also that the CPU can manage 170 FPS, so there may be scope to extend the life of the i7 4790K even further and get an RTX 3070 / AMD RX 6700 XT or similar (not that I can spare the money / want to sleep on the sofa, lol)
 
While not offering any useful input (sorry!) as to why...

It may be interesting to compare after update 7 (don't expect too much, but possibly something) to see if the navmesh fix does anything else useful.

I use a Rift S with a 2080 Super, but my own results are similar to yours, with ASW triggering in some busier scenes.
 
Ok, thanks. I’ll keep this thread as a reference and update results after update 7, just for science :)
 
Gonna be honest, no amount of tweaking will help VR performance in Odyssey. It's still optimized like cold clam chowder; Update 7 claims to fix all of that, so my best advice is to wait until then.
 
I could imagine two things:

  • Pixel count accounts for fragment-shader load, but before you get to the point of running the fragment shaders, the geometry needs to be rasterised, and that has to be done twice for VR - once for each game camera (eye).
  • 1080 x 1200 is the physical resolution of the screens in the device, but because of the fish-eye manner in which its lenses magnify the imagery, games are typically made to render a larger bitmap than that, in order to produce at least one rendered pixel for every physical screen pixel at the centre of the view, where the magnification is greatest. How much larger depends on how strong the fisheye effect of the lens is: for the HTC Vive and the Valve Index, for example, SteamVR uses 1.4x of such... let's call it "base supersampling". Less is needed for the Rift CV1, due to its narrower field of view - I don't know how much off the top of my head, and I can't say how much is used when running applications native to Oculus' runtime; maybe you could get a report on this using the Oculus Debug Tool or the Oculus Tray Tool. A rough sketch of that arithmetic follows below.
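
Just to put hypothetical numbers on that second point (the 1344 x 1600 per-eye default render target for the CV1 is from memory - worth verifying in the Oculus Debug Tool):

```python
# Hypothetical render-target arithmetic for the Rift CV1. The 1344x1600
# per-eye default is an assumption from memory, not a verified figure.
panel_w, panel_h = 1080, 1200   # physical pixels per eye
rt_w, rt_h       = 1344, 1600   # assumed default render target per eye
user_ss          = 1.25         # in-game SS, treated as a total-pixel multiplier

vr_total = rt_w * rt_h * 2 * user_ss
print(f"VR render load: {vr_total / 1e6:.1f} Mpx")   # ~5.4 Mpx
# If those defaults are right, the real VR workload would already exceed
# the 4.6 Mpx flat-screen figure, before counting the cost of rasterising
# the geometry once per eye.
```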
 
Interesting thread. I've not tried pancake-VR since the very start, mainly because if I sometimes get 10 FPS using a monitor (GTX 1070), the VR experience isn't going to be any better.

Suddenly it becomes clear to me why they weren't going to support VR for Odyssey - they probably knew all this.
 
Thanks, that was interesting and informative.
 
As others have stated, rendering multiple viewports from different perspectives is more demanding than rendering one, even if the total pixel count is similar. Mirroring one eye to another screen is a very low load, but rendering the views of the two separate eyes in the first place is not.

CPU load of the render threads has almost nothing to do with the number of pixels being drawn; the total number of draw calls and/or the number of vertices/primitives/fragments is usually the overriding factor. Even the GPU side of things can be geometry-limited, or subject to severe cache contention, even if raw fill rate never becomes an issue.
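
As an illustration of the draw-call point - a schematic stereo loop, not Elite's actual renderer:

```python
# Schematic stereo render loop (illustrative only): a naive implementation
# re-submits every draw call once per eye, so CPU-side draw-call cost
# roughly doubles in VR even though each eye covers fewer pixels than the
# flat-screen view.

def render_frame(objects: int, eyes: int) -> int:
    draw_calls = 0
    for _ in range(eyes):           # 1 camera flat, 2 cameras in VR
        for _ in range(objects):    # per-object culling, state setup, submit
            draw_calls += 1
    return draw_calls

print(render_frame(objects=5000, eyes=1))   # 5000 draw calls, flat screen
print(render_frame(objects=5000, eyes=2))   # 10000 draw calls, VR
```

Engines can mitigate this with instanced or single-pass stereo, but whether the Cobra engine does is anyone's guess.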


Figuring out what approach the game takes to VR rendering is the first step in deciding how to improve performance. This could be inferred, with some effort, by observing performance and resource utilization as various settings are manipulated. Ideally, you'd only be limited by draw calls and the number of vertices, and could increase CPU speed and/or selectively reduce geometry load (turning down the view-distance scale and terrain quality) to improve VR performance while sacrificing as little else as possible.

Also, looking at aggregate CPU load doesn't say much. If you want a clear picture of CPU load, you need per-core utilization, preferably with a polling interval short enough to catch relevant spikes. A GPU running very near maximum load is usually a strong contraindication of a CPU limitation, but there can be exceptions if the GPU is not pegged at 99%+.
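
Something like this would do, as a minimal sketch (assuming the psutil Python library is acceptable for a quick check):

```python
# Minimal per-core CPU monitor using psutil (pip install psutil). A short
# polling interval helps catch the single-core spikes that an aggregate
# "50% CPU" reading in Task Manager averages away.
import psutil

while True:
    per_core = psutil.cpu_percent(interval=0.25, percpu=True)
    if max(per_core) > 95:               # one pegged core = likely bottleneck
        print("core spike:", per_core)
```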
 