VR performance less than half non-VR performance - Does it make sense?

VR performance doesn't make sense to me, and maybe someone can explain it to me.

Without VR on my PC, I can get 144 fps or more with everything on ultra, at 2560 x 1440 (1440p) in Elite. In other words, my GTX 1070 totally crushes Elite, even in stations.

However, in VR, the game struggles to get 90 fps, even on medium or low settings, at the exact same resolution (Samsung Odyssey). Inside stations, FPS drops significantly, maybe even as low as 40 or 50 fps.

Theoretically, if my PC can do Ultra at 180 fps, I would expect it to be able to get roughly 90 fps in Ultra in VR since it has to render two viewports. Is that too naive?

In other words, VR FPS in Elite seems to be 30% or less of what I see without VR, but I'd expect it to be 50% or more. Even though there are two viewports being rendered, the GPU is using the same scene for each frame, so I could imagine performance being more like 60% or better.
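
For a rough sanity check, here is a back-of-the-envelope fill-rate comparison (a sketch, assuming the Odyssey's 1440x1600-per-eye panels and a commonly cited ~1.4x per-axis render scale for lens distortion; treat the numbers as ballpark):

```cpp
#include <cstdio>

int main() {
    // Flat-screen case: 2560x1440 at 144 Hz.
    const double flatPixels = 2560.0 * 1440.0;
    const double flatHz     = 144.0;

    // VR case: Samsung Odyssey, 1440x1600 per eye at 90 Hz, with an
    // assumed ~1.4x per-axis render scale so the compositor has spare
    // pixels to apply barrel distortion (at 1.0x app supersampling).
    const double renderScale = 1.4;
    const double eyePixels   = (1440.0 * renderScale) * (1600.0 * renderScale);
    const double vrPixels    = 2.0 * eyePixels; // two eyes
    const double vrHz        = 90.0;

    const double flatRate = flatPixels * flatHz; // pixels per second
    const double vrRate   = vrPixels * vrHz;

    std::printf("Flat: %.0f Mpx/s\n", flatRate / 1e6);              // ~531
    std::printf("VR:   %.0f Mpx/s\n", vrRate / 1e6);                // ~813
    std::printf("VR / flat fill rate: %.2fx\n", vrRate / flatRate); // ~1.53
}
```

If those assumptions hold, 90 fps in VR needs roughly 1.5x the raw pixel throughput of 144 fps at 1440p, before counting any per-eye CPU work or compositor overhead, so the naive "50% or more" expectation is already optimistic.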

Some explanations I can think of:
  • CPU being more of a bottleneck due to head tracking
  • Supersampling causing the effective resolution to be way higher (I need to double-check performance at 1.0x supersampling in both ED and SteamVR)
  • Some kind of VR overhead - could be SteamVR, WMR, or something like that. I could test with Revive and see whether the Oculus renderer for ED makes a difference.
Has anyone else noticed a performance gap like this, or tried to do a systematic study of it?
 
Dual-pass stereo rendering is the primary junction in the render pipeline where the 2D and VR paths diverge. It requires scanning the object tree twice, because the two eye views are effectively different scenes to the renderer, each driven by its own camera pose. Nvidia's single-pass stereo (Simultaneous Multi-Projection) removes this overhead by rendering both eyes in a single pass to a stacked texture, but it is not widely adopted yet.

Good document on SPS here: https://docs.unity3d.com/Manual/SinglePassStereoRendering.html
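
To make the dual-pass cost concrete, here is a minimal sketch of a naive stereo render loop (the Eye/Matrix4 types and all the helper functions are hypothetical placeholders, not Elite's or any real engine's API):

```cpp
// Naive dual-pass stereo: the entire scene is culled and drawn once
// per eye, so CPU-side traversal and draw-call submission roughly double.
for (Eye eye : {Eye::Left, Eye::Right}) {
    // Each eye's view is the tracked head pose offset by half the
    // interpupillary distance, with its own asymmetric projection frustum.
    Matrix4 view = eyeOffset(eye) * headPose;
    Matrix4 proj = eyeProjection(eye);

    bindRenderTarget(eyeTexture(eye)); // separate texture per eye
    cullAndDraw(scene, view, proj);    // full scene traversal, per eye
}
// Single-pass stereo (e.g. Nvidia SMP) traverses the scene once and
// writes both eye projections into one stacked/double-wide target.
```

The expensive CPU work (walking the scene and issuing draw calls) happens twice, even though the two views share almost all of their data.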

As you have already mentioned, for stereo 3D you have to supply these two per-eye textures to the VR API's compositor. Each texture needs to be rasterized by the submitting application at a resolution substantially higher than the physical device's output (the scale is relative to the amount of distortion caused by the lens), so that the compositor can apply barrel distortion, which counteracts the pincushion effect of the lenses.
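
You can query that inflated per-eye size yourself. Here is a minimal sketch against the OpenVR API (assuming the SteamVR runtime is installed; error handling mostly omitted):

```cpp
#include <openvr.h>
#include <cstdio>

int main() {
    vr::EVRInitError err = vr::VRInitError_None;
    // Utility mode: we only want to query the runtime, not render.
    vr::IVRSystem* sys = vr::VR_Init(&err, vr::VRApplication_Utility);
    if (err != vr::VRInitError_None) return 1;

    // Recommended per-eye render target size, already inflated above
    // the panel resolution so the compositor's barrel distortion does
    // not undersample the center of the lens.
    uint32_t w = 0, h = 0;
    sys->GetRecommendedRenderTargetSize(&w, &h);
    std::printf("Per eye: %ux%u (x2 eyes, before app supersampling)\n", w, h);

    vr::VR_Shutdown();
    return 0;
}
```

On WMR headsets like the Odyssey this typically comes back well above the 1440x1600 panel resolution even at SteamVR's default scale, which is why "1.0x supersampling" in a game menu does not mean rendering at panel resolution.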

That is the fundamental difference between the two pipelines and where the vast majority of the frametime differential lies.

There are many other variables that impact the "motion to photon" latency, but they require an expert understanding of engine-level C++ engineering, so you would need to read some hardcore technical documentation if you wanted to better understand the nuts and bolts of it.

"A developers perspective on immersive 3D graphics"

The original latency mitigation analysis by the don himself John Carmack
 