Oculus CPU Render Time

Hi Folks!

I'm using the Oculus Debug tool to try to benchmark ED on my CV1. For testing I have an i7-2600K (@ various frequencies), and a 980 Ti OC model.

Using the debug tool I see both GPU render time and CPU render time under performance metrics. I've been testing the CPU @ 2.0, 3.0, and 4.5 GHz.

It appears that the CPU render time doesn't cause the framerate to drop at all unless it exceeds 11.1 ms (which equates to 90 fps). This only happens at 2.0 GHz; it doesn't occur even at a relatively low 3.0 GHz.
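
(To make the 11.1 ms figure explicit, it's just the per-frame budget at the CV1's 90 Hz refresh rate; a quick sanity check:)

```python
# Per-frame time budget at a fixed 90 Hz refresh rate.
refresh_hz = 90
budget_ms = 1000.0 / refresh_hz
print(f"{budget_ms:.2f} ms per frame")  # ~11.11 ms
```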

My question is: are the CPU and GPU rendering in parallel, or do you really want the combined CPU+GPU time to be under 11.1 ms for the optimal VR experience?

Does it go: "Best" = CPU+GPU < 11 ms, "Good" = CPU and GPU each < 11 ms, "Bad" = CPU or GPU > 11 ms?

(For reference, even with VR Low and no SS, the 980 Ti is still GPU-rendering right around 10-11 ms. Maybe that's VSync?)
 

Essentially: 'generally', 'yes', and 'pretty much'.

The 3D card receives 3D geometry data from the CPU. The CPU works out the relative positions of all models, their alignment and so on, and passes that to the 3D card for rendering, i.e. filling in all the polygons with textures, lighting etc. Asynchronous Timewarp (ATW) runs in parallel, and its frames get substituted in if a rendered frame is not ready in time for the refresh tick (every ~11 ms).
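
To picture that pipelining, here's a toy sketch (my own simplification, not how the Oculus runtime actually schedules anything): the CPU is preparing frame N+1 while the 3D card renders frame N, so each stage gets its own ~11.1 ms budget, and the limiting factor is whichever stage is slower, not the two added together.

```python
# Toy model of a pipelined CPU -> GPU frame loop at 90 Hz.
# All numbers and logic are illustrative only; the real Oculus
# compositor/ATW behaviour is more involved than this.

BUDGET_MS = 1000.0 / 90.0  # ~11.1 ms per refresh tick

def frame_outcome(cpu_ms, gpu_ms):
    """Classify a steady-state frame in a two-stage pipeline.

    Because the CPU works on the next frame while the GPU renders the
    current one, the pipeline keeps up as long as the *slower* stage
    fits inside one refresh tick.
    """
    slowest = max(cpu_ms, gpu_ms)
    if slowest <= BUDGET_MS:
        return "fresh frame every tick (90 fps)"
    return "misses ticks; ATW reprojects the previous frame"

print(frame_outcome(cpu_ms=6.0, gpu_ms=10.5))   # both fit the budget -> 90 fps
print(frame_outcome(cpu_ms=13.0, gpu_ms=9.0))   # CPU too slow (e.g. the 2.0 GHz run)
print(frame_outcome(cpu_ms=5.0, gpu_ms=14.0))   # GPU too slow
```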

Read this on Oculus' site; it explains a lot about how things are rendered in time and how ATW is there to save the day and your lunch when needed.
https://developer.oculus.com/blog/asynchronous-timewarp-examined/


If the CPU can't process the next frame in under the 11 ms limit, then the 3D card has to wait (the game drops to 45 fps immediately; this is when Asynchronous Timewarp cuts in and warps the previous frame so the 3D card has something to show you when the refresh tick comes past). If the 3D card can't fully render within the 11 ms, then yes, it too fails over to ATW (and whatever did get rendered, even if it was down to the very last pixel, all goes in the bin as new geometry comes in from the CPU).
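
A crude way to see the stepping described above (again just a sketch; exactly when ATW steps in depends on the runtime): if the slower stage needs more than one refresh tick per frame, fresh frames only arrive every second or third tick, and ATW fills the gaps.

```python
import math

BUDGET_MS = 1000.0 / 90.0  # ~11.1 ms per 90 Hz refresh tick

def fresh_frame_rate(cpu_ms, gpu_ms):
    """Fresh (non-ATW) frames per second, assuming the slower pipeline
    stage simply takes a whole number of refresh ticks per frame.
    Toy model only, not measured Oculus runtime behaviour."""
    ticks_per_frame = max(1, math.ceil(max(cpu_ms, gpu_ms) / BUDGET_MS))
    return 90.0 / ticks_per_frame

print(fresh_frame_rate(10.0, 10.5))  # 90.0 -> smooth
print(fresh_frame_rate(10.0, 15.0))  # 45.0 -> ATW fills every other tick
print(fresh_frame_rate(25.0, 24.0))  # 30.0 -> judder territory
```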

If both the CPU and the 3D card are slower than 11 ms, you get all sorts of timing mismatches, lots of work thrown out, and ATW trying to fake two or even three frames between actual rendered frames; you get 'judder' as the ATW-warped frames drift further from what each frame should really look like. One ATW best guess looks okay, but a best guess based on your last best guess (or worse, two) gets ugly real quick. Then you're into motion-sickness judder territory! :eek::x (we desperately need a 'vrsick' emoticon).

The Rift/Vive run at a fixed refresh rate (90 Hz), so in theory anything over 90 fps (with both eyes rendered in each frame) is wasted and can be turned into additional detail settings instead.

Have a play with the different debug tool HUD settings - I think you can get separate CPU and 3D-rendering timers in there (I'm at work so can't recall or check).

Post screenshots if you can :)
 