Hi Folks!
I'm using the Oculus Debug Tool to try to benchmark Elite Dangerous (ED) on my CV1. For testing I have an i7-2600K (at various frequencies) and a 980 Ti OC model.
Using the debug tool I see both GPU render time and CPU render time under performance metrics. I've been testing the CPU @ 2.0, 3.0, and 4.5 GHz.
It appears that the CPU render time doesn't cause the framerate to drop at all unless it exceeds 11.1 ms (the per-frame budget at 90 fps). That only happens at 2.0 GHz; even at a relatively low 3.0 GHz the CPU stays under budget.
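Just to show where that 11.1 ms figure comes from (simple arithmetic, nothing ED-specific):

```python
# Per-frame time budget at the CV1's 90 Hz refresh rate.
refresh_hz = 90
budget_ms = 1000.0 / refresh_hz
print(f"{budget_ms:.1f} ms per frame")  # -> 11.1 ms
```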
My question is: are the CPU and GPU rendering in parallel, or do you really want the combined CPU + GPU time to be less than 11.1 ms for the optimal VR experience?
Does it go "Best" = CPU + GPU combined < 11.1 ms, "Good" = CPU and GPU each < 11.1 ms, "Bad" = CPU or GPU > 11.1 ms?
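Here's a rough Python sketch of the two interpretations I'm weighing (my own toy model, not anything from the Oculus SDK or the ED renderer): "serial" means the GPU waits for the CPU each frame, "pipelined" means the CPU prepares frame N+1 while the GPU renders frame N.

```python
REFRESH_HZ = 90
BUDGET_MS = 1000.0 / REFRESH_HZ  # ~11.1 ms per frame

def effective_frame_ms(cpu_ms: float, gpu_ms: float, pipelined: bool) -> float:
    """Frame time if the stages run back-to-back vs. overlapped."""
    return max(cpu_ms, gpu_ms) if pipelined else cpu_ms + gpu_ms

# A few hypothetical CPU/GPU timings to compare the two models.
for cpu, gpu in [(5.0, 10.0), (10.0, 10.0), (12.0, 10.0)]:
    serial = effective_frame_ms(cpu, gpu, pipelined=False)
    overlap = effective_frame_ms(cpu, gpu, pipelined=True)
    print(f"CPU {cpu} ms, GPU {gpu} ms -> "
          f"serial {serial:.1f} ms ({'miss' if serial > BUDGET_MS else 'hit'}), "
          f"pipelined {overlap:.1f} ms ({'miss' if overlap > BUDGET_MS else 'hit'})")
```

If the pipelined model is right, it would match what I'm seeing: the framerate only dropped once CPU time *alone* went past 11.1 ms.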
(For reference, even on VR Low with no SS, the 980 Ti still shows a GPU render time right around 10-11 ms. Maybe that's VSync?)