Adaptive super-sampling for VR?

Just wondered if it would be possible / practical to have an adaptive SS setting. When the GPU starts to bottleneck, the SS could be decreased to prevent frame loss. ED has quite different demands in various scenarios: space, station interiors, planet surfaces, etc. Sometimes I change my SS setting if I plan to just be jumping through space as opposed to doing some SRV driving. As ED maxes out all current high-end GPUs, I thought this might be useful!
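To make the idea concrete, the controller for this could be tiny. Here's a minimal sketch, assuming the engine exposes a per-frame GPU time and a render-scale knob (all the names here are hypothetical illustrations, not anything from ED's actual code):

```cpp
// Sketch of an adaptive supersampling controller, assuming the engine
// exposes per-frame GPU time and a render-scale setting. All names
// here are hypothetical, not ED's API.
#include <algorithm>

class AdaptiveSupersampler {
public:
    // targetFrameMs: e.g. 11.1 ms for a 90 Hz HMD.
    explicit AdaptiveSupersampler(float targetFrameMs)
        : targetMs(targetFrameMs) {}

    // Call once per frame with the measured GPU time of the last frame.
    // Returns the supersampling factor to use for the next frame.
    float update(float gpuTimeMs) {
        if (gpuTimeMs > targetMs * 0.95f) {
            scale -= stepDown;          // drop quickly to avoid missed frames
        } else if (gpuTimeMs < targetMs * 0.80f) {
            scale += stepUp;            // recover slowly to avoid oscillation
        }
        scale = std::clamp(scale, minScale, maxScale);
        return scale;
    }

private:
    float targetMs;
    float scale    = 1.5f;   // current SS factor
    float minScale = 1.0f;
    float maxScale = 2.0f;
    float stepDown = 0.10f;  // asymmetric steps: fast down, slow up
    float stepUp   = 0.02f;
};
```

The asymmetric step sizes matter: you want to drop the scale fast when a frame is about to be missed, but climb back slowly so the image doesn't visibly pump.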
 
It would be useful, and many VR titles do it.
But I suspect these techniques weren't fully established when FD got the core functionality of their engine together, and I'm sure it's on a list somewhere for "when we redesign/rewrite the whole thing".
But only someone directly involved in coding the engine could really say how difficult implementing this would be.

Now their main engine isn't only used for ED; it's the same one powering the theme-park thing and likely the upcoming Jurassic theme-park game.
So it might very well be implemented at some point.

And most consoles really love variable resolution.
It enables them to stamp "4K resolution support" on the box, even though the game itself will run as low as 480p during intense scenes. And that's only aiming for 30 fps.
 
Yeah, it's a technique with a huge amount of potential, but ultimately I think foveated rendering is going to be the magic bullet.

Ultra-high levels of post-processing could be enabled on the focal area without any need for scaling, as the performance saving from not having to post-process the entire frame would be colossal.
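Some rough numbers show why. This is a back-of-envelope sketch; the FOV, foveal angle, and resolution below are made up for illustration, not taken from any HMD spec:

```cpp
// Back-of-envelope for foveated post-processing: how much of the
// frame a small foveal window actually covers. Pinhole projection
// assumed; all figures are illustrative, not from a real HMD.
#include <cmath>
#include <cstdio>

int main() {
    const float displayHalfFovDeg  = 55.0f;  // ~110 degree HMD FOV
    const float fovealHalfAngleDeg = 10.0f;  // generous foveal window
    const int   width = 2160, height = 2160;

    const float pi = 3.14159265f;
    // Screen-space radius of the foveal cone at the view centre.
    float halfW = width * 0.5f;
    float radiusPx = std::tan(fovealHalfAngleDeg * pi / 180.0f) /
                     std::tan(displayHalfFovDeg * pi / 180.0f) * halfW;

    float fovealArea = pi * radiusPx * radiusPx;
    float totalArea  = float(width) * float(height);
    std::printf("foveal region: %.0f px radius, %.1f%% of the frame\n",
                radiusPx, 100.0f * fovealArea / totalArea);
    return 0;
}
```

With those figures the foveal window covers only around 1% of the frame, which is where the "colossal" saving would come from.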
 
Most definitely.
But in essence this is just taking variable resolution to the next level.

I'm hoping the next gen of HMDs, not just the Fove, will have eye tracking and exploit this.
From the rumours coming out of Oculus etc., though, it seems the industry is more focused on bundling tracking into the HMD and going wireless.

But there really isn't a reason why a set of eye-tracking cameras can't simply do their thing. The only thing the system would actually need to report is a vector for where you are looking,
not a high-bandwidth multi-camera feed like my EyeX eye tracker produces.
The cameras could also run at a much lower resolution than desktop eye trackers, since they could hardly be placed more optimally.
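To illustrate how small that application-facing interface could be, here's a hypothetical sketch; none of these names come from a real SDK, and the camera processing is assumed to happen entirely inside the headset:

```cpp
// Hypothetical minimal gaze interface: the HMD runtime does all the
// camera processing internally and only exposes a gaze ray per eye.
// None of these names come from an actual SDK; this just shows how
// little data the application-facing API would need.
#include <array>

struct Vec3 { float x, y, z; };

struct GazeSample {
    Vec3  origin;       // eye position in HMD space
    Vec3  direction;    // unit gaze vector in HMD space
    float confidence;   // 0..1, e.g. low during blinks
    long  timestampUs;  // for prediction / latency compensation
};

// Per-frame query; the multi-camera feed never leaves the headset.
struct GazeTracker {
    virtual std::array<GazeSample, 2> latestGaze() const = 0;
    virtual ~GazeTracker() = default;
};
```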
 
Hmm, the main benefit of foveated rendering is not increased detail. Sure, you could probably throw 2.0x supersampling at the eyepoint attention area.

The main attraction of foveated rendering is being able to reduce detail in the peripheral areas (80-90% of the viewable FOV) and spend that saved performance on the detail area, or on maintaining a high or consistent frame rate.

But you'll still be limited by the screen resolution and, during rendering, by the visible art assets' texture resolution (and the detail in the 3D meshes too).
Maybe it could run 4Kx4K displays at a performance cost only slightly higher than today's displays; that's the idea behind foveated rendering.

It'll be interesting to see how they blend different LODs for high-detail meshes between the eyepoint area and the periphery.
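One straightforward way to do that blend would be to make LOD a function of angular distance from the gaze ray, with a transition band to hide popping. A sketch of the idea, with the angles and blending scheme being my own illustration rather than any known engine's implementation:

```cpp
// Sketch of gaze-driven LOD selection with a smooth blend band between
// the foveal region and the periphery. Angles and the blending scheme
// are made up for illustration; a real engine would fold this into its
// existing distance-based LOD logic.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns a fractional LOD: the integer part picks the mesh, and the
// fraction can drive dithered/alpha blending between the two nearest
// LODs to hide popping at the boundary. 0 = highest detail.
float foveatedLod(const Vec3& gazeDir, const Vec3& toObjectDir,
                  int coarsestLod) {
    // Angular distance from the gaze ray (both vectors unit length).
    float angleDeg = std::acos(std::clamp(dot(gazeDir, toObjectDir),
                                          -1.0f, 1.0f)) * 57.2958f;

    const float fovealDeg = 10.0f;   // full detail inside this cone
    const float blendDeg  = 20.0f;   // fade to coarsest over this band

    float t = std::clamp((angleDeg - fovealDeg) / blendDeg, 0.0f, 1.0f);
    return t * float(coarsestLod);
}
```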
 