So... Whilst the game is perfectly playable in VR today (EDIT: performance-wise... including on foot with the vanity camera), IMHO even doubling performance is not going to come anywhere near making proper use of even current headsets -- it takes rendering something like 30 pixels per degree (PPD) of field of view before Odyssey planet terrain layers begin to look like they remotely fit together and stop looking splotchy, regardless of the resolution of one's monitor or headset, and that higher rendered resolution looks much better on them even if they themselves are nowhere near those 30 PPD...
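(For a rough sense of scale -- back-of-the-envelope only, with illustrative headset numbers rather than any specific model's specs: average PPD is roughly horizontal pixels per eye divided by horizontal FOV per eye, ignoring how the lenses redistribute it across the view.)

    # Crude average pixels-per-degree estimate; the per-eye pixel counts and FOVs
    # below are illustrative, not the specs of any particular headset.
    for name, px_wide, fov_deg in [("~1.8K-per-eye HMD", 1832, 97), ("~2.2K-per-eye HMD", 2160, 105)]:
        print(f"{name}: about {px_wide / fov_deg:.0f} PPD on average, vs the ~30 PPD above")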
I too constantly have that experience with a vista that is stunning in VR coming out flat and unremarkable on a screenshot -- it is the exact same view, framed by where I look, with the same lighting and everything -- it's just lost all depth and "life"...
(Does anybody else have the impression, by the way, that Update 12 changed some detail texture bias for the worse? There has long been that sharp line between the circle around you where textures are more detailed and everything much blander beyond it, but it feels worse than ever now -- like the farther terrain drops down not just one MIP level (...and maybe one LOD, too), but two or three...)
I am also pretty sure a not insignificant contributor to the nausea many players feel, especially when driving the SRV, is low frame rate -- even when synthetic frames fill in the blanks; those are simply not good enough, and often work against their purpose, because they warp your view, even if you've learned to overlook the artefacts or honestly can't consciously tell the difference. It would be nice if we could get full-rate true frames for headsets with 120Hz and higher refresh rates, and to have both that and the higher render resolution... Yeah... Bring on the 400090Ti... :/
I'm of the opinion game graphics needs to retire rasterisation entirely, and switch to full raytracing as soon as performance levels allow - no hybrid half-measures.
Raytracing is much, much more computationally heavy, but it also brings with it a number of possibilities where performance is concerned...
Given that it renders per pixel, it can be parallelised to a great degree, and if the competitive market wasn't what it is, I would really have liked to see an industry standard for distributed rendering -- doing multi-GPU properly this time. That would let you have e.g. a "main" graphics card, which has all the outputs, rasterisation cores for legacy titles, and stuff like that -- maybe an on-board SSD for caching -- and then as many supporting render-farm cards as you can afford and fit (possibly via a dedicated network bus rather than over PCIe), which have only raytracing cores, whose scene data caches can be written to in parallel, and which are optimised for performance per watt instead of trying to squeeze that last tiny drop of juice out of them at a ridiculous cost in waste heat.
These cards would be assigned work by the managing software, based on their individual capabilities, and they could be any mix of manufacturers and hardware generations.
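Purely to illustrate the idea (hypothetical names and numbers -- nothing like this exists today): the managing software could benchmark each installed card at startup and hand out frame tiles in proportion to what each one can actually push, something like:

    # Hypothetical work scheduler: split the frame into tiles and give each card a
    # share proportional to its measured ray throughput. All names and rates are made up.
    def assign_tiles(tile_count, cards):
        # cards: list of (name, rays_per_second) from a startup benchmark of each card
        total_rate = sum(rate for _, rate in cards)
        assignments, next_tile = {}, 0
        for i, (name, rate) in enumerate(cards):
            share = tile_count - next_tile if i == len(cards) - 1 else round(tile_count * rate / total_rate)
            assignments[name] = list(range(next_tile, next_tile + share))
            next_tile += share
        return assignments

    print(assign_tiles(64, [("main_card", 9e9), ("farm_card_a", 6e9), ("farm_card_b", 2e9)]))

A mix of manufacturers and hardware generations would just show up as different benchmark numbers to the scheduler.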
Yeah... We're never going to see that sort of co-operation in the GPU market...
The same per-pixel nature that would enable the above would also allow a lot of optimisations to be done per pixel.
A problem with current realtime 3D graphics is that the viewplane is always a rectangle -- a single flat plane -- and that becomes more and more of a problem the wider you make the field of view, due to simple geometry: an object that takes up one degree of your field of view covers a lot more pixels out in the far periphery than it does right in the centre of the screen, and the cost grows with the tangent, not linearly, heading towards infinity at 90 degrees from straight ahead.
This could be eliminated with some forethought by the renderer programmer, who could cast rays for pixels by the degree, rather than at equidistant sample points on a flat viewplane -- in effect making the viewplane logarithmic, or cylindrical, or spherical, or parabolic, or any other shape. That would eliminate a ton of work (and buffer VRAM waste along with it) that is currently unnecessary -- and the wider the FOV, the more of it there is. Triple-monitor players could probably see a noticeable performance benefit from this, and not only the VR heads.
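A tiny sketch of the geometry (Python, nothing Elite-specific assumed): sample a 160-degree horizontal FOV across 2000 columns the flat-viewplane way, then measure how many degrees each column actually covers at the centre versus at the edge, compared to simply stepping by equal angles.

    import math

    def flat_plane_angles(n_px, fov_deg):
        # Standard flat viewplane: equidistant sample points, converted back to view angles.
        half = math.tan(math.radians(fov_deg / 2))
        return [math.degrees(math.atan((2 * i / (n_px - 1) - 1) * half)) for i in range(n_px)]

    angles = flat_plane_angles(2000, 160)
    print(f"centre: {angles[1001] - angles[1000]:.3f} deg per pixel column")
    print(f"edge:   {angles[-1] - angles[-2]:.3f} deg per pixel column")
    # Casting by equal angles instead would give 160/1999 = 0.080 deg per column everywhere,
    # i.e. no pile-up of pixels in the periphery.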
Rays could also be cast in passes, each individual sample adding detail to the final image, right up until the deadline for a given frame rate target, at which point rendering could be cut off at will and the frame constructed out of every bit that has been traced so far, in time for its targeted screen refresh. This would make graphics quality inherently scale, dynamically and fine-grained, with the requested frame rate, the scene complexity, and the hardware performance.
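In rough Python (cast_ray and the batch size are stand-ins, and a real renderer would use a proper low-discrepancy sample pattern rather than random picks), that loop would look something like:

    import random, time

    def render_frame(cast_ray, width, height, deadline_s):
        # Accumulate samples in passes until the frame's deadline, then ship whatever exists.
        accum = [[(0.0, 0)] * width for _ in range(height)]   # (summed colour, sample count)
        start = time.monotonic()
        while time.monotonic() - start < deadline_s:
            for _ in range(4096):                             # one small batch per deadline check
                x, y = random.randrange(width), random.randrange(height)
                total, n = accum[y][x]
                accum[y][x] = (total + cast_ray(x, y), n + 1)
        # Pixels with no samples yet would, in practice, be filled from neighbours or the
        # previous frame; here they just come out black.
        return [[total / n if n else 0.0 for total, n in row] for row in accum]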
...and then you've got foveation. The human eye only sees sharply in a tiny part of its field of view, the part that projects onto a small patch of the retina where the cone-type photoreceptors are packed markedly denser than elsewhere. If you have sufficiently fast and accurate eyetracking (...which is expected from most upcoming HMDs), you could significantly reduce render resolution for everything outside the small portion of the frame that the eye can see in high detail, and it wouldn't bother the user greatly.
Even Tobii-equipped monitor players could benefit from this, if their screens are large enough, and they sit close enough to them.
(Large swathes of screen area that have on previous frames been e.g. all black could possibly be considered for rendering at a lower resolution, too.)
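As a crude illustration of the falloff (every number here is made up for the example, not taken from any perceptual study or headset spec):

    import math

    def rays_per_pixel(pixel_dir_deg, gaze_dir_deg, fovea_radius_deg=2.0, full_quality=4):
        # pixel_dir_deg / gaze_dir_deg: (yaw, pitch) view angles in degrees.
        # Full sample count inside a small patch around the tracked gaze direction,
        # then a gradual falloff with angular distance from it.
        d = math.hypot(pixel_dir_deg[0] - gaze_dir_deg[0], pixel_dir_deg[1] - gaze_dir_deg[1])
        if d <= fovea_radius_deg:
            return full_quality
        return max(1, int(full_quality / (1 + (d - fovea_radius_deg) / 5)))

    # Looking straight ahead: full quality at the gaze point, one sample far out in the periphery.
    print(rays_per_pixel((0, 0), (0, 0)), rays_per_pixel((40, 10), (0, 0)))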
There are a pair of additional factors that have to do with the lenses in (current) VR headsets. Sample density could also be tuned to how they distort the image, optimising further -- they tend to compress the image toward their centres (so-called "pincushion" distortion), which is why games typically render frames about 1.4 times larger than the HMD screens' native resolution: you render at the higher effective ocular resolution the lenses give you right down their axes, so as to make use of it, but with rasterisation you unfortunately have to render that same viewplane resolution for the entire frame, including the periphery, which the lenses stretch out (...although it is possible to skip pixels in the fragment shaders and interpolate them instead). That is more wasted effort, which could be eliminated by redistributing rendering density across the field of view -- in this case matching the distortion profile of the lens.
Finally... This lens distortion... It could be accounted for from the get-go, right when casting each ray from the camera, so that the frame comes out of the rendering process already with the barrel distortion that counters the opposite distortion of the lens, removing the need to apply this distortion as a post effect.
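A minimal sketch of that last idea, assuming a simple radial-polynomial lens model (the k1/k2 coefficients and the angle scale below are placeholders, not any real headset's profile): for each physical screen pixel, work out the visual angle the lens will actually present it at, and cast that pixel's ray in that direction -- the counter-distortion is then baked into the render instead of being a separate warp pass.

    import math

    K1, K2 = 0.22, 0.24       # placeholder radial distortion coefficients, not a real lens profile
    DEG_PER_UNIT_R = 45.0     # placeholder scale from distorted normalised radius to visual degrees

    def ray_direction_for_pixel(px, py, cx, cy):
        # Normalised distance of this screen pixel from the lens axis (r = 1 at the screen edge).
        r = math.hypot(px - cx, py - cy) / cx
        # Radial polynomial model of where the lens actually places the pixel in the visual field.
        r_seen = r * (1 + K1 * r**2 + K2 * r**4)
        theta = r_seen * DEG_PER_UNIT_R        # angle off the lens axis, in degrees
        phi = math.atan2(py - cy, px - cx)     # direction around the axis is unchanged
        return theta, phi                      # cast this pixel's ray along (theta, phi)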