VR works in camera mode on foot, so…

I don’t understand why, when I enter camera mode on foot, it reverts from a 2d screen to true VR.
I can run around in VR, I can shoot in VR.
But to interact with anything else I have to go flat screen.
Can someone (FDev) please explain why this is such a big leap that space legs VR is not supported? I really don’t understand this, when the camera function works so well in VR.
 
It's a political decision. Some day a dev may implement a secret switch to turn the whole thing off, for the sake of testing or whatever, but FD certainly won't be the ones to blame.

Could be their engine works better than thought.

O7,
🙃
 

I'm as baffled by this as you are and started a near-identical thread a few weeks ago, headed:

"We know full VR is already there for legs in EDO so why not let us use it and label it as an indefinite beta?"

No response from the Devs to my effort but I hope yours elicits some reaction.
 
all you can do is ask the community managers nicely....

There are three possibilities as I see it:
  • Frontier as an organisation simply refuses to engage with or acknowledge the topic of VR any more, and just wants the problem to go away, with the CMs being directed as such. There may be good reasons, but they don't want to say anything further
  • The community managers themselves have decided to exclude or annex the VR player base from 'the Odyssey community'
  • The CMs know it's not being worked on, there is no one they can go to internally for Q&A, and therefore nothing they can communicate, and they are hoping the problem goes away
 
all you can do is ask the community managers nicely....

There are three possibilities as I see it:
  • Frontier as an organisation simply refuses to engage with or acknowledge the topic of VR any more, and just wants the problem to go away, with the CMs being directed as such. There may be good reasons, but they don't want to say anything further
  • The community managers themselves have decided to exclude or annex the VR player base from 'the Odyssey community'
  • The CMs know it's not being worked on, there is no one they can go to internally for Q&A, and therefore nothing they can communicate, and they are hoping the problem goes away
Instead of asking the CMs nicely (and I'm a great advocate for asking like an adult and not being one of those sorts (you know the type!) of VR users), you could always use the forum search function and check for VR posts by them. Then you might read things like this:
https://forums.frontier.co.uk/threads/returning-to-edo-but-please-frontier.593061/post-9610695
👍
 
I don’t understand why, when I enter camera mode on foot, it reverts from a 2d screen to true VR.
I can run around in VR, I can shoot in VR.
But to interact with anything else I have to go flat screen.
Can someone (FDev) please explain why this is such a big leap that space legs VR is not supported? I really don’t understand this, when the camera function works so well in VR.
Now, that's interesting. I haven't played in a while so I never knew that it reverted back to VR in camera mode...

For a long time now, I've believed that Frontier operates in a "reactive" mode rather than a "proactive" one, i.e. fix the problem at hand and forget the long-term implications for the game. This, to me, is a pretty clear example of that.

I suspect the answer to your question is that Frontier is so overloaded with priorities that they don't have time to even look at the long-term implications of their choices.

My two cents, anyway.
 
Considering the performance issues, flat needs to be an option at the very least, perhaps the default. And in that case there's little point in coding up any VR mechanics when there aren't many machines it would be playable on.
 
Following the link provided by Arioch, I think our answer is here:
We want to get to you all, but we can't just flip flop the team from one priority to the other, that's not how development works efficiently at all and would burn the entire department out - we need to focus the work in the most optimal order for ALL OF YOU.

Problem is, this appears, in my view, to have been a problem for some time. To me, it felt like it began sometime around Horizons, and is why Beyond eventually became a thing. I believe it was triggered when Frontier began developing other games: developers got stretched thin, forcing them to prioritize many things.

Again, my humble opinion...
 
Yeah, that doesn’t really tell us anything, and it certainly doesn’t answer the question I asked:
Can someone (FDev) please explain why this is such a big leap that space legs VR is not supported? I really don’t understand this, when the camera function works so well in VR.
I’m not FDev and I’m certainly not a programmer, but I can guess getting on-foot VR to work is not just a simple case of slapping the UI on the vanity cam and voilà! 😁

Any code work takes time & people - and we’ve been told the priority is solving the issues for the majority of the players. We VR users are a small part of the active user base (I’d guess somewhere in the single-digit percentages), and those of us who’d like the “simple” solution of the vanity-cam-with-UI are most likely an even smaller part.

One of the first things we were told (over a year ago) was:
…we do strongly believe that VR should only be enabled for on foot gameplay when we have an experience that truly matches the same quality bar that we set for cockpits.
…and further messages from the CMs show that current bugs and issues are being worked on or discussed. With the last post from Sally, and the comments regarding the FSS VR issue, I think it’s clear that we VR users had better not hold our breath for anything other than bug fixes until the Odyssey console releases are out of the door.
 
The problem is, Arioch, that Odyssey already has VR in it and the fact that you can switch back to it in camera mode proves that.

The question becomes: why couldn't they implement it properly in this version of the game? What "obstacle" prevented them from adding VR to game that already has it? This question becomes even more curious when you consider that VR can be hacked into old games like Jedi Academy and even modern games that don't have it, like Kingdom Come: Deliverance, for starters.
 
The problem is, Arioch, that Odyssey already has VR in it and the fact that you can switch back to it in camera mode proves that.

The question becomes: why couldn't they implement it properly in this version of the game? What "obstacle" prevented them from adding VR to game that already has it? This question becomes even more curious when you consider that VR can be hacked into old games like Jedi Academy and even modern games that don't have it, like Kingdom Come: Deliverance, for starters.
Aye, it’s great to be able to step into some old games - but like I said:
I’m not FDev and I’m certainly not a programmer, but I can guess getting on-foot VR to work is not just a simple case of slapping the UI on the vanity cam and voilà! 😁
That the vanity cam is in full VR is a good sign - but as happens with other jobs, the last 20% of the work can take 80% of the time 😁

People are free to keep asking, but I think we’ve already had the answer: FDev want to do quality implementation for on-foot VR and the flatscreen game takes priority.

I think the “obstacle” was time & cost - there’s simply just not enough of us to justify anything beyond the virtual flatscreen we got.
 
So... Or, actually: let me begin this post by reiterating my pledge from back when it was found out that there would be no VR in Odyssey (before that stance from FDev was either partially backpedalled, or clarified, into its current state, i.e. retained VR in vehicles, and a virtual cinema screen whilst on foot). Very much recognising the limited size of the VR audience, and of resources, I for one would be perfectly happy to pay a bit of a "VR tax", to help make up for the arguably disproportionate costs (monetary and other) involved - this could include the assignment of a dedicated engineer, maybe full-time.

...so there are a few things to say about performance and VR...

I am currently playing the game on a Valve Index (ca. 15 pixels per degree of field of view), with the SteamVR resolution slider cranked all the way up to 500%, which slightly more than doubles the linear size of the frames the game renders (×2.236 in both height and width) - so say rendering about 30 ppd, instead of the 15 ppd of the device, in the centre of the per-eye view (down the axis of the lens).
(For headsets with other resolutions and per-eye fields of view, you'd obviously use other supersampling factors, to reach the same angular render resolution.)
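As a sanity check of those numbers, here is a little sketch of the arithmetic (plain maths, no SteamVR API involved - the function names are made up for illustration; the key point is that the slider scales pixel *count*, so the per-axis factor is its square root):

```python
import math

def linear_scale(slider_percent: float) -> float:
    """SteamVR's slider scales total pixel count, so the linear
    (per-axis) scale factor is the square root of it."""
    return math.sqrt(slider_percent / 100.0)

def effective_ppd(device_ppd: float, slider_percent: float) -> float:
    """Approximate pixels per degree at the lens centre after supersampling."""
    return device_ppd * linear_scale(slider_percent)

def fragment_workload(slider_percent: float) -> float:
    """Fragment-shader workload scales with total pixel count."""
    return slider_percent / 100.0

# Valve Index: ~15 ppd natively, slider at 500%
print(round(linear_scale(500), 3))       # 2.236 per axis
print(round(effective_ppd(15, 500), 1))  # ~33.5 ppd at the centre
print(fragment_workload(500))            # 5x the fragment work
```

Note how doubling both X and Y (slider at 400%) would already mean four times the fragment work - which is why the frame rates collapse the way they do.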

Whilst this leaves me with single-digit frame rates whilst on foot (on a 1080 Ti - but be aware the current high end is really only around twice as powerful, and keep in mind that doubling both the X and Y resolution means four times the fragment shader workload), it has quite a dramatic effect on the appearance of planets. It is not just that rendering at higher resolution constitutes antialiasing; it will also make the game generate higher-detail terrain (for both landables and spheremapped planets), and pull in higher LODs and mipmaps for premade models, to match the doubled render resolution. All this extra detail and spatial fidelity then becomes incorporated into the values of the pixels that make up the lower-resolution output frames at the native screen resolution of your device. The view becomes more detailed, "stable", and "solid-feeling"; sub-pixel panning comes across less like shimmering when a feature in view transitions between neighbouring pixels.

Obviously (as explained) this is not something that can be reproduced using upscaling -- not even "smart" upscaling like DLSS or FSR -- only actual detail can be actual detail.

(I'm kind of wondering whether there could be room for a new mipmap format, and associated filtering method, that could somehow encode sub-pixel shifts, to make the smaller mipmap feel more "alive" so to speak - perhaps not entirely unlike the Anisotropic Filtering situation...)


Sooooo (here's hoping I am about to finally get to the point)...

There are a number of performance-promoting techniques that are particularly useful with VR - more and more so, the higher the resolution and FOV of the device in use. Many of them would actually benefit players using multiple-monitor setups as well - there is no way the stretched views going to the side monitors need to be rendered at their extreme oblique resolutions.

First of all, single- versus multiport rendering: due to the way in which projections for views of the game world are done in games, basic trigonometry gives that the farther away from the centre of the view you go, the more pixels you render per degree of field of view - and this goes up with the tangent, not linearly; at 180°, the viewplane becomes infinitely wide.
To render a viewport wider than 180°, we cannot use a single flat viewplane. If we cannot make it curved, we can at least split it up into smaller, still flat and rectangular, segments - "fold it", as it were - the more segments, the less rendering work wasted on things stretching toward the far edges. There are ways to do this both in the form of completely separate viewports for each segment, which takes setup work for each, and in ways that do it all in one pass, but the latter may lead you into vendor-proprietary APIs...
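The tangent relation above can be made concrete with a couple of lines of arithmetic (a hypothetical sketch, measuring viewplane width in units of focal length - position on a flat viewplane goes as tan(θ), so pixel density per degree grows as sec²(θ)):

```python
import math

def density_ratio(half_fov_deg: float) -> float:
    """On a flat viewplane, position is f*tan(theta), so pixel density
    per degree grows as sec^2(theta). Ratio of edge density to centre."""
    t = math.radians(half_fov_deg)
    return 1.0 / math.cos(t) ** 2

def plane_width(full_fov_deg: float, segments: int = 1) -> float:
    """Total viewplane width (in focal-length units) for a given FOV,
    split into equal flat segments. More segments = a 'folded' plane,
    wasting less area on the stretched edges."""
    half_per_segment = math.radians(full_fov_deg / 2) / segments
    return 2 * segments * math.tan(half_per_segment)

print(round(density_ratio(45), 2))    # 90 deg FOV: edge pixels 2x denser than centre
print(round(density_ratio(85), 1))    # 170 deg FOV: edge ~131.6x denser - pure waste
print(round(plane_width(160, 1), 2))  # one flat plane for 160 deg: 11.34
print(round(plane_width(160, 4), 2))  # folded into 4 segments: 2.91
```

The single-plane case renders almost four times the pixels of the four-segment fold for the same 160° of view, which is the whole argument for multiport rendering.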

Then we have foveation and lens-matching. Let's begin with the latter: the lenses used in headsets so far typically produce a "fish-eye" kind of distortion, which magnifies things in the middle of the view through the lens and compresses things toward the rim. Combine this with the stretching toward the edges of the game camera projection mentioned in the previous section, as well as the field curvature of many headset lenses causing things to blur toward the edges, and you'll notice that an inordinate amount of GPU work is being wasted on the parts of the view where it does the least good.

Now add foveation on top of the above... The distribution of cone type photoreceptors on the human retina is not uniform - for the most part they are fairly sparse, and the lion's share of them are concentrated in a tiny area called the "fovea", which covers a narrow cone of vision right where you are directing your gaze, and full resolution is only strictly necessary for anything within that cone - everything outside could be rendered at lower resolution, to relatively little annoyance (image fidelity does remain an issue).
You could simply apply this in a fixed place, aligned with the centre of the lens, but it is best if the foveation area can move around with your wandering gaze - many upcoming HMDs will have eyetracking as a standard feature.
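The idea can be sketched as a toy shading-rate picker (nothing here is FDev's or any vendor's API - just hypothetical names and a crude square-FOV model to show how much work foveation can shed):

```python
import math

def shading_rate(angle_from_gaze_deg: float,
                 fovea_deg: float = 5.0,
                 mid_deg: float = 20.0) -> int:
    """Toy foveated shading-rate picker: returns the side length of the
    coarse-shading block (1 = full rate inside the foveal cone)."""
    if angle_from_gaze_deg <= fovea_deg:
        return 1   # full resolution where the gaze lands
    if angle_from_gaze_deg <= mid_deg:
        return 2   # 2x2 blocks in the near periphery
    return 4       # 4x4 blocks in the far periphery

def workload_fraction(fov_deg: float = 100.0, step: float = 1.0) -> float:
    """Fraction of fragment work remaining after foveation, over a square
    FOV centred on the gaze (treating angle as linear across the view -
    a rough model, but good enough to show the order of magnitude)."""
    half = fov_deg / 2
    full = saved = 0.0
    a = -half
    while a <= half:
        b = -half
        while b <= half:
            r = math.hypot(a, b)          # angular distance from gaze
            full += 1.0
            saved += 1.0 / shading_rate(r) ** 2
            b += step
        a += step
    return saved / full

print(round(workload_fraction(), 3))  # roughly a tenth of the full-rate work
```

Under these (made-up) cone sizes, something like 90% of the fragment work disappears - which is the headroom that could pay for heavy foveal supersampling.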

Foveation could, as it happens, even benefit players on monitor, who have a Tobii eyetracker, or equivalent, provided the size of said monitor, and the distance between it and the viewer, is such that it fills enough of the viewer's FOV.

These things, and some others, could very well leave us being able to play the game with massive supersampling for the fovea, and still get decent framerates, thanks to the elimination of wasteful rendering.


So... rendering selectively at lower resolution... How does one do that? There are a few ways. If one had multiple viewports, each could be its own arbitrary resolution, of course - but how do you vary resolution within a single viewplane?
Well, you could use a shader mask to make your fragment shaders skip given pixels, and then use a post shader to interpolate values for those skipped pixels from their shaded neighbours. NVIDIA offers something ready-made that does this, in a few discrete steps, dubbed "Variable Rate Shading", but that is another of their proprietary functions, made to work only on their own GPUs - and only certain generations of their own GPUs, to boot.
Unfortunately, a wide-FOV viewplane which is workload-reduced in such a manner still uses up its full frame buffers - you save no video RAM...
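The mask-and-interpolate idea can be illustrated with a toy CPU sketch (this is just the principle, not NVIDIA's actual VRS implementation - the `shade` function is a stand-in for a fragment shader):

```python
def shade(x: int, y: int) -> float:
    """Stand-in for a fragment shader: some smooth function of position."""
    return (x * 0.1 + y * 0.2) % 1.0

def render_masked(w: int, h: int):
    """Shade only pixels where (x+y) is even (a checkerboard mask), then
    reconstruct the skipped pixels from their shaded neighbours - halving
    the shader invocations while still allocating the full frame buffer."""
    img = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if (x + y) % 2 == 0:           # mask pass: shade half the pixels
                img[y][x] = shade(x, y)
    for y in range(h):
        for x in range(w):
            if img[y][x] is None:          # post pass: fill in the rest
                nbrs = [img[y + dy][x + dx]
                        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= x + dx < w and 0 <= y + dy < h]
                img[y][x] = sum(nbrs) / len(nbrs)
    return img

img = render_masked(8, 8)
# Half the fragment-shader work; the full 8x8 buffer is still allocated -
# exactly the "no video RAM saved" trade-off noted above.
```

In a real renderer the mask would follow the foveation/lens-matching map rather than a fixed checkerboard, but the shade-then-interpolate structure is the same.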


And at this point, this: these sorts of shader-level solutions are really things that should be kept in mind from the conceptualisation stage of every shader - working them out later means you'll have to tear up all your work to retrofit low-level functionality.

...and this does not only go for such things as render masking. If you have several viewplanes pointing in different directions, many shaders need to take this into consideration. Already today, ambient occlusion does not work properly in the game for any headset that does not have perfectly symmetrical per-eye FOV - the lighting becomes different for the left and right eye views, and e.g. the night-vision effect rolls along with your head. There are several things like this.
I feel pretty certain the visor reflections to the sides when on foot, as well as the condensation that forms inside, are likewise done in buffer space, these two possibly sharing a single shader. (I'd say just turn this stuff off: I'm not convinced we should have such reflections/refractions of the outside on the inside of a helmet to begin with, and the condensation might prove uncomfortable even if mapped to actual visor geometry, given that current headsets are fixed focal depth (usually 1-2 m away), with the visor just an inch in front of you but still sharing that 1-2 m focal distance with the rest of the view. That said, another thing that should be prepared for is future headsets which can reproduce depth of field, and other things like HDR and a larger colour gamut :9)

...and finally: any VR work from now on should interface with OpenXR. The older proprietary APIs from Oculus, Valve, Microsoft, and a few smaller players are deprecated in favour of the new common industry standard.
 