Vive Pre - I've got to be doing something wrong. Anyone else?

I'm quite sure FD will improve the IQ. It is slightly disappointing that ED is a bit behind on the Vive, but I'm still looking forward to ED, issues and all. I hope when FD comes up with some fixes everyone raves about how much better it is. It's disheartening to hear how disappointed people are atm.
 

Update: this is what I see with one eye if I press the headset really hard against my face compressing the foam: http://i.imgur.com/IuJJfKi.png

I haven't received my Vive yet, so I can't really know anything - maybe that circle is precisely what the HMD was designed to give us, but I still feel the urge to go: "Dremel time!". :9

(Only half joking. If throwing the warranty on my new, very expensive toy to the wind is what it takes to make it fit me, I will put the hacksaw to it -- heck, it could mean I'd get rid of some of its not insignificant weight as a bonus. :p)
 
Sorry, this doesn't match my test just now. I can see a lot more of the picture than the red circle while wearing the headset normally, and can see most of the intended FOV when squeezing it against my face to compress the foam. I suspect this depends on face shape and other factors, but it's not generally true that it wastes so many rendered pixels.

So you'd say you can see an area between the orange circle of ltx' estimate and the black border at the closest possible proximity to the Vive lens?

EDIT: Just found your statement on Reddit that you're at the orange circle at minimum eye relief. So why all the extra pixels around the sides? Do the two of you have extremely deep-set eyes? Or are those pixels in the undistorted render squashed to almost nothing by applying the barrel distortion for the lenses?

Update: this is what I see with one eye if I press the headset really hard against my face compressing the foam: http://i.imgur.com/IuJJfKi.png

Have a look at this GDC 2015 talk on VR Rendering by Alex Vlachos and skip ahead (or watch it all, it's all good) to the bookmarked 'Stencil Mesh' section. At 58 minutes in, he starts to demonstrate the area of the back buffer that is stencilled out to prevent drawing to pixels on the panel that cannot be seen due to the optics. He says that the stencilled out pixels can be barely seen by "mashing your eye to the Vive lens". At 59:05, he shows the undistorted render target (as given in ltx' image) and masks out the unseeable pixels. These correspond to the black areas in http://i.imgur.com/IuJJfKi.png. Therefore, the rest of the pixels should be visible at minimum eye relief.
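As a rough sanity check on how much stencilling could save (my own arithmetic, not from the talk - and a plain inscribed circle is cruder than Valve's actual hidden-area mesh, which is shaped to the optics):

```python
import math

# Fraction of a per-eye render target outside an inscribed circle,
# i.e. pixels a stencil mesh could skip. Numbers are illustrative.
W, H = 1512, 1680          # 1.4x the Vive's 1080x1200 panel, per eye
r = min(W, H) / 2          # radius of the largest circle that fits
visible = math.pi * r * r / (W * H)
print(f"~{1 - visible:.0%} of rendered pixels fall outside the circle")
```

(If I remember the talk right, Valve's own figure is lower, since their mesh hugs the actual visible region rather than a simple circle.)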

My first question is: what accounts for the difference between the pixels inside our visibility estimates and the rest of the pixels drawn to the undistorted render target? If our estimates are correct, and Vlachos' assertion about what can be seen at minimum eye relief is correct, why render the areas outside the orange circle/maximum estimate at all? Would the consequence of overdraw be a reduced field of view and lower resolution available in the visible area?

Secondly, skip back to '4xMSAA Minimum Quality'. He says this (or 8x) is the gold standard for AA in VR. Cobra is a deferred renderer, and from my minimal understanding of graphics programming, MSAA isn't doable in deferred rendering*. Could not using this "gold standard" for VR AA be the reason for the 'low res ED on Vive' phenomenon?

*There are a few discussions if you google that suggest it is possible, but I'll assume that Cobra follows the orthodoxy.
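Some back-of-envelope numbers on why 4xMSAA is painful for a deferred renderer - every G-buffer attachment has to be multisampled before the lighting pass resolves it. The 16 bytes/pixel layout here is purely an assumption for illustration, not Cobra's actual one:

```python
# Rough G-buffer memory cost at the Vive's default 1.4x render scale.
# The layout (four 4-byte attachments) is an assumed, typical one.
W, H = 1512, 1680
bytes_per_px = 16
for samples in (1, 4):
    mib = W * H * bytes_per_px * samples / 2**20
    print(f"{samples}x MSAA: {mib:.0f} MiB of G-buffer per eye")
```

Bandwidth for writing and reading that buffer scales the same way, which is why most deferred engines reach for post-process AA like SMAA instead.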
 
I suspect that, with the way the Vive's optics are designed to maximize FOV, a good number of ED's rendered pixels and of the OLED display's pixels are wasted for most people because they fall outside the HMD's FOV. This gives the Vive lower pixels per degree than the Rift CV1, both in terms of rendered resolution and of the OLED display.

The rendered-resolution part can be mitigated by increasing the supersampling ratio above 1.4x. ED's anti-aliasing currently isn't great; improving it would go a long way towards improving IQ in the HMD.

ED actually looks great in the Vive with 2.0x supersampling and SMAA, but not even a 980 Ti can sustain 45-90 FPS with these settings.

If SteamVR or ED were to provide an option to render to a smaller viewport, we could run at higher SS/AA settings by not rendering what can't be seen.
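To put rough numbers on that suggestion (assuming, as ED seems to, that the SS factor scales each axis linearly; the ~30% "never visible" figure is my assumption, based on the circle estimates earlier in the thread):

```python
# Pixel cost per frame at various supersampling settings.
base_px = 1512 * 1680 * 2          # both eyes at the default 1.4x scale
for ss in (1.0, 1.4, 2.0):
    print(f"SS {ss}: {base_px * ss * ss / 1e6:.1f} Mpix")
# if ~30% of the target is never visible, cropping it at SS 2.0 saves:
print(f"cropped SS 2.0: {base_px * 4 * 0.7 / 1e6:.1f} Mpix")
```

So cropping would buy back a decent chunk of the quadratic cost of going from 1.4x to 2.0x.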
 
I suspect that, with the way the Vive's optics are designed to maximize FOV, a good number of ED's rendered pixels and of the OLED display's pixels are wasted for most people because they fall outside the HMD's FOV. This gives the Vive lower pixels per degree than the Rift CV1, both in terms of rendered resolution and of the OLED display.

The rendered-resolution part can be mitigated by increasing the supersampling ratio above 1.4x. ED's anti-aliasing currently isn't great; improving it would go a long way towards improving IQ in the HMD.

ED actually looks great in the Vive with 2.0x supersampling and SMAA, but not even a 980 Ti can sustain 45-90 FPS with these settings.

If SteamVR or ED were to provide an option to render to a smaller viewport, we could run at higher SS/AA settings by not rendering what can't be seen.

It doesn't make sense for HTC to design its HMD in a way that screen pixels are wasted (i.e. not visible), either from an engineering or from a business point of view.

There are two major factors affecting how a scene is viewed differently on the Rift and on the Vive. One is, indeed, the FOV and the second is the aberrations (of geometry and colour) of the different lenses.

Regarding the FOV, having the same resolution over a wider (visible) FOV means that, indeed, the pixel density (pixels per degree) will be lower on the Vive compared to the Rift; hence the perceived resolution will be lower. However, both HMDs require rendering with geometric aberrations in mind. Hence, if the game can be rendered to look good on the Rift, the techniques are available to make it look good on the Vive. It is a matter of optimisations in the rendering engine of the game.

Regarding the geometric aberrations, I would expect the SteamVR library to take good care of that. If Frontier have written their own code to warp the image and compensate for the lenses' aberrations, they have to optimize their implementation for each HMD separately.
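For anyone wondering what "warping the image" involves: a minimal sketch of the usual radial lookup, with coefficients in the ballpark of the old Oculus DK1 SDK's published defaults - the Vive's real correction comes from SteamVR's per-device calibration, so treat these numbers as placeholders:

```python
def distort_lookup(px, py, k1=0.22, k2=0.24):
    """For a panel position (normalized, lens centre at 0,0), return the
    render-target coordinate to sample. Points far from the centre sample
    further out, squeezing the image into the barrel shape the lens undoes."""
    r2 = px * px + py * py
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return px * scale, py * scale

print(distort_lookup(0.0, 0.0))  # centre is untouched
print(distort_lookup(0.5, 0.5))  # edges sample well outside the 1:1 region
```

In practice this is done per colour channel with slightly different coefficients, which is how the colour aberrations mentioned above get corrected too.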

I have a very good example of a similar (optimisation) issue. When I first bought a MacBook Pro, I was very surprised and disappointed by how bad Mac OS X looked on my existing, non-Apple, HDMI desktop monitor (1920x1200 pixels). I soon found out that Apple hadn't optimized the fonts and the UI elements of its OS for "Windows-compatible" resolutions! Notably, that particular MacBook Pro runs on a standard Intel i5 CPU with integrated graphics, i.e. the same hardware as a PC. Still, it looks horrible, simply because Apple chose not to optimize its OS to work well with non-Apple hardware. The same goes for its trackpad drivers for Windows: the MacBook Pro trackpad is smooth as silk when scrolling in Mac OS and very jumpy (low resolution) in Windows, on the same machine. Notably, again, the drivers for Windows are also written by Apple; they are simply meant to behave differently in Windows than in Mac OS X.

I hope Frontier doesn't have good reasons not to optimize Elite Dangerous for the HTC Vive. I purchased ED specifically for the HTC Vive and, clearly, it doesn't make sense purchasing any ED upgrades until the game is optimized for the HMD of my choice. Assuming a software solution is technically feasible, I look forward to such a solution arriving soon. If it's a matter of hardware limitations, then it would also make sense for Frontier to explain such limitations in more detail, so as to allow customers to make better informed purchasing decisions.
 
On Reddit they posted that they are working on the issue. They say it is not a bug but has to do with the native FOV of the Vive. Personally I don't believe that; the resolution is way too low to be just a FOV issue. As long as they are looking into the issue I am OK with it, though. Time to play all the nice roomscale experiences and games and come back to ED once they've fixed this issue.
 
I suspect that, with the way the Vive's optics are designed to maximize FOV, a good number of ED's rendered pixels and of the OLED display's pixels are wasted for most people because they fall outside the HMD's FOV. This gives the Vive lower pixels per degree than the Rift CV1, both in terms of rendered resolution and of the OLED display.

The rendered-resolution part can be mitigated by increasing the supersampling ratio above 1.4x. ED's anti-aliasing currently isn't great; improving it would go a long way towards improving IQ in the HMD.

ED actually looks great in the Vive with 2.0x supersampling and SMAA, but not even a 980 Ti can sustain 45-90 FPS with these settings.

If SteamVR or ED were to provide an option to render to a smaller viewport, we could run at higher SS/AA settings by not rendering what can't be seen.

Did you check the Vlachos GDC video I posted? TL;DR: done right, there should be essentially no unseeable pixels - and this talk was given last year, so this should be business as usual.

If we can demonstrate that ED is drawing an excessively wide FOV to its undistorted render target, much of which is being discarded, and the remaining centre area scaled up to fill the displays, then we have found a performance loss and a cause of pixelation.

So we take the negative hypothesis - that ED is not drawing an excessively wide FOV - and attempt to prove it by finding someone who says he/she can see all the way to the edges of http://i.imgur.com/IuJJfKi.png in a Vive.

Right now, all we have is you (/u/6626 on Reddit, right?) and kwx saying you can't see beyond the orange circle, with a lot of unused pixels outside it. This texture is the undistorted 1.4x-scaled render target, which is first given an inverse (barrel) lens distortion, then downscaled to fit the display resolution. If only the ~900px within the visible circle are being sent to the displays, it's being subsampled by a factor of about 0.61 - which is similar to my best approximation of the preview window IQ using subsampling. Now, you may both be blessed with such high levels of testosterone that your brow ridges mean your eyeballs just can't get close enough to the lens. I don't know.

On Reddit they posted that they are working on the issue. They say it is not a bug but has to do with the native FOV of the Vive. Personally I don't believe that; the resolution is way too low to be just a FOV issue. As long as they are looking into the issue I am OK with it, though. Time to play all the nice roomscale experiences and games and come back to ED once they've fixed this issue.

It's hard to collect evidence for this, beyond subjective reports by people who have used both HMDs. My feeling, as someone who has tried neither :D, is that the Vive's FOV is not that much greater than the Rift's, so you'd expect any difference in angular resolution to be minimal - more of a "ho hum, it's different" than an "omg ED is broken on Vive!" reaction, which seems to be the more common one.
 
Doc_Ok did an excellent analysis of Vive's FOV here: http://doc-ok.org/?p=1414

The eyes have to be 8 mm from the lenses to see the Vive's full binocular FOV - 110° x 113° - and take advantage of the entire OLED screen resolution. At lowest eye relief and a comfortable fit, my eyes are probably around 20-25 mm from the lenses, which gives me ~85° vertical FOV, so anything rendered outside that FOV would be wasted on me.

To be able to see the full 113° vertical FOV, the user would almost certainly need to take off the foam and also have child-like eye socket depth. In fact, Vive owners on /r/Vive are already modding their foam to minimize its thickness and get closer to the full FOV offered by the Vive.

Rift CV1's full binocular FOV is 94° x 93° at 12 mm from the lenses.

So with the Vive, even if ED is rendering at 1680 vertical resolution per eye and projecting that across 113° vertical FOV, most people won't be able to see more than 1400 of those pixels in their peripheral vision, and the sweet spot would only be using around 700 pixels.
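A crude check of those numbers, assuming (simplistically) that rendered pixels are spread uniformly across the vertical FOV - real lens mappings concentrate pixels in the centre, so this is only a ballpark:

```python
# How many of the rendered vertical pixels fall inside a reduced FOV,
# under a naive uniform pixels-per-degree assumption.
render_v = int(1200 * 1.4)          # 1680 rendered pixels per eye, vertically
full_fov, my_fov = 113.0, 85.0      # Doc_Ok's figure vs. my eye relief
visible = render_v * my_fov / full_fov
print(f"~{visible:.0f} of {render_v} vertical pixels visible")
```

That lands in the same ballpark as the ~1400 figure above, given how rough the uniform-angle assumption is.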

I think the fix might end up having to be better anti-aliasing options in ED.
 
So with the Vive, even if ED is rendering at 1680 vertical resolution per eye and projecting that across 113° vertical FOV, most people won't be able to see more than 1400 of those pixels in their peripheral vision, and the sweet spot would only be using around 700 pixels.

The OLED panels used by both headsets are only 1080×1200 (x2).
 
I hope that ltx knows that and is talking rather abstractly about 'visible' pixels, i.e. in-FOV supersampled pixels. Otherwise, divide everything by 1.4 and it will make sense in terms of screen pixels.
 
Did you check the Vlachos GDC video I posted? TL;DR: done right, there should be essentially no unseeable pixels - and this talk was given last year, so this should be business as usual.

If we can demonstrate that ED is drawing an excessively wide FOV to its undistorted render target, much of which is being discarded, and the remaining centre area scaled up to fill the displays, then we have found a performance loss and a cause of pixelation.

So we take the negative hypothesis - that ED is not drawing an excessively wide FOV - and attempt to prove it by finding someone who says he/she can see all the way to the edges of http://i.imgur.com/IuJJfKi.png in a Vive.

AFAICT The Lab renders a similarly large FOV.
 
I've also got issues with Vive, and no amount of playing with the graphics settings fixes it. Has anyone actually reported this as a bug? I couldn't find anything in the bug forum, so have raised one with a video of the issue I am having: https://forums.frontier.co.uk/showthread.php?t=243551. If anyone else is having the same issue, please add to the bug.

I only bought the Vive to play E: D. I hope FD can fix this soon.
 
I've also got issues with Vive, and no amount of playing with the graphics settings fixes it. Has anyone actually reported this as a bug? I couldn't find anything in the bug forum, so have raised one with a video of the issue I am having: https://forums.frontier.co.uk/showthread.php?t=243551. If anyone else is having the same issue, please add to the bug.

I only bought the Vive to play E: D. I hope FD can fix this soon.


I'm pretty sure those are heat waves from the ship and buggy, and are an intentional graphical effect.
 
Did you check the Vlachos GDC video I posted? TL;DR: done right, there should be essentially no unseeable pixels - and this talk was given last year, so this should be business as usual.

If we can demonstrate that ED is drawing an excessively wide FOV to its undistorted render target, much of which is being discarded, and the remaining centre area scaled up to fill the displays, then we have found a performance loss and a cause of pixelation.

I've seen the slides for that talk. I don't think Elite is doing anything wrong here.

I added a zoomed view to my album:

[image: vKIxRUZ.png]

As far as I can tell that's a close match to how it looks in the HMD. If done right, the distortion correction should do a roughly 1:1 pixel mapping in the center region, and that seems to be the case. The problem is that there just aren't enough pixels to cleanly render text, especially since the PenTile display means that not all the pixels shown in the image are actually available.

Also see Edit 3 of my Reddit post, it seems that the Vive has almost exactly the same angular resolution (~10 pixels per degree) as the Rift DK2 in the center region. I don't have numbers for the Rift CV1, but I expect it to be a bit higher due to its slightly smaller FOV and more efficient use of the display area with its more square image. Even a small amount of extra resolution would help a lot for text legibility at the small text size used in the UI.
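The ~10 pixels-per-degree figure checks out if you naively average the panel's pixels over the full FOV (the optics put more than that in the centre, so these are averages only):

```python
# Average angular resolution from panel width and horizontal FOV.
vive_ppd = 1080 / 110   # horizontal panel pixels over the Vive's ~110 deg
cv1_ppd = 1080 / 94     # same panel width over the Rift CV1's ~94 deg
print(f"Vive ~{vive_ppd:.1f} px/deg, CV1 ~{cv1_ppd:.1f} px/deg")
```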

Right now, all we have is you (/u/6626 on Reddit, right?) and kwx saying you can't see beyond the orange circle, with a lot of unused pixels outside it. This texture is the undistorted 1.4x-scaled render target, which is first given an inverse (barrel) lens distortion, then downscaled to fit the display resolution. If only the ~900px within the visible circle are being sent to the displays, it's being subsampled by a factor of about 0.61 - which is similar to my best approximation of the preview window IQ using subsampling. Now, you may both be blessed with such high levels of testosterone that your brow ridges mean your eyeballs just can't get close enough to the lens. I don't know.

I can see the orange circle when wearing the HMD normally, and a good bit more when I squeeze the foam hard or remove it entirely. My maximum single eye FOV includes all six air vents, and if I move my eye position around by shifting the headset sideways I can see more outside that. As far as I can tell there aren't any pixels being discarded.

Haven't checked my testosterone levels, but I did make the Neanderthal mod for Google Cardboard ;-)

So I do think that the Vive will only get its full FOV for people with more compatible face shapes since the thick foam padding doesn't let my eyes get close enough. I'm planning to use a VR Cover to make thinner padding as an experiment.

It's hard to collect evidence for this, beyond subjective reports by people who have used both HMDs. My feeling, as someone who has tried neither :D, is that the Vive's FOV is not that much greater than the Rift's, so you'd expect any difference in angular resolution to be minimal - more of a "ho hum, it's different" than an "omg ED is broken on Vive!" reaction, which seems to be the more common one.

I think there are multiple things going on:

  • The "circles on the pre-distortion view" visualization exaggerates the issue a bit. The distortion keeps the center at a 1:1 ratio and shrinks the edges down substantially, so you're not losing as many of the screen pixels as you might think. Also, the projection magnifies angles in the outer region, so a 10 degree step near the border looks much bigger than one in the center. Of course, the actual rendered pixels correspond to the pre-distortion view, so the areas are relevant for determining rendering cost.
  • The Vive's face padding seems to create a too-large eye-to-lens distance for some people due to their face shape; this probably shrinks the sweet spot for a sharp image and reduces FOV. It won't change the angular resolution for the center area, but you'll see fewer pixels overall.
  • Getting a sharp image on the Vive is a bit tricky; it needs to be positioned on the face just right to get the center sharp. I suspect this is especially true for larger eye-to-lens distances.
  • I'm farsighted, and I seem to have a bit more trouble focusing with the Vive than the DK2. I'm not sure what the focal distance is set to (I think it was 1.3m for the DK2), but text looks noticeably clearer if I close one eye. I suggest trying that to see if it makes a difference.
  • Having to hit 90fps required turning off all antialiasing and supersampling (beyond the 1.4x for the pre-distortion view) for all except the fastest GPUs; this causes more chunky pixels and shimmering. If you were playing with antialiasing or supersampling on the DK2, this would make the Vive look like a downgrade.
  • The "VR" presets seem to aggressively reduce model and texture complexity at even moderate distances to speed things up, leading to the "Minecraft effect" complaint that distant ships look bad.

Frontier said they're working on improving antialiasing which should help a lot - as mentioned in the GDC talk, it would be ideal to render the central region with lots of supersampling and keep the outer region at 1:1 or less to concentrate the rendering time where it makes the most difference.
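That central-supersampling idea could look something like this - the radii and sample counts here are invented for illustration, not anything Frontier or Valve have published:

```python
import math

# Sketch: spend samples where the lens actually preserves detail.
def samples_for(nx, ny, inner=0.4, outer=0.85):
    """Pick a per-pixel sample count from normalized coords in [-1, 1]."""
    r = math.hypot(nx, ny)
    if r < inner:
        return 4    # heavily supersample the sharp central region
    if r < outer:
        return 1    # native sampling in the mid-periphery
    return 0        # skip pixels the optics can't show at all

print(samples_for(0.0, 0.0))  # centre gets 4 samples
print(samples_for(0.9, 0.9))  # corners are skipped entirely
```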
 
All this FOV talk - when my friend and I swapped back and forth between the Vive and Rift with the same Eagle ship in Elite Dangerous, sat down, and reset the view, his Rift CV1 had a larger FOV and showed more of the HUD (like out past the mass lock icons when looking forward). We both noticed this. However, I could read the mass lock indicator perfectly clearly on his Rift CV1, and it was a blurry mess on my Vive... even if I moved my head to look at it.

He said he couldn't really notice a difference in display quality, though. But I notice a MASSIVE difference, so I don't know why that is. I have tried every adjustment imaginable with the Vive and I cannot get it to look good. He wears glasses and I do not, if that makes any difference.
 
He said he couldn't really notice a difference in display quality, though. But I notice a MASSIVE difference, so I don't know why that is. I have tried every adjustment imaginable with the Vive and I cannot get it to look good. He wears glasses and I do not, if that makes any difference.

Can you try closing one eye to see if that makes it clearer? I suspect the Vive may have a different focal length, which could cause accommodation-vergence conflict for some people when looking at close UI panels. Maybe also try with glasses or contacts if you have them?
 
Can you try closing one eye to see if that makes it clearer? I suspect the Vive may have a different focal length, which could cause accommodation-vergence conflict for some people when looking at close UI panels. Maybe also try with glasses or contacts if you have them?

I have tried that; it doesn't make it clearer. I don't wear glasses or contacts. I did try removing the foam on the Vive but it didn't make much difference. It's really weird, because the Rift CV1 looks excellent to me, and that was with the IPD way off and nothing even adjusted - I just threw it on and was like, WOW, this is how it should look?
 