Vive Pre - I've got to be doing something wrong. Anyone else?

I have tried that; it doesn't make it clearer. I don't wear glasses or contacts. I did try removing the foam on the Vive but it didn't make much difference. It's really weird, because the CV1 Rift looks excellent to me, and that was with the IPD way off and nothing adjusted - I just threw it on and was like WOW, this is how it should look.

Odd. Just for completeness, is your Vive set to closest eye relief? That's the pair of dials you adjust by pulling out the strap joint. I assume you've also adjusted the IPD dial?
 
Odd. Just for completeness, is your Vive set to closest eye relief? That's the pair of dials you adjust by pulling out the strap joint. I assume you've also adjusted the IPD dial?

Yes, my IPD is about 60, and that's what my Vive is set to. Yes, the side dials are all the way in; I've tried everything on those too. I've tried tilting the headset, positioning it differently on my face, changing the foam, removing the foam - really, nothing makes any change. If I start pulling the headset really far from my face it gets worse; other than that it just looks blurry all the time. My friend's Rift is set to his IPD of 70, and when I threw his headset on and forgot to change the IPD to 60 it was still way better than my Vive.
 
Two things I've noticed. At the extremities, with a quick side glance I cannot see the boundaries of each SYS pip. Even when I look directly at them it's still only as clear as the DK2.
Also, lights on the SS ping a little, but wow, central-vision combat and chaff is unreal.
 
Frontier said they're working on improving antialiasing which should help a lot - as mentioned in the GDC talk, it would be ideal to render the central region with lots of supersampling and keep the outer region at 1:1 or less to concentrate the rendering time where it makes the most difference.

A simple option might be to increase the masking to cover more of the outer FoV? It's not visible for most people anyway...

In terms of the text quality, it's worth noting that OpenVR doesn't have an equivalent of Oculus' 'layered rendering'. This (in theory) allows a developer to render the UI at a higher resolution, and the compositor downsamples and merges it into the final headset image.
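
For illustration, here's roughly what that looks like against LibOVR 1.x (a minimal sketch from memory, not Frontier's code - check the SDK headers for exact names; the texture size and quad placement are made-up values):

    #include <OVR_CAPI.h>  // LibOVR 1.x

    // Submit the 3D scene plus a separate high-resolution UI quad. The
    // compositor samples the UI texture directly during distortion, so the
    // text never gets squeezed through the lower-resolution eye buffers.
    void SubmitFrameWithUiLayer(ovrSession session, long long frameIndex,
                                ovrLayerEyeFov& sceneLayer,      // eye buffers, filled in as usual
                                ovrTextureSwapChain uiSwapChain) // e.g. a 2048x2048 UI texture
    {
        ovrLayerQuad uiLayer = {};
        uiLayer.Header.Type  = ovrLayerType_Quad;
        uiLayer.Header.Flags = ovrLayerFlag_HighQuality; // extra filtering in the compositor
        uiLayer.ColorTexture = uiSwapChain;
        uiLayer.Viewport     = { {0, 0}, {2048, 2048} };
        uiLayer.QuadPoseCenter.Orientation = { 0, 0, 0, 1 };
        uiLayer.QuadPoseCenter.Position    = { 0.0f, 0.0f, -1.0f }; // 1m in front of the viewer
        uiLayer.QuadSize     = { 1.0f, 1.0f };                      // quad dimensions in metres

        ovrLayerHeader* layers[] = { &sceneLayer.Header, &uiLayer.Header };
        ovr_SubmitFrame(session, frameIndex, nullptr, layers, 2);
    }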
 
Here's a camera view of what the pixels look like. This is placed straight ahead after a position reset. It should roughly match my earlier test image.

[Image: http://i.imgur.com/9FJAIwH.jpg]

I think it looks fairly consistent with 1:1 pixel mapping on a pentile display, but it's hard to tell for sure.

Edit: I think this may be very relevant to what's going on. I get the impression that the current rendering ignores the pentile display's subpixel layout, so that you get pixels disappearing and reappearing as they move across the grid. If that's true, it should be possible to get better visual results by ensuring that colors get mapped to the closest subpixel that can display them.

Edit 2: Not sure I'm entirely convinced by my interpretation. It's hard to tell without a direct side-by-side comparison, preferably with a test pattern. Anyway, I think it's plausible that improvements may be possible by more efficiently tuning the rendering / distortion to the display subpixels.

Edit 3: Squinting at http://i.imgur.com/9FJAIwH.jpg and http://i.imgur.com/vKIxRUZ.png (not exactly the same view, but close), it does look like a 1:1 mapping in that the fuel gauge bar is about 9 pixels high in both images. There aren't any gaps in the orange line even though it's just one pixel thick in the input image, so the rendering does seem to use a subpixel-aware method to avoid losing thin structures. However, note that the 1-pixel line tends to light up 2-3 pixels vertically, so part of the issue may be that it's fairly aggressively blurring the image to avoid pentile artifacts.
 
I've seen the slides for that talk. I don't think Elite is doing anything wrong here.

<snip>

Also see Edit 3 of my Reddit post, it seems that the Vive has almost exactly the same angular resolution (~10 pixels per degree) as the Rift DK2 in the center region. I don't have numbers for the Rift CV1, but I expect it to be a bit higher due to its slightly smaller FOV and more efficient use of the display area with its more square image. Even a small amount of extra resolution would help a lot for text legibility at the small text size used in the UI.

<snip>

I can see the orange circle when wearing the HMD normally, and a good bit more when I squeeze the foam hard or remove it entirely. My maximum single eye FOV includes all six air vents, and if I move my eye position around by shifting the headset sideways I can see more outside that. As far as I can tell there aren't any pixels being discarded.

Haven't checked my testosterone levels, but I did make the Neanderthal mod for Google Cardboard ;-)

So I do think that the Vive will only get its full FOV for people with more compatible face shapes since the thick foam padding doesn't let my eyes get close enough. I'm planning to use a VR Cover to make thinner padding as an experiment.

That makes sense, and it's good to hear that there isn't significant overdraw going on.

Two tricks I would use in FD's shoes:

  • Have a 'render area' slider that reduces the render area to the minimum visible at a user's desired eye relief. This would free up performance for AA in the visible area.
  • Decrease (or make configurable) the radius of the cylinder that the cockpit UI is wrapped around, so that the target information, power distribution and side UI panels appear perpendicular to the viewer when centred. This will bring them closer and, by not drawing at an angle, further increase the horizontal resolution available for text (see the sketch just below). They might not coincide with the sweep of the cockpit model any longer, but it's worth it for usability.
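
Here's the geometry behind the second suggestion (a sketch with hypothetical names, not FD's actual code):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Place a UI panel on a cylinder of radius r centred on the pilot's head.
    // theta is the panel's angle around the vertical axis (0 = straight ahead,
    // -z forward, y up). Turning the panel by the same angle keeps its surface
    // perpendicular to the line of sight when the player looks straight at it,
    // so text isn't foreshortened and covers more horizontal pixels.
    void PlacePanelOnCylinder(float theta, float r, Vec3& outPos, float& outYaw)
    {
        outPos = { r * std::sin(theta), 0.0f, -r * std::cos(theta) };
        outYaw = theta;  // rotation about the vertical axis (sign per engine convention)
    }

A smaller r both brings the panel closer and presents it square-on, which is where the extra legibility comes from.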


I think there are multiple things going on:

  • The "circles on the pre-distortion view" visualization exaggerates the issue a bit. The distortion will keep the center at 1:1 ratio and shrink the edges down substantially, so you're not losing as much of the screen pixels as you might think. Also, the projection magnifies angles at the outer region, so a 10 degree step near the border looks much bigger than in the center. Of course, the actual rendered pixels correspond to the pre-distortion view, so the areas are relevant for determining rendering cost.
  • The Vive's face padding seems to create a too-large eye to lens distance for some people due to their face shape, this probably shrinks the sweet spot for a sharp image and reduces FOV. It won't change the angular resolution for the center area, but you'll see fewer pixels overall.
  • Getting a sharp image on the Vive is a bit tricky, it needs to be positioned on the face just right to get the center sharp. I suspect this is especially true for larger eye-to-lens distances.
  • I'm farsighted, and I seem to have a bit more trouble focusing with the Vive than the DK2. Not sure what the focal distance is set to (I think it was 1.3m for the DK2), but text looks noticeably clearer if I close one eye. I suggest trying that to see if it makes a difference.
  • Having to hit 90fps required turning off all antialiasing and supersampling (beyond the 1.4x one for the pre-distortion view) for all except the fastest GPUs, this causes more chunky pixels and shimmering. If you were playing with antialiasing or supersampling on the DK2, this would make it look like a downgrade.
  • The "VR" presets seem to aggressively reduce model and texture complexity at even moderate distances to speed things up, leading to the "minecraft" effect complaint that distant ships look bad.

Frontier said they're working on improving antialiasing which should help a lot - as mentioned in the GDC talk, it would be ideal to render the central region with lots of supersampling and keep the outer region at 1:1 or less to concentrate the rendering time where it makes the most difference.

This reassures me (as a not-yet-Vive-owner) that much of the problem can be ameliorated by performance - be it optimisations, faster hardware, or a SteamVR implementation of ATW - without requiring us to wait for fantastical advances like Gen 2 HMDs or foveated rendering, or requiring FD to significantly redo the cockpit UI to use larger text sizes.
 
I think a very simple workaround would be proper support for changing the color scheme. Green or white text would be a lot more readable: on the pentile display, red and blue subpixels have half the resolution of green. The current hack that matrix-multiplies the UI colors works, but it has the nasty side effect of also changing things such as friend/foe colors on the radar that really shouldn't be modified along with the text color.

Frontier, would it be possible to get some very basic theming support that lets us change colors individually? Maybe an XML file where we can override RGB colors? If some UI elements have hardcoded colors as part of assets this may be infeasible for those, but anything that is available for modification could help. For example, one approach would be to keep the current matrix approach, but reconfigure the friend/foe colors so that they look more like the intended colors after applying the color shift matrix. (Hm. Would it be enough to simply premultiply colors that should remain static by the inverse of the UI color shift matrix? That's of course assuming it's invertible, and it may result in color components less than zero or greater than one. On second thought this part may not work so well.)
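
To make the premultiplication idea concrete, a minimal sketch (M stands for the 3x3 colour-shift matrix from the existing hack; all function names here are hypothetical):

    #include <algorithm>

    struct Rgb { float r, g, b; };

    // Colours that should stay fixed (e.g. friend/foe) would be authored as
    // M^-1 * c, so the later M * (M^-1 * c) = c leaves them unchanged.
    Rgb MulMatrix(const float m[3][3], Rgb c)
    {
        return { m[0][0]*c.r + m[0][1]*c.g + m[0][2]*c.b,
                 m[1][0]*c.r + m[1][1]*c.g + m[1][2]*c.b,
                 m[2][0]*c.r + m[2][1]*c.g + m[2][2]*c.b };
    }

    // Invert M by cofactors (assumes det != 0, i.e. the shift is invertible).
    bool Invert3x3(const float m[3][3], float out[3][3])
    {
        float det = m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
                  - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
                  + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
        if (det == 0.0f) return false;
        float inv = 1.0f / det;
        out[0][0] =  (m[1][1]*m[2][2] - m[1][2]*m[2][1]) * inv;
        out[0][1] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]) * inv;
        out[0][2] =  (m[0][1]*m[1][2] - m[0][2]*m[1][1]) * inv;
        out[1][0] = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]) * inv;
        out[1][1] =  (m[0][0]*m[2][2] - m[0][2]*m[2][0]) * inv;
        out[1][2] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]) * inv;
        out[2][0] =  (m[1][0]*m[2][1] - m[1][1]*m[2][0]) * inv;
        out[2][1] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]) * inv;
        out[2][2] =  (m[0][0]*m[1][1] - m[0][1]*m[1][0]) * inv;
        return true;
    }

    // The catch mentioned above: M^-1 * c can fall outside [0,1], at which
    // point clamping breaks the round trip and the colour shifts anyway.
    Rgb ClampRgb(Rgb c)
    {
        return { std::clamp(c.r, 0.0f, 1.0f),
                 std::clamp(c.g, 0.0f, 1.0f),
                 std::clamp(c.b, 0.0f, 1.0f) };
    }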
 
Here's a camera view of what the pixels look like. This is placed straight ahead after a position reset. It should roughly match my earlier test image.


I think it looks fairly consistent with 1:1 pixel mapping on a pentile display, but it's hard to tell for sure.

Edit: I think this may be very relevant to what's going on. I get the impression that the current rendering ignores the pentile display's subpixel layout, so that you get pixels disappearing and reappearing as they move across the grid. If that's true, it should be possible to get better visual results by ensuring that colors get mapped to the closest subpixel that can display them.

Edit 2: Not sure I'm entirely convinced by my interpretation. It's hard to tell without a direct side-by-side comparison, preferably with a test pattern. Anyway, I think it's plausible that improvements may be possible by more efficiently tuning the rendering / distortion to the display subpixels.

This reminds me of how Xerox improved the apparent resolution of their Star GUI: the (monochrome) display used a stippled background, and to prevent ugly jaggies appearing when a document icon was drawn on top of it - where the diagonal corner of the upturned page coincided with the stipples - icon placement was limited to even pixel numbers. Now that we're back in an era of similar (sub)pixel scarcity, for entirely different reasons, graphics devs are going to have to resort to similar trickery to get the best appearance out of the hardware.

On-topic: Sounds plausible, and it's the same trick used by FreeType, ClearType, and whatever the Mac name is for subpixel font rendering. I wonder how you'd do it, though - run a pixel shader over the whole frame and joggle pixels around? Might be best applied only to UI layers.

Where's a Ben Parry type to comment when you need one?
 
On-topic: Sounds plausible, and it's the same trick used by FreeType, ClearType, and whatever the Mac name is for subpixel font rendering. I wonder how you'd do it, though - run a pixel shader over the whole frame and joggle pixels around? Might be best applied only to UI layers.

It gets a bit tricky for a head-tracked view. A FreeType/ClearType approach that forces pixels into a grid away from their natural position may look really weird when combined with tiny head motions: it'll probably look as if the text is squirming around as you look at it, since its movement won't exactly match where it's supposed to be. I was thinking more along the lines of a filter that lights up neighboring pixels a bit if the pixel it's supposed to draw isn't available, but this may be hard to do without making things look blurry.
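
Something like this is what I have in mind (a toy CPU sketch; the checkerboard red/blue layout is a simplification of the real pentile geometry, and a shipping version would live in the distortion shader):

    #include <vector>

    struct Rgb { float r, g, b; };

    // Assume a simplified pentile layout: green exists at every pixel site,
    // red and blue only on alternating sites in a checkerboard. Energy aimed
    // at a missing subpixel is split between the horizontal neighbours that
    // do have one, instead of being dropped (lost only at the image border).
    void RedistributeToSubpixels(std::vector<Rgb>& img, int w, int h)
    {
        std::vector<Rgb> out(img.size(), Rgb{0, 0, 0});
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                const Rgb& p = img[y * w + x];
                out[y * w + x].g += p.g;              // green is always present
                bool hasRedBlue = ((x + y) & 1) == 0; // checkerboard assumption
                if (hasRedBlue) {
                    out[y * w + x].r += p.r;
                    out[y * w + x].b += p.b;
                } else {
                    // Neighbours at x-1 / x+1 have the opposite parity, so
                    // they do carry red/blue subpixels; split the value.
                    if (x > 0)     { out[y * w + x - 1].r += 0.5f * p.r;
                                     out[y * w + x - 1].b += 0.5f * p.b; }
                    if (x + 1 < w) { out[y * w + x + 1].r += 0.5f * p.r;
                                     out[y * w + x + 1].b += 0.5f * p.b; }
                }
            }
        }
        img.swap(out);
    }

Note this is exactly the blur trade-off mentioned above: a one-pixel red line now spans two sites.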

Edit: it seems to be doing this already, see my Edit 3 in comment #145 above.
 
Edit: I think this may be very relevant to what's going on. I get the impression that the current rendering ignores the pentile display's subpixel layout, so that you get pixels disappearing and reappearing as they move across the grid. If that's true, it should be possible to get better visual results by ensuring that colors get mapped to the closest subpixel that can display them.

FDev Support told me that they are looking into improving text rendering specifically for the Vive, and that their current approach of rendering bright text over a dark background doesn't work well with it. I think that's related to what you are talking about here.

It's possible that for the Rift CV1 the Oculus SDK and/or ED are doing additional optimizations to improve IQ and compensate for things like the lack of good anti-aliasing support in ED's engine; of course these optimizations would not be present for the Vive and the SteamVR implementation.
 
I've just created a Wiki page where I've tried to collect current hypotheses and evidence for or against, see also the related Reddit discussion. I'd welcome contributions, and hope it helps get to the bottom of this issue.

/u/CrossVR found an interesting result: the game renders the UI text to a rather large texture but then downsamples it with a bilinear texture lookup without additional mipmaps, so (if I'm interpreting it right) this can end up losing information. Fixing this should make it possible to get approximately the text quality of 2x supersampling without paying the huge performance cost for that.

More here: https://www.reddit.com/r/Vive/comments/4fp2yq/wiki_page_for_elite_dangerous_issue_research/d2avh3q

Conclusion: I don't have time to go more in-depth, but I conclude that the text is indeed more blurry than it should be due to a texture filtering issue.

It is likely caused by the fact that they forgot to generate mipmaps for the text, even though they set the texture filter to one that expects mipmaps. Without mipmaps the text won't scale down properly and you get blurry text as a result.

My interpretation:

Interesting. According to MSDN docs:
D3D11_FILTER_MIN_MAG_LINEAR_MIP_POINT Use linear interpolation for minification and magnification; use point sampling for mip-level sampling.

If I'm interpreting it right, this means that it always uses a single mipmap level instead of interpolating between two levels. In this case, since there is only one level, it'll always use that as the only available mipmap. Then, from this mipmap, it'll pick the four samples closest to the pixel center and take a weighted average.

This doesn't work well if the onscreen rendered size is significantly smaller than the original texture size since some parts of the source image may end up being ignored completely. I think that matches the effect visible in http://i.imgur.com/vKIxRUZ.png . Look at the horizontal part of the "L" in "LANDING GEAR" or the first "S" in "MASS LOCKED", you can see that some parts of the image seem to be disappearing instead of being averaged among close pixels.

Crude ASCII graphics:

++++++++++++++++++
++12++12++12++12++
++34++34++34++34++
++++++++++++++++++
++12++12++12++12++
++34++34++34++34++
++++++++++++++++++


Each "+" is a pixel in the source image, and the "1234" points are the ones being sampled and linear filtered for output pixels. Note that there are many "+" pixels that don't contribute at all to the output image, so that any information there gets lost.

The best way to fix this would be to either add mipmaps to ensure that it can properly downsample without losing information, or to dynamically adjust the size of the texture region used for drawing text to roughly match the pixel size after rendering.
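
For reference, one way the mipmap fix could look in D3D11 (a sketch, not FD's actual code): create the UI render target with auto-mipmap support, regenerate the chain after drawing the text each frame, and sample with a mip-aware filter.

    #include <d3d11.h>

    void CreateMippedUiTexture(ID3D11Device* dev, UINT w, UINT h,
                               ID3D11Texture2D** tex)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = w;  desc.Height = h;
        desc.MipLevels = 0;                        // 0 = allocate a full mip chain
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
        desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
        dev->CreateTexture2D(&desc, nullptr, tex);
    }

    // After rendering the UI text into mip 0 each frame:
    //   ctx->GenerateMips(uiSrv);
    // and sample with a filter that actually uses the mips, e.g.
    //   sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;  // trilinear
    // instead of D3D11_FILTER_MIN_MAG_LINEAR_MIP_POINT over a single level.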

Does that sound about right?

As far as I can tell this is all part of Elite: Dangerous's pipeline, so it ends when it hands off a rendered texture to the compositor. I'm still a bit suspicious about how the compositor maps the rendered texture in its distortion filter to get onscreen subpixels, but that would be a separate issue.
This reminds me - I saw an undocumented --nodistort flag for vrcompositor.exe. Can someone check whether it's possible to add that when launching it as part of SteamVR, to see if it makes a difference in sharpness? (I can't access my Vive just now.) I'm not sure if it works at all, or whether doing so would keep a 1:1 mapping in the image center or get the scale completely wrong.

Many thanks for doing the investigation!
 
I've just created a Wiki page where I've tried to collect current hypotheses and evidence for or against, see also the related Reddit discussion. I'd welcome contributions, and hope it helps get to the bottom of this issue.
Cheers kwx. Nice precis of the issues there. Repped.

I'm sure the happy little oompa loompa FDevs are busily working on a solution. This has to be a priority for them to fix. Just think of the word of mouth advertising potential. If we can all bring our friends over and show them a rotating orbital in hi-def on the Vive, copies of the game will sell like hot cakes.

My Vive arrives in May and I'm confident they'll find a resolution.
 
There may be a fairly easy way to improve the aliasing issue with minimal effort. TL;DR: change the HMD image quality slider to allow more than 100% of the recommended render target size.

The currently available supersampling option has two problems. The smallest supported increase is 1.5x, and that is infeasible for all except the fastest GPUs since it's a 2.25x increase in pixels rendered.

The more significant issue is that supersampling happens on the undistorted image before it gets handed off to the compositor, so it gets downsampled first to the 1512x1680 grid, and then resampled again in the distortion step when scaling to screen pixels. That adds extra aliasing, so the image doesn't improve as much as it should.

Assuming that SteamVR supports input textures larger than the recommended size, increasing that size should solve both problems. The game could render at 1.25x size for a 1.56x increase in pixel count, and submit a 1890x2100 texture per eye. Then the distortion step has some extra pixels to work with when doing its resampling (see the sketch below).
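
Against the OpenVR API, the change could be as small as this (a sketch; whether the compositor resamples oversized submissions gracefully is exactly the caveat in the edit below):

    #include <openvr.h>
    #include <cstdint>

    // Query the recommended per-eye size and scale it by the user's quality
    // setting before creating the eye render targets.
    void GetScaledEyeTargetSize(vr::IVRSystem* hmd, float qualityScale,
                                uint32_t& outW, uint32_t& outH)
    {
        uint32_t w = 0, h = 0;
        hmd->GetRecommendedRenderTargetSize(&w, &h);     // e.g. 1512x1680 on the Vive
        outW = static_cast<uint32_t>(w * qualityScale);  // 1.25 -> 1890
        outH = static_cast<uint32_t>(h * qualityScale);  // 1.25 -> 2100
        // 1.25x per axis = 1.56x the pixels; compare 1.5x per axis = 2.25x.
    }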

As an added bonus, I think this would also reduce the UI texture undersampling issue found by /u/CrossVR since it would use more rendered pixels than the default scaling with 1.0x supersampling.

Yes, this will only work for people with faster GPUs, but the added cost isn't as high as 1.5x supersampling and I think the result will look better. And if I understand things right, this may be a very simple change for the game, and it seems low risk since the code change has no effect at all as long as people keep the slider at its default setting.

What do you think? FDev, does this seem feasible, and is there any chance of sneaking this into the upcoming 2.1 beta?

Edit: this assumes that the compositor API supports this and can do reasonable quality downsampling in the distortion correction step. If not, it would need some added support from SteamVR, analogous to the current high quality overlay.
 
Actually, it's not as bad as I'd initially thought. I had some issues with other Vive games this morning, and went round a bit of a loop updating drivers, Windows, drivers again... I captured the screen feed, and I'm uploading it now. I'll also do a capture with just a single eye, cropped to roughly the main forward view. It's playable - I even had a bit of combat in my first clip. Settings are mostly medium; I didn't try to tune them in the slightest...
Video will be here once it's uploaded and processed. I think it's OK; a close-up and a comparison using Robot Repair are queued up too.
https://www.youtube.com/watch?v=yjAku4ZVdr0
https://www.youtube.com/watch?v=iNeudLZ9xAA Cropped view
https://www.youtube.com/watch?v=pZ28KNy3wQU Robot Repair

The frame rate in Elite appears to be about 25fps, which I don't really believe - it doesn't feel bad (and I reiterate, I have done zero optimisation here, and no overclocking - just water cooling on the graphics card).

Further thoughts on the rendering/aliasing: it's only the centre ~20 degrees, some hundreds of pixels, where precise rendering is going to be effective - the peripheral region will be blurred a bit by the lenses anyway. Presumably the game code or drivers can be tuned (given time) to give a render-effort gradient (and this will be relevant to all HMDs to some extent). The savings are big: a 1/3-resolution grid renders 1/9 the pixels, which works out to roughly an 8:1 saving overall once the full-resolution centre is accounted for (see the sketch below).
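
Rough numbers behind that claim (illustrative assumptions only: the full-resolution centre covers ~2% of the frame, roughly a 20 degree circle in a ~110 degree view, and the periphery is rendered on a 1/3-resolution grid):

    #include <cstdio>

    int main()
    {
        const double centreArea    = 0.02;                      // assumed full-res fraction
        const double peripheryCost = (1.0 - centreArea) / 9.0;  // 1/3 grid = 1/9 the pixels
        const double total = centreArea + peripheryCost;
        std::printf("relative cost %.3f, about %.0f:1 saving\n", total, 1.0 / total);
        return 0;  // prints: relative cost 0.129, about 8:1 saving
    }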
 
Just played ED Arena after today's update. Looks amazing. Smooth frame rate, no blurring or distortion. It looks so good!!!
 
Just played ED Arena after today's update. Looks amazing. Smooth frame rate, no blurring or distortion. It looks so good!!!

Interesting, I don't recall seeing anything related to rendering in the patch notes, and Frontier tend to be rather conservative with minor point patches. I'll take a look when I get a chance. You're sure that it definitely looks better than earlier? FWIW, I haven't had any issues with framerate or distortion, just aliasing/jagginess combined with a bit of added blur from pixel remapping.
 
Just played ED Arena after today's update. Looks amazing. Smooth frame rate, no blurring or distortion. It looks so good!!!

I've just played ED with the latest patch and I see no difference on the Vive. All the known issues are present; no noticeable improvement.
 
Interesting, I don't recall seeing anything related to rendering in the patch notes, and Frontier tend to be rather conservative with minor point patches. I'll take a look when I get a chance. You're sure that it definitely looks better than earlier? FWIW, I haven't had any issues with framerate or distortion, just aliasing/jagginess combined with a bit of added blur from pixel remapping.

I got my Vive last week. I didn't go straight into E: D - I had a wee shot last week and it wasn't good, then checked it out in more detail tonight. It looks just the same to me, and it's a shame it looks as bad on the Vive as it does. I play Dwarf Fortress; I don't demand AAA graphics, but graphics HAVE to be legible and fit for purpose. Text on the Vive isn't brilliant in any game or application, but I manage to use Virtual Desktop and write emails and Word documents. I've read around the various E: D and Vive issues reported on /r/elitedangerous and /r/vive and on the forums here, which suggests this game may never be brilliant, but it could at least be optimised better for the Vive so it's playable. At the moment it's subpar. I definitely wouldn't demo it to anyone.

It was amazing flying through asteroid fields, and enjoying head tracking in combat after all the months of playing without such finery, but as it stands, I can't see myself making more effort to play E: D than just to check in to see if things have changed.

I'll hopefully get a shot on a Rift sometime in the future to compare. Meanwhile, you'll find me enjoying roomscale titles elsewhere.
 