How to get Elite in VR looking CRYSTAL CLEAR :D

I downloaded the ED Profiler and matched your settings, but it seems like no matter what I do, when I set my Target Multiplier to 2 I get the same jerkiness when turning my head.

I set super sampling in the ED Profiler to 0.65, but I'm not sure if that actually works. I thought I read a post saying it didn't work right, but I'm not seeing any alternate place to configure super sampling.

I am running an Nvidia Founders Edition 1080 with a Devil's Canyon i7 at 4.0 GHz.

Any suggestions?

The ED Profiler does not correctly set the in-game super sampling. This must be set manually in-game within the graphics options. Also, make sure you have Desktop Game Theater disabled.
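For reference, the SteamVR render target multiplier mentioned in this thread lives in Steam's steamvr.vrsettings file; the path and key name below are from my own install, so treat them as assumptions and check your file before changing anything. A minimal sketch, if you'd rather script it than edit the JSON by hand (close SteamVR first):

```python
import json
from pathlib import Path

# Assumed location on a default Windows Steam install -- adjust if yours differs.
VRSETTINGS = Path(r"C:\Program Files (x86)\Steam\config\steamvr.vrsettings")

def set_render_target_multiplier(value: float) -> None:
    """Write SteamVR's render target multiplier into steamvr.vrsettings."""
    config = json.loads(VRSETTINGS.read_text())
    # "renderTargetMultiplier" appears to be the key the tools in this thread
    # edit -- verify it against your own file before overwriting anything.
    config.setdefault("steamvr", {})["renderTargetMultiplier"] = value
    VRSETTINGS.write_text(json.dumps(config, indent=3))

if __name__ == "__main__":
    set_render_target_multiplier(2.0)
```

The in-game supersampling still has to be changed in the graphics options; this only covers the SteamVR side.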
 
One thing missing from the guide is a note on being CPU bound. I'm sitting with less than minimum spec - a 2700K OC'd to ~3.8 GHz and a 970 Strix OC'd heavily. I was getting nasty headaches after about an hour of play when I tried.

I can't run much more than about 1.1 - 1.2 via the RenderTargetMultiplier. But I'll do some proper testing using the method outlined above and see if that gives me extra headroom.

Whether or not you are CPU bound with regard to the minimum spec for VR (a 3770K) is up for debate. Believe it or not, your 2700K should be able to overclock better than the 3770K.

One thing I do notice is that when I try to play at 0.65/2.0 with my 6700K at its stock 4.0 GHz, jitter becomes apparent in places like stations. However, when I OC it to 4.7 GHz, the jitter vanishes. Reprojection at places like stations still kicks in, but because I'm able to maintain at least 45 FPS, it can work as intended, keeping everything smooth and jitter-free.

Perhaps a jitter-free environment is achievable at a lower clock, but I have yet to test a setting between 4.0 and 4.7 GHz. I figure that because I'm stable at 4.7, there isn't a reason to go any lower.

Go ahead and test lowering the in-game SS setting along with the highest render target multiplier you can achieve. You may have to spend some time with variations of both settings to find the optimal performance/quality ratio. Good luck!
 
I've been running this setup since it was found on the DCS forums. If you can get over 4.4 GHz, everything is buttery smooth.

However, I'm also overclocking my 1080 to around 2100 MHz on the boost clock, and that has definitely helped as well.
 
Thanks for the "Desktop Game Theater" tip. That removed made things really smooth. I run the same settings using a i3770 - 4.5ghz and 980ti overclocked to the limit.
 
@Z4.MAFIA, do you have reprojection switched on?

I'm on the 6700K at 4.6 GHz (can't go any further because Windows crashes), but I still need reprojection for space stations and RES sites. I have the Asus ROG Strix GTX 1070, overclocked using this guide: https://rog.asus.com/forum/showthread.php?86398-Strix-GTX-1070-Overclocking-Guide

I have just enabled XMP to get the RAM running at 3000 MHz (it's Corsair Vengeance 3000 MHz), and that seems to have added some grunt.

Thanks for all your advice on this thread.
 
No problem bro!

I do experience reprojection at stations, planets and RES sites, but because I'm able to maintain at least 45 FPS everywhere I go, reprojection works as intended, keeping everything smooth. I spend most of my time in regular-space PvP combat at true 90 FPS glory, so having to deal with reprojection in places I rarely go isn't that bad. Even if I did spend more time in those places, the experience would still be smooth.

You shouldn't have to hit 4.7 GHz with your CPU, or even 3200 MHz with your RAM. Imski reported that things are still smooth at a 4.4 GHz CPU clock, so your 4.6 GHz should be plenty.
 
Blinking Android tablet :)
I'm getting good results on my Vive with the following.
i7 X980 at 4 GHz. The XMP memory is limited to 1600 MHz but is at least triple channel; my 1080 is running at 2.16 GHz with memory at 5.1 GHz (a nice and pleasant 65°C).
For config: 2.0 render target from within Steam, 1.0 supersampling in ED, and SMAA anti-aliasing. The latter really makes the biggest improvement.
Set the HUD to green with blue shields; that orange is just nasty. It's a pity the flight HUD doesn't have an independent set of colour options from what is used for HUD rendering in dock, as the mission pictures render awfully. It would be a nice quick fix for what (at least for me) is the worst part of the graphics quality.
Also important (as I let my son use my Vive): setting the eye spacing correctly makes a big difference.
 
Thanks indeed for this thread, Z4.

I have recently bought an Oculus Rift CV1. I'm playing with an Nvidia 1080, an i7 and 32 GB of RAM (the whole PC pretty much max spec at the time of posting this, Aug '16).

Following your advice I've been using the debug tool extensively.

I've tried a wide combination of settings between in-game SS and the debug tool and overall find that I appreciate 0.65/2.0 the most, which allows the game to run at 90.0 fps and, to my eyes, seems clearer than 1.0/1.25 or any other combo.

Cheers!
 
Thanks for the tips. I followed your advice, and then did some testing of my own. I have a 1080 and a CV1.

I've found that if you really want to improve image quality, you need to use a combination of supersampling and anti-aliasing (FXAA or SMAA). Either one alone will leave extra jaggies. The AA costs about a 5% performance hit, but if you have the headroom, it may not decrease FPS in some situations.

The in-game supersampling and the debug tool have the exact same effect on FPS. So the main benefit of the debug tool is for values between 1.0x and 1.5x supersampling (you want to use something over 1.0x to reduce jaggies). I suspect they use the same algorithm.

If you use a large value in the debug tool and a low value in-game, there is a performance hit in FPS. You may not notice this in space, as it may be locked at 90 FPS, but you can see a difference in stations, on planets, and in combat. For example, if you use 2.0x and 0.65x (which equals 1.3x), your FPS in those situations may be about 30% lower than just using 1.3x and 1.0x (which also equals 1.3x - no duh). And as far as I could tell, the image quality looked identical. Btw, it seems that the larger the debug tool multiplier used, the more pronounced this effect is, so using 1.75x and 0.75x may only be a 10% reduction in FPS.
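If it helps to see the numbers, here's a minimal sketch of how the two multipliers stack; the FPS differences come from the testing described above, not from this arithmetic:

```python
# Effective supersampling is just the product of the debug tool multiplier and
# the in-game SS value, so different pairs can land on (nearly) the same total.
combos = [(2.0, 0.65), (1.75, 0.75), (1.3, 1.0)]  # (debug tool, in-game SS)

for debug_mult, ingame_ss in combos:
    effective = debug_mult * ingame_ss
    print(f"{debug_mult:.2f} x {ingame_ss:.2f} = {effective:.4g} effective SS")
```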

So if you are using in-game downsampling, you may want to try this - ymmv. [yesnod]


Note: all testing was performed in an underground small hangar with default Ultra graphics settings and the ship's lights on.
 
Great information for all the CV1 users out there! Thanks for posting.
 
Thanks for all this info.
Though my graphics card is an Asus STRIX GTX970-DC2OC-4GD5, not a 1080, and my processor is an Intel Skylake Core i5-6600K at 3.5 GHz (Turbo Boost 3.9 GHz), not your i7, so I wonder if I will have to lower the settings compared with yours...
 
Well, IMHO, if you are using a CV1 (I know nothing about the Vive experience), you will need to reduce your settings. Even with a 1080, I have a hard time maintaining 1.3x supersampling with High graphics settings. Overclocking may help, but I don't understand how people can use 1.8x or 2.0x supersampling. It looks great in supercruise, but I'd think they would only be getting 30-40 FPS in a station, on planets in an SRV, and in crowded combat in a RES. Maybe it's not so bad pulling wanted ships out of supercruise and fighting in open space.

I'd recommend starting with 1.1x supersampling, FXAA anti-aliasing, and Ultra settings, and see how that works. Then lower your graphics settings if you need to. I can't tell the difference between Ultra and High, and have been using High for a year (with 2x 970 SLI and 1.5x SS before the Rift). One of the major performance hits is shadows, so if you can turn that down, it will help a lot, but I like shadows, so I don't want to turn them down more than one notch.

I think FD/Oculus, whoever, has done a wonderful job with their asynchronous timewarp, which fills in frames when the FPS drops. I can have FPS drop into the 50-60s in crowded combat in a RES, and I hardly notice it. My problem is I still can't take a low FPS on planets, and I'm not willing to change my graphics setting for different situations. So I'll live with lower supersampling and graphics settings until the hardware can fully support higher settings.
 
I posted this message in another post, but it would be much more appropriate here:

I'm having some issues with my setup: 2600K @ 4.6 GHz, 980 Ti (overclocked), HTC Vive, newest Windows 10 update + Nvidia drivers (8-24-2016).

With 0.65 scaling in game and VR High settings,
and ChaperoneSwitcher at 1.5 scaling with reprojection disabled:

I get 70-80 FPS, which causes horrible judder. My CPU usage is around 50%, and GPU usage is a measly 50% as well - it's not even boosting to my full OC!
What is bottlenecking so badly here? I doubt a 4.6 GHz 2600K is bottlenecking this badly. On my 1440p monitor at maxed settings I can normally hold my 90 Hz refresh rate.

--------
With 0.65 scaling in game and VR High settings,
and ChaperoneSwitcher at 2.0 scaling with reprojection disabled:

I get 85-90 FPS while fighting at an asteroid belt - LESS judder - but at hubs I drop to 45 FPS when docking.
GPU usage increases to 75%.

Everything I describe seems like a CPU bottleneck (low GPU usage, can't even come close to maxing the GPU). However, I fail to see how a heavily overclocked 2600K could be holding this back.

Can anyone help?
 
Have you tried training mode and compared the results? I've been seeing a lot of the same issue. Over the weekend, after testing all different kinds of settings and getting results like yours, I tried training mode and was able to hold a solid 90 FPS. I didn't have enough time to try all of the training scenarios, so it may just have been that the scenario I tried didn't have enough assets. This got me thinking that maybe the net code is adding a couple of milliseconds of latency, which puts us just above the required 11 ms frame time.

It is very frustrating when you look at your monitoring software, see 50% utilization for both the GPU and CPU, and still watch the frame rate drop. I know Frontier would never do this, but I wish they would just separate the VR version from the flat-screen version. That way they could implement features that benefit VR without worrying about how they might impact the flat-screen version. Elite is hands down the best VR experience out there, and I wish it could be developed as a native VR experience rather than something trying to straddle both VR and flat screen.

The difference between a game built for VR and one that has VR bolted on is considerable.
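One more thing that might be worth checking, just an idea: on a 4-core/8-thread chip like the 2600K, an aggregate "50% CPU" reading can hide a single thread pinned at 100%. A quick sketch, assuming psutil is installed (pip install psutil):

```python
import psutil  # third-party: pip install psutil

# Sample per-core usage for ~10 seconds while sitting somewhere busy, e.g. a station.
for _ in range(10):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print(" ".join(f"{usage:5.1f}" for usage in per_core))
# If one core hovers near 100% while the rest idle, the game's main thread is
# the bottleneck even though the overall utilization reads about 50%.
```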
 
Making sure reprojection is turned on will eliminate the judder.
 