Graphics settings - Beyond Ultra

So I ended up landing on the latest NVIDIA drivers, v430.39 (I had v417.71 before), and the difference in maximum and average FPS is massive. Either it is a genuine driver improvement (I am always skeptical, although I do enjoy the placebo effect of clean-uninstalling via DDU and reinstalling new drivers) or I had something messed up somewhere and this update cleaned up the problem.

I noticed that after the NVIDIA driver update my custom 2560x1440 @ 75Hz video setting was removed (as expected), so ED ended up settling at the usual 2560x1440 @ 60Hz (VSync still on). Maybe it was that, maybe not, but either way it looked like I had a graphics card twice as powerful at the same XML settings.


As far as NVIDIA 3D settings go, here is what I have:

In Program Settings (elitedangerous.exe)
📗 Anisotropic Filtering: 16x
📗 Texture filtering - Quality: High Quality

📗 Threaded Optimisation: ON
📗 Maximum pre-rendered frames: 1 or "Use the 3D Application Settings" (*)
(*) Quoting Morbad: "Maximum pre-rendered frames = 1 does not help performance and will actually increase CPU utilization slightly. However, it does limit the render queue to one frame, which minimizes overall input latency, and so I almost always force this to 1."

In Global Settings
📗 DSR - Factors: 4.00x (Native Resolution)
📗 DSR - Smoothness: 0% (alternative combo: 33%)
📗 Power management mode: Prefer maximum performance



I get the best performance at a final resolution of 2560x1440, naturally. That can be either 4.00x DSR with 0.50x in-game SuperSampling, or 1.00x DSR with 1.00x in-game SS (obviously). The latter is crisper, more defined, and more readable, but I can't say I like it better. The former makes things a bit blurrier, but it looks less like videogame polygons and more like watching video footage. I am sure some people would call me crazy for this, but it's especially noticeable on things like the huge station pistons/pylons and rounded structures.
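A quick sketch of the arithmetic behind those two equivalent combos (function name is mine; it assumes NVIDIA's DSR factors are pixel-count multipliers, so 4.00x means 2x per axis, while ED's in-game SS multiplies each axis directly):

```python
import math

def render_resolution(native_w, native_h, dsr_factor=1.0, ss=1.0):
    """Final internal render resolution for a given DSR factor and in-game SS.

    DSR factors multiply total pixel count (4.00x = 2x per axis); in-game
    SuperSampling multiplies each axis directly.
    """
    axis = math.sqrt(dsr_factor) * ss
    return round(native_w * axis), round(native_h * axis)

# The two combos from the post land on the same internal resolution:
print(render_resolution(2560, 1440, dsr_factor=4.0, ss=0.5))  # (2560, 1440)
print(render_resolution(2560, 1440, dsr_factor=1.0, ss=1.0))  # (2560, 1440)
```

The difference in how the two look comes from the intermediate downscale (DSR renders at 5120x2880 before filtering back down), not from the final pixel count.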


For quality settings I have everything maxed out with Morbad's settings for shadows as found here:

I am scared to find out whether it was the 75Hz setting making things weird (forcing a higher average FPS than my card can sustain), and anyway I don't want to risk it ;)
 
I was curious to know: is there any way to tweak the values of bloom, ambient occlusion, and depth of field from within the XML files, just like Morbad did with the shadows?

Thanks,
 
So I ended up landing on the latest NVIDIA drivers, v430.39 (I had v417.71 before), and the difference in maximum and average FPS is massive. Either it is a genuine driver improvement (I am always skeptical, although I do enjoy the placebo effect of clean-uninstalling via DDU and reinstalling new drivers) or I had something messed up somewhere and this update cleaned up the problem.

If you have an RTX/Turing part, drivers are still being optimized for them and that could explain modest performance changes. However, bad data in the NVIDIA shader cache can also cause issues. You can empty it by going to ProgramData\NVIDIA\NV_Cache (or something close) and deleting its contents.

📙 Threaded Optimisation: Auto ⬅ ON is better? Not sure...
📙 Maximum pre-rendered frames: 1 ⬅ I think I found this tip on an old post by Morbad from JAN2018...

Threaded Optimisation should default to on, but I force it on anyway. Setting it off makes the video driver use a single thread, which will almost always hurt performance in ED, unless you have only a few cores and they are extremely fast (something like a modern non-hyperthreaded quad core at around 5GHz might be a good scenario for forcing it OFF).

Maximum pre-rendered frames = 1 does not help performance and will actually increase CPU utilization slightly. However, it does limit the render queue to one frame, which minimizes overall input latency, and so I almost always force this to 1.

📙 Texture filtering - Anisotropic sample optimisation: Off ⬅ ON might be useless? Not sure...
📙 Triple buffering: Off ⬅ ON might be better for VSync ON settings? Not sure...

Anisotropic Optimization cannot be enabled with texture filtering quality set to High Quality and doesn't do much anyway.

That triple buffering setting is only for OGL and is not applicable to Elite: Dangerous, which is a D3D11 app.

If you want to use vsync, I highly recommend forcing the sync type to "Fast" in the NVIDIA control panel.

I am scared to find out whether it was the 75Hz setting making things weird (forcing a higher average FPS than my card can take) and anyway I don't wanna risk it ;)

It shouldn't hurt anything and if it does cause issues, it's easy enough to delete the custom resolution.

I was curious to know: is there any way to tweak values of bloom, ambient occlusion and depth of field, from within the XML files, just like it was done with the shadows by Morbad?

Bloom and ambient occlusion have a modest number of customization options in the XML (and I believe they can all work from the override), but DoF is mostly just a toggle.
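For anyone who hasn't used the override approach before, the general shape is the same as the shadow tweaks: copy the relevant section out of your own GraphicsConfiguration.xml into GraphicsConfigurationOverride.xml and change only the values you want. I haven't verified the exact tag names for bloom, so the element and parameter names below are illustrative placeholders, not guaranteed tags; check them against your own file.

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<GraphicsConfig>
  <!-- Placeholder example: mirror the structure of the matching section
       in GraphicsConfiguration.xml for your game version. Element and
       value names here are illustrative only. -->
  <Bloom>
    <Ultra>
      <GlareScale>0.05</GlareScale>
    </Ultra>
  </Bloom>
</GraphicsConfig>
```

Keeping the changes in the override file means a game update that replaces GraphicsConfiguration.xml won't wipe them out.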
 
I have a 1080ti and I have everything on Ultra and 4k. Super sampling is at 1.
If I increase super sampling any higher than that I get frame rate issues (in stations).
 
Thanks Morbad! That was a super helpful post. I did update my post with the settings and I am giving the new changes a try right now!
 
For everyone wondering, the NVIDIA cache is stored here:

Code:
C:\ProgramData\NVIDIA Corporation\NV_Cache

In theory you should not clean it, because it builds itself up over time, saves you time, and may benefit smoothness/FPS/load times/reduced stuttering, etc.
But my guess is that if you are tweaking stuff around, messing with settings, shaders, etc., a clean will not hurt!
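If you'd rather script the clean-out than do it by hand, something like the sketch below works (the helper name and logic are mine; the default path is the NV_Cache location above; close the game first and adjust the path for your system):

```python
import os
import shutil

# Default path is the NV_Cache location discussed above; adjust as needed.
NV_CACHE = r"C:\ProgramData\NVIDIA Corporation\NV_Cache"

def clear_cache(path=NV_CACHE):
    """Delete everything inside `path` (files and subfolders) without
    removing the folder itself; returns how many entries were removed."""
    if not os.path.isdir(path):
        return 0
    removed = 0
    for entry in os.listdir(path):
        full = os.path.join(path, entry)
        if os.path.isdir(full):
            shutil.rmtree(full, ignore_errors=True)
        else:
            os.remove(full)
        removed += 1
    return removed
```

The driver rebuilds the cache as you play, which is why the first half hour or so after a clean can feel stuttery before things smooth out.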
 
I have a 1080ti and I have everything on Ultra and 4k. Super sampling is at 1.
If I increase super sampling any higher than that I get frame rate issues (in stations).

4k is about as high a resolution as a 1080 ti will handle well at ultra+ settings.

Any supersampling setting (above 1.00) will increase the internal render resolutions accordingly and be that much more demanding. If you have a 4k display, there really isn't much you can do that will improve upon the native resolution without destroying frame rate.
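A back-of-the-envelope illustration of why even modest SS bumps are brutal at 4k (function name is mine, for illustration only):

```python
# In-game SuperSampling scales each axis, so the pixel count (and roughly
# the fill-rate cost) grows with the square of the SS factor.
def pixel_load(ss):
    return ss * ss  # pixels rendered, relative to native resolution

for ss in (1.0, 1.25, 1.5, 2.0):
    print(f"SS {ss:.2f}x -> {pixel_load(ss):.2f}x the pixels of native 4k")
```

At native 4k that 1.50x setting already means shading 2.25x as many pixels as 3840x2160, which is why station frame rates fall off a cliff.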

Thanks Morbad! That was a super helpful post. I did update my post with the settings and I am giving the new changes a try right now!

Another note regarding ambient occlusion:

The SSAO settings in the XML are all deprecated. I used to have these heavily tweaked, but they always gave me grief, and as of 2.2 or so SSAO was disabled entirely in favor of HBAO. So, to the best of my knowledge, all ambient occlusion settings are now part of the HBAO settings.
 
Thanks again! I did try VSync Fast but I don't think it's right for my setup. Things seemed smooth but somehow "glued" and "chunky"; it's hard to explain. Bear in mind that on my system I get around 51-54 FPS in stations with heavily tweaked settings (although I must say it has improved a lot).

I like to use VSync because I am usually barely over 60 FPS, which causes a lot of tearing since my monitor is 60Hz. I'd rather keep the average up and cap at 60 FPS than go a bit over in some areas and have lots of tearing.

I tried VSync Adaptive and it seems better in my case: super smooth, no slowdowns with everything at the highest settings, your shadow tweaks, SS 1.00x, and MSAA. Planetary work is at 100%; maybe 80% would be better. I did clean my shaders since I removed ReShade, as I wanted a clean slate, and it seems to improve things. You need around 30 minutes of gameplay before you enjoy smoother frame rates, at least in my experience.

Also, I set pre-rendered frames to 4 (as a fun test)... I can't see any input lag, but I am sure this is also responsible for the smoother frame rates and gameplay I am experiencing, from asteroid belts to stations to deep space. There is much less variance in average FPS from one scenario to another.
 
I just recorded a short video showing some flickering horizontal lines that I've been seeing for quite a while; maybe this is normal ED behavior?

 
Thanks again! I did try VSync Fast but I don't think it's right for my setup. Things seemed smooth but somehow "glued" and "chunky"; it's hard to explain. Bear in mind that on my system I get around 51-54 FPS in stations with heavily tweaked settings (although I must say it has improved a lot).

Vsync on fast really only does work well with higher frame rates.

Also, I set pre-rendered frames to 4 (as a fun test)... I can't see any input lag, but I am sure this is also responsible for the smoother frame rates and gameplay I am experiencing, from asteroid belts to stations to deep space. There is much less variance in average FPS from one scenario to another.

If it's responsible for improving smoothness, it's also introducing input latency. It may well be below your threshold of perception, but the latency from four frames render ahead is quite significant in absolute terms...about 67ms at ~60fps. This would essentially double the latency of most reasonably well tuned setups vs. a single frame queue. Having 100ms+ total input to screen lag is probably fine for most stuff, but would be too much for combat, for me.
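The ~67ms figure comes straight from frame-time arithmetic (helper name is mine):

```python
# Each frame in the render-ahead queue adds roughly one frame-time of
# input latency at a given frame rate.
def queue_latency_ms(frames_queued, fps):
    return frames_queued * 1000.0 / fps

print(f"{queue_latency_ms(4, 60):.0f} ms")  # ~67 ms with 4 queued frames at 60fps
print(f"{queue_latency_ms(1, 60):.0f} ms")  # ~17 ms with a single-frame queue
```

The queue also smooths frame pacing precisely because it buffers frames, which is why the 4-frame setting feels steadier despite the extra lag.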

I just recorded a short video to show some flickering horizontal lines that I've been having for quite a while, maybe this is normal ED behavior?


Could be insufficient DSR smoothing or an artifact of the in-game AA setting. I'll have to see if I can reproduce it.
 
Vsync on fast really only does work well with higher frame rates.
That's also what I experienced in my tests.

If it's responsible for improving smoothness, it's also introducing input latency. It may well be below your threshold of perception, but the latency from four frames render ahead is quite significant in absolute terms...about 67ms at ~60fps. This would essentially double the latency of most reasonably well tuned setups vs. a single frame queue. Having 100ms+ total input to screen lag is probably fine for most stuff, but would be too much for combat, for me.
Agreed. Keep in mind that my rig is a bit of a compromise, being based on a computer I had lying around, and is by no means a sleek gaming rig.

I am playing on an:
Intel(R) Xeon(R) W3540 @ 2.93GHz, 3059 Mhz, 4 Core(s), 8 Logical Processor(s)
12GB RAM
Samsung EVO 850 250Gb SSD
ZOTAC NVIDIA 1060 3GB


In-game settings as follows:
Everything at the highest setting, with Morbad's (your) shadow settings, which are basically v10 of the GraphicsSettings/Override (I am sure both are the latest you posted). FXAA, SuperSampling 1.00x, no NVIDIA DSR/smoothing. 16x Anisotropic, High Quality texture filtering, Threaded Optimisation ON. I also keep Blur/DoF/AO on and/or at the highest settings when a choice is available.

I tried stepping up to 1.50x in-game SS, fiddling with AA, fiddling with VSync, but nothing changed; the flickering is still there. I think only DSR could really fix this, by going 4.00x with Smoothing 0% and in-game SS at 0.50x or 0.65x as you had suggested. I gave up on DSR lately because alt-tabbing was painful :)

I will try DSR at the settings I wrote above and re-do the tests..

Could be insufficient DSR smoothing or an artifact of the in-game AA setting. I'll have to see if I can reproduce it.

Many thanks, this is really great help!



UPDATE: Here is test 2, this time at DSR 4.00x, Smoothing 0%, and in-game SS 0.50x... same issues:

 
Update on the flickering problems..
I was able to fix it using NVIDIA Inspector and the settings below. These were not invented or discovered by me, but rather by Keem85, originally for Project CARS.

They were meant for VR, but in my opinion they work great in 2D as well. The game looks sharper without it being a "sharpening effect": text is more readable, graphics are crisper, and, funny thing, antialiasing works better, as jagged lines are smoother. It's funny how the sharp things in ED (jaggies) become smoother while everything else becomes crisper.

Anyway, feel free to try it; you might like it. These settings became a sort of go-to test for many other video games, first in iRacing and then even in ED. I am sure someone in here has already given them a try.

The trick is the negative LOD bias: the more negative you go, the sharper things become. Some people like anything from -1.5 to -0.85; I am liking -2 at the moment. Remember to input all four decimals, even if they are "0", just for safety.

I like to say that it makes the game look the way it does in the developer's advertised cutscenes and gameplay footage, for when you question why your game doesn't look as good as theirs :)




Source: https://www.reddit.com/r/simracing/comments/73vq7n/i_found_a_way_to_sharpen_project_cars_2_in_vr/
 
Where might I find the current config everyone is referring to? I'm curious to try this out, as I also play on a 1080; however, I only have a 1080p display. Would these settings improve anything for me?
 