100% VRAM usage regardless of graphical settings?

Allocated and used VRAM are completely different things, and confusing them is a common reason people think they need more VRAM.

This. You want as much allocated as possible; it's the amount actually used that matters, because running out of that is where problems start.
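A toy Python sketch of the distinction (numbers and names are made up for illustration): a renderer can reserve a big pool up front, which is what monitoring tools report as "allocated", while only the live resources inside it count as "used". Real pressure only appears when live use would exceed the pool.

```python
class VramPool:
    """Toy model: 'allocated' is the reserved pool size (what monitoring
    tools report); 'used' is the sum of live resources inside it."""

    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb   # reserved up front ("allocated")
        self.resources = {}              # name -> size in MB ("used")

    @property
    def used_mb(self):
        return sum(self.resources.values())

    def load(self, name, size_mb):
        # Only when live use would exceed the pool is there real pressure.
        if self.used_mb + size_mb > self.capacity_mb:
            raise MemoryError("genuine VRAM pressure: eviction needed")
        self.resources[name] = size_mb

pool = VramPool(capacity_mb=8192)    # an 8 GB card, fully "allocated"
pool.load("ship_textures", 1500)
pool.load("planet_tiles", 2500)
# A monitoring tool would show ~8 GB allocated; only 4000 MB is in use.
```

In this model the reported allocation never changes, which is why a pegged VRAM graph on its own tells you very little.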

In Horizons my VRAM allocation would steadily grow over long sessions, starting around 6 GB and sometimes ending up over 10 GB (11 GB 1080 Ti). In Odyssey it's usually stable around 6-7 GB; the most I've noticed (in Task Manager) is 7.8, I think.
 
Using memory is a good thing, not a bad thing. It's like filling your car's tank with petrol. You wouldn't look at a full tank and say: "oh my god, my fuel tank is full of petrol, I'm screwed!"
 
Using memory is a good thing, not a bad thing. It's like filling your car's tank with petrol. You wouldn't look at a full tank and say: "oh my god, my fuel tank is full of petrol, I'm screwed!"
But it's not using it, just allocating it. You may get the illusion that it's being used, though.
 
Ok, update: I tried to balance the workload between CPU and GPU by making it pre-render 5 frames before sending them to the GPU. Slight improvement: CPU at 33% (man, you're ripped!) and GPU still at 100%. In an on-foot CZ, around 40-ish fps. It looks like PS2, so...

Conclusion: I'm going back to Horizons for the summer; it's simply not OK to have to decipher messed-up pixels in order to get acceptable performance.
Conclusion 2: Horizons is amazingly optimized. It really is. On my mid-range rig with SS 1.5 it runs like Usain Bolt. Somewhere there's a hiccup in the pipeline. I refuse to believe that adding animated avatars tanks the system this much. Rabble out.
 
I disabled hyperthreading on my i7-2600 so it ran only 4 threads, and it dropped my VRAM usage by about 20% on my GTX 1660. At 1440p I get about 40 fps in stations and 50 to 60 walking on planets, but this in turn spiked my CPU usage to 90%.
 
I disabled hyperthreading on my i7-2600 so it ran only 4 threads, and it dropped my VRAM usage by about 20% on my GTX 1660. At 1440p I get about 40 fps in stations and 50 to 60 walking on planets, but this in turn spiked my CPU usage to 90%.
This is interesting. I've been told that killing HT (or SMT on AMD) is always (insert reverb here) wrong. Maybe the Cobra engine benefits from a higher single-core boost. It doesn't hurt to test it, but on the other hand, given the great CPU utilization in Horizons, this should NOT be the answer.
 
This is interesting. I've been told that killing HT (or SMT on AMD) is always (insert reverb here) wrong. Maybe the Cobra engine benefits from a higher single-core boost. It doesn't hurt to test it, but on the other hand, given the great CPU utilization in Horizons, this should NOT be the answer.

I wonder if the issue you're describing is related to this article:
FYI, not the entire thread - just the post this URL links to. Thanks.
 
SMT is not just on AMD; it's the generic term for what Intel branded HT (and they certainly weren't the first to do it: effectively all GPUs, and CPUs such as XMOS and SPARC, also do it). SMT is effectively a workaround for memory latency; it allows the processor to compute in a different context when a cache miss occurs. The downside is that it then has less cache available per thread, so more cache misses will occur. So, for instance, if one particular thread is working intensely on a dataset that only just fits in L2 cache, allowing the core it runs on to hold another thread will cause a performance loss.
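The latency-hiding argument can be sketched as a toy back-of-envelope model (made-up cycle counts, not a benchmark): an in-order core alternates compute and memory-stall cycles, and a second hardware thread can execute during the first thread's stalls, up to the point where there is no stall time left to hide.

```python
def core_utilization(compute_cycles, stall_cycles, hw_threads):
    """Toy model: a core runs threads that each want `compute_cycles` of
    execution per window of `compute_cycles + stall_cycles`. With SMT,
    another thread fills the stalls, until the core is saturated."""
    window = compute_cycles + stall_cycles
    demanded = hw_threads * compute_cycles
    return min(demanded, window) / window

# A memory-bound thread: 30 cycles of compute per 70 cycles stalled.
print(core_utilization(30, 70, 1))  # 0.3 -> 70% of the core sits idle
print(core_utilization(30, 70, 2))  # 0.6 -> the sibling hides stalls
# A compute-bound thread gains nothing (and risks cache contention):
print(core_utilization(90, 10, 2))  # 1.0 -> already saturated
```

The model deliberately ignores the cache-sharing downside described above; in reality the second thread can also *increase* the stall cycles it is meant to hide.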
 
@lone Tech Yes, we know. Back to ironing out the real issue.
Edit: Sorry for being blunt, but I compose my posts to suit everyone, so I forego the usual tech mumbo-jumbo. I've been working with rendering for over 20 years now. My apologies.
 
When I load the game, you can see for a couple of seconds that it loads the high-res textures (around me), but after the game finishes loading it goes back to the blurry ones.

Chances are it prioritized other assets, actually ran out of VRAM, and was forced to evict things that were still useful. This is an actual example of VRAM contention.
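That eviction behaviour can be sketched with a toy LRU texture cache (names and sizes are hypothetical): when a new asset doesn't fit in the budget, the least recently *touched* texture gets evicted, even if it is still on screen, whenever the engine's demand estimate hasn't touched it lately.

```python
from collections import OrderedDict

class TextureCache:
    """Toy LRU texture cache with a fixed VRAM budget in MB. Eviction
    order is 'least recently touched', not 'least useful on screen'."""

    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.textures = OrderedDict()  # name -> size_mb, oldest first

    def touch(self, name):
        # The engine marking a texture as recently needed.
        if name in self.textures:
            self.textures.move_to_end(name)

    def load(self, name, size_mb):
        # Evict oldest entries until the new texture fits the budget.
        while sum(self.textures.values()) + size_mb > self.budget_mb:
            evicted, _ = self.textures.popitem(last=False)
            print(f"evicted {evicted}")  # may still have been visible!
        self.textures[name] = size_mb

cache = TextureCache(budget_mb=4000)
cache.load("station_walls_high", 1500)
cache.load("ship_hull_high", 1500)
cache.load("planet_tiles", 1500)   # over budget: evicts station_walls_high
# The walls are still visible, so they reload... as the blurry low-res set.
```

If the demand estimation never calls `touch()` on what the player is actually looking at, a still-visible high-res texture is exactly what gets thrown out.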

This is interesting. I've been told that killing HT (or SMT on AMD) is always (insert reverb here) wrong. Maybe the Cobra engine benefits from a higher single-core boost. It doesn't hurt to test it, but on the other hand, given the great CPU utilization in Horizons, this should NOT be the answer.

SMT so rarely harms things that one should have a damn good and specific reason to disable it. Disabling HT on a 2600K generally won't allow higher boosting either.

ED is not especially well threaded; at least its renderer isn't. If the reported VRAM allocation is actually influenced by HT on ss47ply's setup, chances are it's the NVIDIA driver not spawning extra worker threads when it doesn't detect enough free logical cores to make use of them. The extra threads usually help performance when there is a CPU limitation, but they also have overhead.

Source: https://www.youtube.com/watch?v=nIoZB-cnjc0


SMT is effectively a workaround for memory latency; it allows the processor to compute in a different context when a cache miss occurs.

A cache miss can cause a pipeline stall that the second hardware thread can fill, but there will generally be significant gaps in execution even without cache misses. Modern performance x86 cores are quite wide and very few applications can make good use of all that ILP.

But yes, SMT can increase cache contention, and one of the few times it's recommended to disable it (or to enforce affinity to similar effect) is when an app is well tuned enough to leave no gaps in ILP, and/or cannot handle significant cache contention without losing more performance than would be gained from executing the extra thread.
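The "enforce affinity instead" option can be sketched in Python. This is a hypothetical helper, and it assumes the common Linux enumeration where logical CPUs `i` and `i + n_physical` are SMT siblings; production code should read the real topology from `/sys/devices/system/cpu/cpu*/topology` instead of assuming it.

```python
import os

def one_thread_per_core(n_logical, n_physical):
    """Pick one logical CPU per physical core, assuming logical CPUs
    i and i + n_physical are SMT siblings (common on Linux, but not
    guaranteed -- check the sysfs topology files in real code)."""
    if n_logical == n_physical:       # SMT off or absent: use everything
        return set(range(n_logical))
    return set(range(n_physical))     # first sibling of each core

mask = one_thread_per_core(n_logical=8, n_physical=4)
print(mask)  # {0, 1, 2, 3}

# Apply to the current process (Linux only), instead of disabling HT
# in the BIOS for every application on the machine:
if hasattr(os, "sched_setaffinity"):
    try:
        os.sched_setaffinity(0, mask)
    except OSError:
        pass  # e.g. a container/cpuset that restricts available CPUs
```

The advantage over a BIOS toggle is that the pinned app gets one thread per core while everything else on the system still benefits from SMT.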
 
It's not just planets... when I open the main menu, I can see the textures being loaded on my ship: first low, then medium, then high (regardless of my settings), similar to the planet ground texture tiles, like layers being added. I think I'll go back to Horizons, because there I can play on Ultra without any of these glitches and everything looks sharp instantly.

I hope it's an optimization issue and it gets fixed, because GPUs are very expensive now thanks to crypto mining, covid and tariffs. I bought my current GPU two years ago for half of what it costs now. Everything on the GPU market is so expensive and rare. I'm just sad that people with mid-range builds like mine are being left out of the picture. It's like you need a really good GPU with 8-12 GB of VRAM to enjoy the game, because immersion is everything in this game, and nothing kills immersion quicker than seeing low-res textures being loaded and swapped around with the mid- and high-res ones.
 
I've made some particular observations (back and forth, looking at the lifts yet again), and the game actually has more level-of-detail models than is useful. In that particular case, it turns out all of the lower-detail models are bad too; they just have fewer polygons, never mind preserving contrast edges. So visual edges turn into zigzags, first in the claspy bits, then the lights, then the entire walls. The thing should have had a low-detail model with a tessellation shader to reproduce the cylindrical shape as needed. It's a rush job. As for ground textures, those should mostly be procedural, not giant blocks of image data, perhaps using something like a sparsely mapped mipmap with fallback to lower detail. Well, the fallback works (unlike the planets blinking out to black, but that was just patched), but the demand estimation doesn't, and it's evicting what we need in favour of something else.
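A common way to pick an LOD, sketched here with hypothetical numbers, is to select the coarsest model whose worst-case geometric deviation from the full-detail mesh projects to less than a few pixels on screen; the zigzag walls above are what you get when the lower LODs deviate far more than their slot in such a scheme should allow.

```python
import math

def pick_lod(distance_m, fov_deg, screen_height_px, geometric_errors_m,
             max_error_px=2.0):
    """Pick the coarsest LOD whose projected geometric error stays under
    max_error_px. geometric_errors_m[i] is the worst-case deviation (in
    metres) of LOD i from the full-detail mesh; index 0 = full detail,
    and the errors are assumed to grow with the index."""
    # Pixels per metre at this distance, perspective projection.
    px_per_m = screen_height_px / (
        2 * distance_m * math.tan(math.radians(fov_deg) / 2))
    chosen = 0
    for lod, err_m in enumerate(geometric_errors_m):
        if err_m * px_per_m <= max_error_px:
            chosen = lod          # coarser is cheaper; keep going
    return chosen

# Hypothetical lift model: LOD0 exact, LOD1 off by 5 mm, LOD2 off by 5 cm.
errors = [0.0, 0.005, 0.05]
print(pick_lod(2.0, 60, 1440, errors))    # up close: full detail needed
print(pick_lod(50.0, 60, 1440, errors))   # far away: coarsest LOD is fine
```

With a budget like this, a 5 cm error on a lift clasp is only acceptable tens of metres away; if the artists' low LODs blow that budget at the distances they are actually shown, the edges visibly zigzag no matter how well the selection logic works.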
 
I can tell people EDO runs with less than 4 GB on Ultra in 4K. I pre-filled 6 GB of my 3080 FE's VRAM before launching EDO. No difference.

Even AAA games barely use more than 6 GB at the top end.
 