Taking heat damage : Thermal Throttle :(

You just said it yourself, though: you can limit the frame rate and turn settings down, thus reducing load on the GPU.

Note that I am not saying the game isn't broken, just that it can be mitigated in some cases.
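
For anyone who wants to put numbers on how much a frame cap actually takes off the GPU, here's a rough sketch using NVIDIA's NVML bindings (the pynvml package, from nvidia-ml-py): run it once uncapped and once with vsync or a limiter on, then compare the averages. The sample count and interval are arbitrary, and it obviously only covers NVIDIA cards.

```python
# Rough GPU load logger: run during an uncapped session, then again with
# vsync / a frame cap enabled, and compare average power draw and utilization.
# Assumes the nvidia-ml-py package (imported as pynvml) and an NVIDIA GPU.
import time
import pynvml

SAMPLES = 60          # how many samples to take (arbitrary)
INTERVAL_S = 1.0      # seconds between samples (arbitrary)

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
    power_w, util_pct, temp_c = [], [], []
    for _ in range(SAMPLES):
        power_w.append(pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0)  # mW -> W
        util_pct.append(pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu)
        temp_c.append(pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU))
        time.sleep(INTERVAL_S)
    print(f"avg power: {sum(power_w) / len(power_w):.0f} W")
    print(f"avg GPU utilization: {sum(util_pct) / len(util_pct):.0f} %")
    print(f"max temperature: {max(temp_c)} °C")
finally:
    pynvml.nvmlShutdown()
```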

Which is why I run vsync - in Horizons it keeps me at 60fps (with it off I run in the hundreds, so when FSSing in the black my GFX card runs like a space heater {FSS feature resolution is linked to FPS - reasons}). In Oddity we don't have that luxury: it only reaches 60fps in space; everywhere else the GPU load is at max and the FPS is variable but well under the vsync'd 60fps.
 
It's not everyone, though; there is some combination of hardware/software that is causing some machines to struggle and not others. I also run V-sync, and mine mostly hits my 144Hz limit in space at 1440p with everything maxed except motion blur, which is off. I have dropped between 25 and 30% off my frame rate between Horizons and OE, but other than that everything else has stayed the same.

I have no idea why that might be the case of course, there is nothing special about my setup.
 
Every sample is a bit different, but the same basic procedure applies to any of the last several generations of NVIDIA hardware.

Get MSI AB and, instead of using any clock or voltage offsets, open up the frequency/voltage curve editor with CTRL+F. Pick a clock speed and voltage, then use shift+left click and drag to highlight the entire curve after the voltage you've decided on as your limit, then click one of the points on the highlighted section of the graph, drag that entire section to some point below the clock speed of your chosen voltage, and 'apply'. Now you just adjust the points at your chosen voltage and lower.

For the 1080 Ti, 900mV @ 1900MHz is a good starting point (~90% of GP102 parts are stable here). As long as the highest point on the curve is 1900MHz, and that point is at 900mV, the card will not exceed that voltage or clock speed. Most 1080 Tis will be stable or close to stable here, but you'll need to do some stress testing to be sure (3DMark Time Spy tends to crash out sooner than most tests, and it's worth the five dollars so that you can loop it), and you may have a bit more room to increase clocks or reduce voltage...or you may need to pull back on clocks or allow the same 1900MHz to take a bit more voltage.

Do note that the curve offsets are heavily temperature dependent on these parts. If you adjust the curve while the card is cold, you will see lower clocks under load. Ideally, you tune the curve near peak temperature, with a windowed 3D load running in the background to keep temps up and provide immediate feedback on performance/stability changes (I recommend Unigine Valley for convenience and its ability to run at full speed when not in focus; it's not a particularly great stress test, though).
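
If you don't want to stare at the Afterburner monitoring graphs while you tune, a minimal NVML sketch like the one below (again assuming the pynvml package and an NVIDIA card) will print the core clock next to the temperature, so you can watch the temperature-dependent offsets kick in as the card warms up under your background load.

```python
# Print core clock vs. temperature once a second so the temperature-dependent
# clock offsets are visible while tuning the V/F curve under a background load.
# Assumes the nvidia-ml-py (pynvml) package and an NVIDIA GPU.
import time
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    while True:
        clock_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
        temp_c = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
        print(f"{temp_c:3d} °C  ->  {clock_mhz:4d} MHz core")
        time.sleep(1.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```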

The 1080 is close enough that the same numbers may also work, though it's a smaller GPU that will tend to clock a little higher at the same voltage.

A similar procedure will apply to every NVIDIA GPU from the 1xxx, 2xxx, or 3xxx lines (including the OP's 1660), though the specific clocks and voltages that are likely to be stable can vary.



This is true, but its power consumption is positively tiny relative to the rest of the GPU, at least at the high end, and without overriding the reduced power state it forces the memory into, it's probably less than the power consumption of the last 500MHz of my GDDR5/6X OCs on my 1080 Ti or 3080. The same goes for AMD's VCN equivalent.

None of the GPUs I've personally tested appreciably increase in power consumption when encoding with their hardware encoders while gaming, even at fairly extreme settings (4K60 ~100Mbps HEVC, or H.264 in the case of my older Pascal parts). It's just so small an addition relative to the rest of the GPU.

On a slower card with a capped frame rate it will be proportionally much more relevant, but the power/temps would still have had to be fairly borderline for NVENC to be the final straw.
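
If anyone wants to sanity-check that on their own card, a crude way is to start a synthetic NVENC encode with ffmpeg and sample power draw while it runs. The sketch below does that with pynvml and ffmpeg's built-in test source; it assumes ffmpeg is on the PATH and was built with hevc_nvenc, and the resolution/bitrate are just the "fairly extreme" settings mentioned above.

```python
# Crude NVENC power check: sample GPU power at a baseline, then again while
# ffmpeg runs a synthetic 4K60 ~100Mbps HEVC encode on the hardware encoder.
# Assumes ffmpeg (built with hevc_nvenc) on PATH and nvidia-ml-py (pynvml).
import subprocess
import time
import pynvml

def avg_power_w(gpu, seconds=10):
    samples = []
    for _ in range(seconds):
        samples.append(pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0)  # mW -> W
        time.sleep(1.0)
    return sum(samples) / len(samples)

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

baseline = avg_power_w(gpu)

# 4K60 test pattern, ~100Mbps HEVC on NVENC, output discarded.
encode = subprocess.Popen(
    ["ffmpeg", "-loglevel", "quiet",
     "-f", "lavfi", "-i", "testsrc2=size=3840x2160:rate=60",
     "-t", "60", "-c:v", "hevc_nvenc", "-b:v", "100M", "-f", "null", "-"],
)
time.sleep(5.0)               # let the encoder spin up before sampling
encoding = avg_power_w(gpu)
encode.wait()

print(f"baseline: {baseline:.0f} W, while encoding: {encoding:.0f} W, "
      f"delta: {encoding - baseline:.0f} W")
pynvml.nvmlShutdown()
```

Ideally you'd take the baseline reading while the game is already running rather than at the desktop, so the delta reflects NVENC being added on top of a real 3D load.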
I salute you, o7. That was a very detailed and helpful explanation, and I am extremely grateful to you for taking the time to explain the best approach to undervolting for me. A genuine and sincere THANK YOU.
 
That's generally what's meant by overheating today: the part has to throttle to avoid damage. It's a symptom of a problem with your build, not a problem with the software - there's something wrong with the cooling, as it shouldn't be possible for software to do this to you even if it's deliberately trying. If you can identify which component has faulty cooling, you can fix it, e.g. via the benchmark tools I posted above. If it's the GPU throttling, it probably just needs dusting, since GPU manufacturers select appropriate heatsinks/fans for their hardware. If it's the CPU throttling and cleaning the heatsink doesn't fix it, you might need a different heatsink or to reinstall it.
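
As a rough way to narrow down which side is throttling, something like the sketch below can log CPU and GPU temperatures side by side while a benchmark or the game runs. It leans on psutil's sensors_temperatures() (Linux-only) for the CPU and on pynvml for the GPU, including the throttle reasons NVML reports, so treat it as a starting point rather than a proper monitoring tool.

```python
# Log CPU and GPU temperatures together while a benchmark runs, and flag
# when the GPU reports thermal throttling.  CPU readings use psutil's
# sensors_temperatures(), which is Linux-only; GPU readings use pynvml (NVIDIA).
import time
import psutil
import pynvml

THERMAL_REASONS = (pynvml.nvmlClocksThrottleReasonSwThermalSlowdown
                   | pynvml.nvmlClocksThrottleReasonHwThermalSlowdown)

def cpu_temp_c():
    # Pick the hottest sensor reading; labels vary by motherboard/driver.
    readings = [t.current for temps in psutil.sensors_temperatures().values() for t in temps]
    return max(readings) if readings else float("nan")

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
try:
    while True:
        gpu_temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
        reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(gpu)
        flag = "  << GPU thermal throttling" if reasons & THERMAL_REASONS else ""
        print(f"CPU {cpu_temp_c():5.1f} °C | GPU {gpu_temp:3d} °C{flag}")
        time.sleep(2.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```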

I have an R9 3900XT (40°C) and an RX 5700 XT (50°C) stable with Odyssey cranked to Ultra with Vsync off (90-140fps).

With V-Sync on (30fps), the temps drop to 40°C on both, with normal fan speeds.

I highly recommend you replace the thermal paste on both the CPU and GPU with "Thermal Grizzly's Conductonaut", an ultra-high-performance liquid metal compound.

This stuff looks and acts like mercury, but it is not mercury (which is poisonous); it's a metal alloy that stays liquid at room temperature. It forms a void-free connection between the heatsink and the CPU/GPU. My CPU/GPU were running in the 60-70°C range before I switched to this compound.

As well, I have a brick on my CPU: a "Dark Rock Pro 4", a high-performance 250W CPU cooler. With the compound above, this cooler keeps my CPU stable at 40-60°C under extreme system usage. With Odyssey, I was not able to get the frame rates I wanted given the broken code, and this combo lets me play at Ultra with V-Sync off, with stable temps and minimal temperature problems.
 
Under a custom loop, 2h of gameplay.
The game manages to peak my card at 300W... nice, I have a good cooling solution. The averages are interesting: the CPU is pretty underused at 31.1% on average.
Room temp is 21-22°C.
 
This is not good. Are people here old enough to remember the emergency firmwares put out a decade ago, when FurMark loaded GPUs in a way the GPU manufacturers hadn't foreseen and roasted the VRMs? This smells of the same sort of problem. I think we're getting to where Frontier needs to start acknowledging that the entire new rendering engine has huge problems.

I'm starting to think that the LONG TERM best solution for Frontier would be a recall-and-refund. It will be short-term expensive, but it will give them a HUGE amount of goodwill from a community that WAS among the industry's best. Continuing the "not at launch" kind of business, where they fight talon and beak against any refunds, might give them a cent or two of extra short-term income, but it will cost them a LOT in future sales: not only is their playerbase leaving in all directions (as are some of their most important content creators, even some of those they bribed with cheap plastic and glassware), the faith in the company and any of their future products is going down faster than their stock value. That would probably mean selling any future title will be an uphill battle, because the gaming community remembers. Trust is hard won, and easily lost.
 
I highly recommend you replace the thermal paste on both the CPU and GPU with "Thermal Grizzly's Conductonaut", an ultra-high-performance liquid metal compound.

This stuff looks and acts like mercury, but it is not mercury (which is poisonous); it's a metal alloy that stays liquid at room temperature. It forms a void-free connection between the heatsink and the CPU/GPU. My CPU/GPU were running in the 60-70°C range before I switched to this compound.

There are a lot of contraindications and considerations one should be aware of before using liquid metal thermal interface materials. They are highly electrically conductive, so they can't be allowed to come into contact with anything that could short out. The gallium in them rapidly and destructively alloys with aluminum, so it cannot be allowed near any aluminum components, or even residues of old TIMs that contain metallic aluminum or alumina particles. It alloys with bare copper and will often lose enough volume over a period of months to years that it can 'dry up' (actually being absorbed by the copper surface) and compromise cooling performance...there are ways to mitigate this, but they are beyond most consumer knowledge/patience.

About the only problem free way to use these gallium-indium alloy liquid metal TIMs long term is between two extremely clean nickel plated surfaces, or a nickel (or gold) plated surface and bare silicon. Even then, they can stain heat spreader surfaces and void CPU warranties. Some people get away with using them on untreated bare-copper, but many of these will eventually encounter the aforementioned absorption problem, or have to use so much that they run the risk of having excess drip onto something it shouldn't touch.

They do perform well--especially on high thermal density parts in direct contact with a cooler base--but they should not be necessary on components not already using them out of the box. If the few degrees saved by swapping out a good traditional TIM for a liquid metal one is enough to make the difference on a stock part, something else was wrong.

This smells of the same sort of problem.

Except that the kinds of loads here aren't unforeseen, aren't dramatically beyond the norm, and aren't bypassing any limiters already in place to preserve the longevity of parts.

A normally functioning system at stock settings is just going to run Elite: Dangerous at slightly lower clocks than it runs most other games, because it will hit the rather conservative default power limits sooner than most other games.
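
One easy way to see that in action is to compare the card's reported draw against its enforced power limit while the game is running; something like the pynvml sketch below (NVIDIA only, sample count arbitrary) will show the card parked at or just under the limit whenever it's power-limit bound rather than thermally bound.

```python
# Show current power draw vs. the enforced board power limit; if the card sits
# right at the limit under load, it's power-limit bound and will simply run at
# slightly lower clocks rather than overheat.  Assumes pynvml / NVIDIA.
import time
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000.0  # mW -> W
    for _ in range(30):
        draw_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0
        print(f"{draw_w:6.1f} W of {limit_w:.0f} W limit ({100 * draw_w / limit_w:.0f} %)")
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
```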

I think we're getting to where Frontier needs to start acknowledging that the entire new rendering engine has huge problems.

At least when running uncapped, Odyssey isn't loading GPUs more heavily than Horizons or vanilla ED could; it's just doing so at lower frame rates, so all these people running 60Hz vsync are actually seeing significant GPU load where they wouldn't have before.
 