Oddly, for me the Main Menu is the real killer...
It has always annoyed me that the main menu pulls such a high load; I assumed it must actually be drawing a station in the background. Someone explained it a while back, but I forget the details.
You just said it yourself, though: you can limit the frame rate and turn settings down, thus reducing the load on the GPU.
Note that I am not saying the game isn't broken, just that the load can be mitigated in some cases.
It's not everyone though; there is some combination of hardware/software that is causing some machines to struggle and not others. I also run V-sync, and mine is mostly hitting my 144 Hz limit in space at 1440p with everything maxed except motion blur, which is off. I have dropped between 25 and 30% off my frame rate between Horizons and OE, but other than that everything else has stayed the same.

Which is why I run V-sync: in Horizons it keeps me at 60 FPS (with it off I run in the hundreds, so when FSSing in the black my GFX card runs like a space heater {FSS feature resolution is linked to FPS, for reasons}). In Oddity we don't have that luxury: we only reach 60 FPS in space, and everywhere else the GPU load is at max while the FPS is variable but well under the V-sync 60 FPS.
I salute you, o7. That was a very detailed and helpful explanation, and I am extremely grateful that you took the time to explain the best approach to undervolting for me. A genuine and sincere THANK YOU.

Every sample is a bit different, but the same basic procedure applies to any of the last several generations of NVIDIA hardware.
Get MSI Afterburner and, instead of using any clock or voltage offsets, open the frequency/voltage curve editor with Ctrl+F. Pick a clock speed and voltage, then use Shift+left click and drag to highlight the entire curve beyond the voltage you've decided on as your limit, then click one of the points on the highlighted section of the graph, drag that entire section to some point below the clock speed of your chosen voltage, and hit 'apply'. Now you just adjust the points at your chosen voltage and lower.
For the 1080 Ti, 900 mV @ 1900 MHz is a good starting point (~90% of GP102 parts are stable here). As long as the highest point on the curve is 1900 MHz, and that point is at 900 mV, the card will not exceed that voltage or clock speed. Most 1080 Tis will be stable or close to stable here, but you'll need to do some stress testing to be sure (3DMark Time Spy tends to crash out sooner than most tests, and it's worth the five dollars so that you can loop it). You may have a bit more room to increase clocks or reduce voltage... or you may need to pull back on clocks, or allow the same 1900 MHz a bit more voltage.
Do note that the curve offsets are heavily temperature dependent on these parts. If you adjust the curve while the card is cold, you will see lower clocks under load. Ideally, you tune the curve near peak temperature, with a windowed 3D load running in the background to keep temps up and provide immediate feedback on performance/stability changes (I recommend Unigine Valley for convenience and its ability to run at full speed when not in focus; it's not a particularly great stress test, though).
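If you want hard numbers while you tune and stress test, NVML makes it easy to log the core clock, temperature, and board power from a script. A minimal sketch, assuming the nvidia-ml-py (pynvml) package is installed and that the card you're tuning is GPU 0:

```python
# Minimal GPU monitor for curve tuning: logs core clock, temperature,
# and board power once a second. Assumes nvidia-ml-py (pip install nvidia-ml-py).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumption: card under test is GPU 0

try:
    while True:
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)    # MHz
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)  # deg C
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0                      # mW -> W
        print(f"core {clock:4d} MHz | {temp:3d} C | {watts:6.1f} W")
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```

Leave it running in a terminal next to the benchmark; as the card approaches peak temperature you'll see the effective clock drop a bin or two, which is exactly why the curve should be tuned hot.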
The 1080 is close enough that the same numbers may also work, though it's a smaller GPU that will tend to clock a little higher at the same voltage.
A similar procedure will apply to every NVIDIA GPU from the 1xxx, 2xxx, or 3xxx lines (including the OP's 1660), though the specific clocks and voltages that are likely to be stable can vary.
This is true, but its power consumption is positively tiny relative to the rest of the GPU, at least at the high end, and without overriding the reduced power state it forces the memory into, it's probably less than the power consumption of the last 500 MHz of my GDDR5X/6X OCs on my 1080 Ti or 3080. The same goes for AMD's VCN equivalent.
None of the GPUs I've personally tested appreciably increase in power consumption when encoding with their hardware encoders while gaming, even at fairly extreme settings (4K60 at ~100 Mbps HEVC, or H.264 in the case of my older Pascal parts). It's just so small an addition relative to the rest of the GPU.
With a slower card and a capped frame rate it will be proportionally much more relevant, but the power/temp would still have had to be fairly borderline for NVENC to be the final straw.
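If anyone wants to check this on their own card rather than take my word for it, NVML also reports board power and NVENC utilization, so a before/after comparison while encoding is a few lines of script. Another sketch under the same nvidia-ml-py (pynvml) assumption:

```python
# Rough NVENC overhead check, assuming nvidia-ml-py (pynvml).
# Run once while just gaming, then again while gaming + encoding,
# and compare the board power figures.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

for _ in range(30):  # sample for ~30 seconds
    watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0            # board power, W
    enc_util, _period = pynvml.nvmlDeviceGetEncoderUtilization(handle)  # NVENC busy %
    print(f"board power {watts:6.1f} W | encoder {enc_util:3d}% busy")
    time.sleep(1)

pynvml.nvmlShutdown()
```

On anything high-end, the difference in board power between the two runs tends to be lost in the noise, which is the point above.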
That's generally what's meant by overheating today: the point where the hardware has to throttle to avoid damage. It's a symptom of a problem with your build, not a problem with the software; there's something wrong with the cooling, as it shouldn't be possible for software to do this to you even if it's deliberately trying. If you can identify which component has faulty cooling, you can fix it, e.g. via the benchmark tools I posted above. If it's the GPU throttling, it probably just needs to be dusted, since GPU manufacturers select appropriate heatsinks/fans for their hardware. If it's the CPU throttling and cleaning the heatsink doesn't fix it, you might need a different heatsink, or to reinstall the one you have.
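On NVIDIA hardware you don't even have to guess which limit is being hit: the driver exposes the current throttle reasons as a bitmask. A sketch, again assuming nvidia-ml-py (pynvml) and a reasonably recent driver:

```python
# Check why an NVIDIA GPU is currently throttling, assuming nvidia-ml-py
# (pynvml). Thermal bits set under load point at a cooling problem.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)

checks = {
    "thermal (software limit)":    pynvml.nvmlClocksThrottleReasonSwThermalSlowdown,
    "thermal (hardware slowdown)": pynvml.nvmlClocksThrottleReasonHwThermalSlowdown,
    "power cap":                   pynvml.nvmlClocksThrottleReasonSwPowerCap,
    "idle":                        pynvml.nvmlClocksThrottleReasonGpuIdle,
}
for name, bit in checks.items():
    if reasons & bit:
        print(f"throttling: {name}")

pynvml.nvmlShutdown()
```

If the thermal bits show up under load, the cooling is the problem, exactly as described above; if only the power-cap bit shows up, the card is just behaving normally.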
I highly recommend you replace the thermal compound on both the CPU and GPU with Thermal Grizzly's Conductonaut, an ultra-high-performance liquid metal compound.
This stuff looks and acts like mercury, but it is not mercury, which is poisonous; it's a metal that stays liquid at room temperature. It forms a void-free connection between the heatsink and the CPU/GPU. My CPU/GPU was running in the 60-70°C range before I switched to this compound.
This smells of the same sort of problem.
I think we're getting to the point where Frontier needs to start acknowledging that the entire new rendering engine has huge problems.