Taking heat damage: Thermal Throttle :(

It should be noted that modern GPUs and CPUs can (and should) function at much higher temperatures than what was normal a few years ago. Under heavy load, 90 degrees Celsius on some cards is not a problem nor a cause for concern. AMD, for example, runs very hot in comparison to older hardware, and that's fine.
 
Just an extra little factoid: most on-screen displays only show the temps on the main chip anyway. These are the stock manufacturer's settings for my card, and you can see some parts of the card are running hotter than is being reported. So if you are going to panic over temps anyway, you may as well just shut the whole thing down, because you don't really know the exact temps of every part of your system; you only know about the parts with sensors on them.
 

Attachments: two screenshots of the card's stock settings and reported sensor temperatures.
I have to downclock my 1080 Ti, otherwise it locks up the PC. No other game does that. :) And reaching settlements makes my HOTAS go nuts.
It makes my dogs howl...

However, maybe it's a good time to test the CPU and GPU loads and check when it gets hot? I changed my thermal paste and it's now a steady 70C on the GPU side and 45C on the CPU side.

GTX 1080 OC
i7-4960X
32 GB RAM
 
So Horizons pulls 327W to render 113 frames per second, and Odyssey pulls 298W to render 76.

That makes Horizons significantly less power-hungry per frame. If you cap both at 60fps, Horizons should run cooler.
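
A quick back-of-the-envelope check on those figures (just arithmetic on the numbers quoted above; the 60fps line is a naive linear extrapolation, not a measurement):

```python
# Watts per frame-per-second, using the figures quoted above.
horizons_w, horizons_fps = 327, 113
odyssey_w, odyssey_fps = 298, 76

print(f"Horizons: {horizons_w / horizons_fps:.2f} W per fps")  # ~2.89
print(f"Odyssey:  {odyssey_w / odyssey_fps:.2f} W per fps")    # ~3.92

# Naive linear estimate of draw with both capped at 60fps:
cap = 60
print(f"Horizons @ {cap}fps: ~{cap * horizons_w / horizons_fps:.0f} W")  # ~174 W
print(f"Odyssey  @ {cap}fps: ~{cap * odyssey_w / odyssey_fps:.0f} W")    # ~235 W
```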

That's a given.

Point was that Horizons is capable of loading hardware more heavily than Odyssey, if allowed to run unconstrained.

Of course, since Drew is limiting frame rate, Odyssey is always going to be hotter, unless he's in an area where the Horizons client would be running near its peak loads anyway.

I could use other words or terms, but these work just fine in this specific case.

Software utilizing the resources provided to it is not a fault or wrong.

Even if the game were well-optimized, load in unconstrained scenarios would not go down; it would probably go up. It's not the game's job to do what power limiters should be doing.
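
For anyone who does want a hard cap that's independent of the game, the driver lets you lower the board power limit directly. A minimal sketch of the idea on an NVIDIA card (assumes nvidia-smi is on the PATH and you have admin/root rights; the allowed range varies by card and driver, and the 250W figure is just an example, not a recommendation):

```python
import subprocess

# Show the current, default, and min/max enforceable power limits.
subprocess.run(["nvidia-smi", "-q", "-d", "POWER"], check=True)

# Lower the board power limit (in watts). Must be within the enforced
# min/max range reported above, and needs elevated privileges.
target_watts = "250"  # example value only
subprocess.run(["nvidia-smi", "-pl", target_watts], check=True)
```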

It should be noted that modern GPUs and CPUs can (and should) function at much higher temperatures than what was normal a few years ago. Under heavy load, 90 degrees Celsius on some cards is not a problem nor a cause for concern. AMD, for example, runs very hot in comparison to older hardware, and that's fine.

This has more to do with how temperatures are reported than how hot things are actually getting.

AM4 platforms try to report actual on-die/junction temperatures, whereas prior CPU temp readings were trying to simulate tCASE (the temperature at the top of the geometric center of the lid). The difference between the two is easily ~20C at the same true temperature. There was a similar trend with Intel platforms, before people realized how the on-die DTS worked and noted that every Intel part had a TJmax of 95-105C, even if it had tCASE limits in the 60s C, and that these limits were reached at nearly the same actual temperature.
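
Just to make the scale of that reporting gap concrete, a trivial illustration using the ~20C figure above (the offset is a rough assumption for illustration; real sensors don't map onto each other this neatly):

```python
# Purely illustrative: the same physical condition read out two different ways,
# using the ~20C junction-vs-lid gap mentioned above as a stand-in offset.
ASSUMED_JUNCTION_MINUS_CASE_C = 20  # varies by part, load, and cooler

junction_reading_c = 90  # what an AM4-style junction sensor might report
case_style_estimate_c = junction_reading_c - ASSUMED_JUNCTION_MINUS_CASE_C

print(f"Junction-style reading: {junction_reading_c} C")
print(f"Old tCASE-style figure for the same condition: ~{case_style_estimate_c} C")
```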

FD provide you with a whole bunch of settings to adjust how much strain you put on your computer parts.

The frame rate limiters are the only such settings. Barring a bottleneck somewhere, turning down settings doesn't reduce load; it increases frame rates.

Odyssey has all sorts of strange bottlenecks though.
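
The reason a frame cap reduces load when lower settings don't: with a cap, the GPU finishes each frame early and then sits idle for the rest of the frame budget. A minimal illustrative pacing loop (not anything from the game's own limiter):

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds available per frame at the cap

def render_frame():
    # Stand-in for the real rendering work done by the GPU each frame.
    time.sleep(0.005)

for _ in range(600):  # ~10 seconds at 60fps
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    # Any time left in the budget is spent sleeping instead of rendering
    # extra frames - that idle time is where the power and heat savings come from.
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)
```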
 
High(ish) GPU utilization (and temperature) is not necessarily a bad sign; it can mean that the CPU can keep up and feed the GPU.
Then the mighty fps-killer NPCs arrive and the fps figures go down to crap levels, along with the GPU utilization (and temperature):
Source: https://www.youtube.com/watch?v=31KblQkriR0

This is Horizons BTW.
Problem is that it has not always been this way. Some of the previous updates (probably the dreaded ARX update) screwed it up.
 
This turned out to be a problem with Streamlabs OBS and EDO competing for encoding resources.

This is interesting.

The 1660 has the same encoder hardware as the higher-end parts, which would make it a proportionally more significant factor in GPU power consumption and resource utilization, but I still would not have expected it to be the difference between thermally throttling and not. Does seem to have made enough difference in your case though.

Wouldn't have thought there was such a thing. Sounds interesting... 📖

There is generally no power or thermal headroom at stock voltages with reference or other modest air cooling. So, to overclock meaningfully without radically augmenting cooling, you need to reduce the generally excessive GPU core voltage supplied, preferably to the minimum stable amount for the peak clocks one is targeting.

Out of the box this 6800 XT delivers 1150mV at load while my 3080 requests about 1062mV. I run the former at 1088mV and the latter at about 900mV. I keep power and temp limits in the same ballpark as stock, but I can get several hundred more MHz out of the respective GPUs at the reduced voltages while staying within those limits.

Of course, you can also reduce power, heat, and noise considerably, if not trying to push clocks further. There are some games where I've played for hours at my mining settings (which use about half the power of my gaming settings) before I noticed any performance issues that made me go back and check what profile I had loaded.
 
Excuse the cheekiness, but you "know your onions" - could you recommend undervolting settings for a 1080 and a 1080 Ti (different computers)?
 
If you're using a closed liquid loop for your 8700, Drew, you should check whether it's having permeation issues and the cooling isn't as effective as it used to be. Given how your CPU isn't really working that hard, it shouldn't be hitting 80C. If you get a new PC in the nearish future (the GPU market is horrible), keep that PC and use it as a dedicated streaming rig though. Even if you get something really nasty like a Ryzen 5900X/5950X, it's always better to just have another capture PC, and you don't need a GPU in the capture PC with an Intel CPU or any other CPU that has integrated graphics.

For the GPU, I can't really say; Odyssey is hard on the GPU, and I have to downsample to avoid baking bread in my PC case.
 
This is interesting.

The 1660 has the same encoder hardware as the higher-end parts, which would make it a proportionally more significant factor in GPU power consumption and resource utilization, but I still would not have expected it to be the difference between thermally throttling and not. Does seem to have made enough difference in your case though.
According to Nvidia, NVENC is a dedicated part of the GPU that isn't used for anything else. If that is true, it makes perfect sense that using NVENC adds to the GPU's total heat output.
 
Excuse the cheekiness, but you "know your onions" - could you recommend undervolting settings for a 1080 and a 1080 Ti (different computers)?

Every sample is a bit different, but the same basic procedure applies to any of the last several generations of NVIDIA hardware.

Get MSI Afterburner and, instead of using any clock or voltage offsets, open up the frequency/voltage curve editor with Ctrl+F. Pick a clock speed and voltage, then use Shift+left-click and drag to highlight the entire curve after the voltage you've decided on as your limit. Click one of the points on the highlighted section of the graph, drag that entire section to some point below the clock speed of your chosen voltage, and apply. Now you just adjust the points at your chosen voltage and lower.

For the 1080 Ti, 900mV @ 1900MHz is a good starting point (~90% of GP102 parts are stable here). As long as the highest point on the curve is 1900MHz, and that point is at 900mV, the card will not exceed that voltage or clock speed. Most 1080 Tis will be stable or close to stable here, but you'll need to do some stress testing to be sure (3DMark Time Spy tends to crash out sooner than most tests, and it's worth the five dollars so that you can loop it), and you may have a bit more room to increase clocks or reduce voltage... or you may need to pull back on clocks or allow the same 1900MHz to take a bit more voltage.
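
To visualise what that curve edit does, here's a purely illustrative sketch of the resulting curve shape (MSI Afterburner doesn't expose this as a scripting API; the curve points are made up and the 900mV/1900MHz figures are just the example above):

```python
def flatten(curve, cap_mv, cap_mhz):
    """Return a copy of a V/F curve with everything past cap_mv pulled down,
    mirroring the Afterburner curve edit described above."""
    out = []
    for mv, mhz in curve:
        if mv < cap_mv:
            out.append((mv, mhz))            # low-voltage points are left alone
        elif mv == cap_mv:
            out.append((mv, cap_mhz))        # this becomes the highest point on the curve
        else:
            out.append((mv, cap_mhz - 100))  # dragged below the capped clock so it is never selected
    return out

# Made-up example points; a real curve has far more of them.
stock_curve = [(800, 1750), (850, 1830), (900, 1860), (950, 1935), (1000, 1985), (1050, 2025)]
for mv, mhz in flatten(stock_curve, cap_mv=900, cap_mhz=1900):
    print(f"{mv} mV -> {mhz} MHz")
```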

Do note that the curve offsets are heavily temperature-dependent on these parts. If you adjust the curve while the card is cold, you will see lower clocks under load. Ideally, you tune the curve near peak temperature, with a windowed 3D load running in the background to keep temps up and provide immediate feedback on performance/stability changes (I recommend Unigine Valley for convenience and its ability to run at full speed when not in focus, though it's not a particularly great stress test).

The 1080 is close enough that the same numbers may also work, though it's a smaller GPU that will tend to clock a little higher at the same voltage.

A similar procedure will apply to every NVIDIA GPU from the 1xxx, 2xxx, or 3xxx lines (including the OP's 1660), though the specific clocks and voltages that are likely to be stable can vary.

According to Nvidia, NVENC is a dedicated part of the GPU that isn't used for anything else. If that is true, it makes perfect sense that using NVENC adds to the GPU's total heat output.

This is true, but its power consumption is positively tiny relative to the rest of the GPU, at least at the high end, and without overriding the reduced power state it forces the memory into, it's probably less than the power consumption of the last 500MHz of my GDDR5/6X OCs on my 1080 Ti or 3080. The same goes for AMD's VCN equivalent.

None of the GPUs I've personally tested appreciably increase in power consumption when encoding with their hardware encoders while gaming, even at fairly extreme settings (4K60 ~100Mbps HEVC, or H.264 in the case of my older Pascal parts). It's just so small an addition relative to the rest of the GPU.

On a slower card with a capped frame rate... it will be proportionally much more relevant, but the power/temps would still have had to be fairly borderline for NVENC to be the final straw.
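
If anyone wants to sanity-check this on their own card, the driver reports board power draw, so you can average it once with the encoder idle and once while recording/streaming. A minimal sketch (assumes nvidia-smi is on the PATH and reads the first GPU only):

```python
import statistics
import subprocess
import time

def average_power_draw(samples=30, interval_s=1.0):
    """Average board power draw in watts, as reported by nvidia-smi."""
    readings = []
    for _ in range(samples):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        readings.append(float(out.splitlines()[0]))  # first GPU only
        time.sleep(interval_s)
    return statistics.mean(readings)

# Run once while gaming without the encoder, then again with NVENC active,
# and compare the two averages.
print(f"Average board power: {average_power_draw():.1f} W")
```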
 

Oddly, for me the Main Menu is the real killer.

Despite being 60fps, it is clearly on par with visiting a concourse.

So whenever I take a break, I'm currently forced to avoid the Main Menu and instead drop into normal space somewhere within a system.
This is the only low-GPU-load scenario that'll make my GPU/CPU fans throttle down.

PS.
The case type I'm using has proven to be able to safely run 3 high-end GPUs for 24/7 GPGPU High Performance Computing (permanent 100% load on all GPUs) in the past.
Plenty of case fans and airflow ensure safe operation even during hot summers.
 
The frame rate limiters are the only such settings. Barring a bottleneck somewhere, turning down settings doesn't reduce load; it increases frame rates.

Odyssey has all sorts of strange bottlenecks though.
You just said it yourself, though: you can limit the frame rate and turn settings down, thus reducing load on the GPU.

Note that I am not saying the game isn't broken, just that it can be mitigated in some cases.
 