AMD FidelityFX CAS vs GFX card overheating

rootsrat

Volunteer Moderator
I wonder if anyone else has this issue:

Whenever I use AMD FidelityFX CAS SuperSampling, my gfx card seems to overheat to 100 degrees Celsius and subsequently my PC switches itself off. I tried it at 1.75, 1.50 and 1.25 and it always results in the above scenario. I did not dare run it at 1.0 and risk it happening again - 3 times is enough ;)

Anyone else experienced that?

Note that when I use the built-in Supersampling I can run it with no overheating issues, but the performance drop is too high at 1.50 and above, so my framerates fall to unacceptable levels. I realise FidelityFX CAS is a sharpening tech above all else, but it does include SS tech as well, and that seems to work much better than the Cobra engine one... apart from the overheating part.

My specs:

CPU - Intel Core i7 9700K @ 4.90 GHz
RAM - 32GB Corsair Vengeance LPX DDR4 3200MHz
GFX - GeForce RTX 3080 GAMING OC 10G
Storage - 1 TB Samsung 970 EVO Plus NVMe M.2
 
Hi,
I'm not using such a powerful system (i5 o/c to 4.2 GHz, RTX 2060) but a couple of thoughts:-
1) I'm using Thundermaster to manage the GPU cooling - I've made the fan operation more aggressive than standard - this helps cool the GPU
2) I need to keep the fan filters on my Fractal case cleaned regularly.
3) My monitor is 60 Hz refresh, so I've used the nVidia control panel to limit the GPU to 60 fps - there's no advantage in having the GPU process more frames than the monitor can display. Without the restriction, the GPU would crank out up to 120 fps (or more depending on the objects) and this would cause the cooling fans to spool up and the GPU temperatures to climb towards 200C.

Apart from the obvious suggestions of making sure that the fans are running properly and there's no dust clogging - how is the airflow in the PC case? Are there cables obstructing the airflow around the GPU?
IIRC, the GPU is quite chunky - is there another expansion card in a neighbouring slot that could be moved?

HTH

Colin
 

rootsrat

Volunteer Moderator
Hi,
I'm not using such a powerful system (i5 o/c to 4.2 GHz, RTX 2060) but a couple of thoughts:-
1) I'm using Thundermaster to manage the GPU cooling - I've made the fan operation more aggressive than standard - this helps cool the GPU
2) I need to keep the fan filters on my Fractal case cleaned regularly.
3) My monitor is 60 Hz refresh, so I've used the nVidia control panel to limit the GPU to 60 fps - there's no advantage in having the GPU process more frames than the monitor can display. Without the restriction, the GPU would crank out up to 120 fps (or more depending on the objects) and this would cause the cooling fans to spool up and the GPU temperatures to climb towards 200C.

Apart from the obvious suggestions of making sure that the fans are running properly and there's no dust clogging - how is the airflow in the PC case? Are there cables obstructing the airflow around the GPU?
IIRC, the GPU is quite chunky - is there another expansion card in a neighbouring slot that could be moved?

HTH

Colin
Thanks for the suggestions. I use Gigabyte software for fan control and limit the temp to 80 Celsius. The fans do a great job of holding that temp even under the heaviest load in any game - with one exception, per my OP :)

I keep my PC clean and maintain it regularly. I'm a power user with lots of experience and rather savvy on all that stuff - I've been building my own PCs from scratch since forever. The airflow in the case is good and there is plenty of space and clearance, with proper cable management etc.

My screen is 144Hz. Limiting in-game FPS to 60 doesn't help with the issue. CPU and GPU are not overclocked.
 

Attachments

  • 20230120_162939.jpg (1.2 MB)
Last edited:
You have a hardware problem on your GPU (related to the cooler not doing its job, maybe a bad design of your custom graphics card)
 
Last edited:
I think it is symptomatic of the graphics card being loaded at 100%: as the cooling is not efficient, the temperature goes up and up until it crashes.
Maybe it's overclocked too much?
 

rootsrat

Volunteer Moderator
I think it is symptomatic of the graphics card being loaded at 100%: as the cooling is not efficient, the temperature goes up and up until it crashes.
Maybe it's overclocked too much?
It's not overclocked - well, not beyond the factory OC. When I use the built-in SS, even at 1.75, the temps reach the limit I set and the fans max out, but they manage to keep it at 80 degrees.

With FidelityFX I have the issue even at 1.25. I'm pretty sure it's not the hardware, hence I've asked whether anyone else using FFX CAS Supersampling has the same issue.

The hardware is stable, factory OC, it passes all benchmarks I throw at it, also very demanding games like Cyberpunk with RTX etc.

Believe me, I would have found out much sooner if it was a faulty card. As mentioned, I'm a rather experienced user, including with more advanced overclocking, voltage increases etc. I've done it all, aside from liquid nitrogen lol
 
Last edited:
I no longer have an RTX 3080 on hand, so out of curiosity I tested FidelityFX CAS and various supersampling multipliers on my 4090. Running CAS is slightly warmer than not running it, and relative to my usual settings 1.5x SS is hottest, with or without CAS.

Whenever I use AMD FidelityFX CAS SuperSampling, my gfx card seems to overheat to 100 degrees Celsius and subsequently my PC switches itself off.

The sharpening filter is a shader-heavy post-process effect in a shader-heavy game on a GPU that is overweight on shader performance.

Anyone else experienced that?

I'm sure someone has, but it's not normal and is indicative of a serious cooling problem. That model of 3080 has a power limit low enough (~370w... correction: 350w for the rev 1.0, 320w for the rev 2.0, at stock) that nothing like this should ever happen.

Note when I use the built-in Supersampling I can run it at with no overheating issues, but the performance drop is too high at 1.50 and above, so my framerates drop down to unacceptable levels. I realise FidelityFX CAS is a sharpening tech, above all else, but it does include SS tech as well, and that seems to work much better than the Cobra engine one... apart from the overheating part.

Probably because GA102 is normally fill rate limited in EDO and normal supersampling is running into a fill rate bottleneck before a shader one.

The hardware is stable, factory OC, it passes all benchmarks I throw at it, also very demanding games like Cyberpunk with RTX etc.

Do not confuse visual fidelity or performance with capacity to load hardware. Cyberpunk with RTX is considerably less demanding than Elite Dangerous: Odyssey on this architecture (and its successor). If you want to see something really neat, try running Path of Exile with global illumination enabled. It's the only title I've ever heard of that I was not able to get to run without power throttling at reasonable clocks, on any of my RTX 3080s.

Some examples, in demanding GPU limited areas of games I've been playing recently:

ZD7NF4r.jpg

hmm7mNG.jpg

FfwJHt1.jpg

HFRkgAc.jpg

While not directly comparable to Ampere (though Lovelace is quite similar in overall balance), Cyberpunk 2077 wasn't really a worst case load on that architecture either.

Believe me, I would have found out much sooner if it was a faulty card.

A non-faulty card cannot be overheated to the point of shutdown through software. Unless you've cross flashed the card to something with higher power limit firmware, or have manipulated the fan curve, there shouldn't be anything that can get the card that hot before being throttled back by the power limiter. Something is probably wrong with the cooler or the TIM between it and the GPU or GDDR6X.

Another possibility is that transient power demands (which are bad on GA102) are triggering OCP on your power supply.

What does HWiNFO say about power and temps while playing EDO?
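
If you'd rather not eyeball an overlay, HWiNFO can also log its sensors to a CSV file, and a few lines of Python will pull the session peaks out of it afterwards. A minimal sketch, assuming a comma-separated log with one header row of sensor names - the column substrings and the hwinfo_log.csv filename are placeholders to adjust for your own log:

Code:
import csv

# Substrings of the sensor columns of interest -- adjust to match your log's header.
INTERESTING = ["GPU Temperature", "Hot Spot", "Memory Junction", "GPU Power"]

def summarise(path="hwinfo_log.csv"):
    maxima = {}
    with open(path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f):
            for name, value in row.items():
                if name and any(key.lower() in name.lower() for key in INTERESTING):
                    try:
                        v = float(value)
                    except (TypeError, ValueError):
                        continue  # skips the units row and any non-numeric cells
                    maxima[name] = max(v, maxima.get(name, v))
    for name, peak in sorted(maxima.items()):
        print(f"{name}: peak {peak:.1f}")

summarise()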

Edit: updated for clarity and with some examples.
 
Last edited:
Interesting problem. Some random ideas:

Do you get an IQ improvement from AMD + supersampling rather than “normal”, or are you playing at 4K?

Could it be your case? The teenage trick of taking off the side panel and pointing a real fan at it might provide more info.

I like the psu theory as well.

As far as I know, overclocking on Nvidia is completely implemented by the chip itself these days - you're only providing hints to the GPU unless you are customising the BIOS. Anyway, the chip itself should be enforcing its limits. Also, from unfortunate laptop experience, when limits are hit you should just see a severe sustained throttle, not a shutdown. Power supply?
 
For a dramatic example of an internal bottleneck that can influence power consumption in ways that are perhaps counterintuitive, I'll use FurMark at 1080p...

0x MSAA:
eulhoZb.png

686 fps, ~602w total board power.

4x MSAA:
ovn0qhp.png

464 fps, 479w.

8x MSAA:
Nxe6DfM.png

247 fps, 346w.

Multi-sampling does nothing to increase shader load, and the GPU is shader limited without it, which is where we see the highest power consumption, because Lovelace, like Ampere before it, is a very shader-heavy architecture.

Increasing multi-sampling dramatically increases fill rate demands, making raw pixel pushing power, not shader power, the bottleneck. Shader utilization falls proportionally with frame rate in this test, while the frame rate is capped by maximum pixel fill rate. So, as MSAA values are increased, frame rate and GPU power consumption fall vaguely linearly.
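
A rough way to read those three runs is energy per frame - just back-of-the-envelope arithmetic, dividing board power by frame rate for each run above:

Code:
# Joules per frame = board power (W) / frame rate (fps), using the FurMark runs above.
runs = {"0x MSAA": (686, 602), "4x MSAA": (464, 479), "8x MSAA": (247, 346)}

for name, (fps, watts) in runs.items():
    print(f"{name}: {watts / fps:.2f} J per frame")

# 0x MSAA: 0.88 J per frame
# 4x MSAA: 1.03 J per frame
# 8x MSAA: 1.40 J per frame

Each frame gets more expensive as MSAA goes up, but the fill-rate-limited parts of the chip can't issue frames fast enough to keep the shaders busy, so total power still falls.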

Note that GPU utilization is reported as 100% in each case, because every GPU cycle is doing something. This is why it's an internal bottleneck. There is nothing holding back performance here other than the GPU, it's just that different settings leverage different parts of the GPU.

Games and their settings do the same thing, though usually not so dramatically. Odyssey is a shader heavy title, making it well suited to loading GPUs with shader focused architectures. However, it's not normally completely shader limited on these architectures, rather it tends to be limited by the more modest pixel fill rate performance. So, settings that further load the shaders can further increase load, while settings that demand more fill rate tend to reduce load. Pure supersampling should do both, but there are also memory bandwidth constraints to worry about.
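
To put rough numbers on the pure supersampling case: if the SS multiplier scales each axis (an assumption about how the slider is applied - adjust if it's a total-pixel multiplier), the pixel count to shade and fill grows with its square. Using 2560x1440 as an example output resolution:

Code:
# Pixel-count arithmetic for supersampling, assuming the multiplier is per axis.
base_w, base_h = 2560, 1440  # example output resolution

for ss in (1.0, 1.25, 1.5, 1.75):
    w, h = int(base_w * ss), int(base_h * ss)
    ratio = (w * h) / (base_w * base_h)
    print(f"SS {ss:.2f}: renders at {w}x{h} ({ratio:.2f}x the pixels)")

# SS 1.00: renders at 2560x1440 (1.00x the pixels)
# SS 1.25: renders at 3200x1800 (1.56x the pixels)
# SS 1.50: renders at 3840x2160 (2.25x the pixels)
# SS 1.75: renders at 4480x2520 (3.06x the pixels)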

Not sure why CAS is enough to push rootsrat's card over the edge, but the point remains that it should throttle, either from power or temperature, before it gets to the point that anything has to shut off. Shutting off is not normal with any load. Even the most dramatic examples of 'power virus' loads are supposed to result in nothing worse than throttling.

I like the psu theory as well.

The PSU might explain the shutdown, but not the temperatures. The card should have difficulty reaching 100C at stock power limits if the cooler is functioning correctly, and should not shut off from temperature at all if its temperature limiter is working.

As far as I know, overlooking in nvidia is completely implemented by the chip itself, these days you’re only providing hints to the gpu unless you are customising the bios? Anyway the chip itself should be implementing its limits. Also from unfortunate laptop usage, when limits are hit, you should just see a severe permanent throttle, not a shutdown.

It's possible to override limits to varying degrees, but you cannot turn off the temperature limiter. An RTX 3080 is supposed to start throttling by 88C GPU edge temp, 110C hotspot temp, or 110C memory temp.
 

rootsrat

Volunteer Moderator
I no longer have an RTX 3080 on hand, so out of curiosity I tested FidelityFX CAS and various supersampling multipliers on my 4090. Running CAS is slightly warmer than not running it, and relative to my usual settings 1.5x SS is hottest, with or without CAS.

The sharpening filter is a shader-heavy post-process effect in a shader-heavy game on a GPU that is overweight on shader performance.

I'm sure someone has, but it's not normal and is indicative of a serious cooling problem. That model of 3080 has a power limit low enough (~370w... correction: 350w for the rev 1.0, 320w for the rev 2.0, at stock) that nothing like this should ever happen.

Probably because GA102 is normally fill rate limited in EDO and normal supersampling is running into a fill rate bottleneck before a shader one.

Do not confuse visual fidelity or performance with capacity to load hardware. Cyberpunk with RTX is considerably less demanding than Elite Dangerous: Odyssey on this architecture (and its successor). If you want to see something really neat, try running Path of Exile with global illumination enabled. It's the only title I've ever heard of that I was not able to get to run without power throttling at reasonable clocks, on any of my RTX 3080s.

Some examples, in demanding GPU limited areas of games I've been playing recently:

ZD7NF4r.jpg

hmm7mNG.jpg

FfwJHt1.jpg

HFRkgAc.jpg

While not directly comparable to Ampere (though Lovelace is quite similar in overall balance), Cyberpunk 2077 wasn't really a worst case load on that architecture either.



A non-faulty card cannot be overheated to the point of shutdown through software. Unless you've cross flashed the card to something with higher power limit firmware, or have manipulated the fan curve, there shouldn't be anything that can get the card that hot before being throttled back by the power limiter. Something is probably wrong with the cooler or the TIM between it and the GPU or GDDR6X.

Another possibility is that transient power demands (which are bad on GA102) are triggering OCP on your power supply.

What does HWiNFO say about power and temps while playing EDO?

Edit: updated for clarity and with some examples.

Interesting problem. Some random ideas:

Do you get an IQ improvement from AMD + supersampling rather than “normal”, or are you playing at 4K?

Could it be your case? The teenage trick of taking off the side panel and pointing a real fan at it might provide more info.

I like the psu theory as well.

As far as I know, overclocking on Nvidia is completely implemented by the chip itself these days - you're only providing hints to the GPU unless you are customising the BIOS. Anyway, the chip itself should be enforcing its limits. Also, from unfortunate laptop experience, when limits are hit you should just see a severe sustained throttle, not a shutdown. Power supply?
Thanks for the insights and the information!

So this is from a short trip between a megaship and a Coriolis station, with docking, going to the hangar and then back to the surface, then sitting there for a bit, facing the station entrance.
1674253958676.png


All settings are maxxed out, no FFX and no Supersampling.

Some ReShade shaders being processed too:
1674254020990.png


And some extra settings in the override file, courtesy of Morbad:

Code:
<?xml version="1.0" encoding="utf-8"?>
<GraphicsConfig>
  <HDRNode_Reference>
    <GlareCompensation>1.25</GlareCompensation>
  </HDRNode_Reference>
  <Planets>
    <Ultra>
      <TextureSize>8192</TextureSize>
      <AtmosphereSteps>7</AtmosphereSteps>
      <WorkPerFrame>512</WorkPerFrame>
    </Ultra>
  </Planets>
  <GalaxyBackground>
    <High>
      <TextureSize>4096</TextureSize>
    </High>
  </GalaxyBackground>
  <Bloom>
    <Ultra>
      <GlareScale>0.0225</GlareScale>
      <FilterRadius>1.0</FilterRadius>
      <FilterRadiusWide>3.5</FilterRadiusWide>
    </Ultra>
  </Bloom>
  <Envmap>
    <High>
      <TextureSize>1024</TextureSize>
      <NumMips>10</NumMips>
    </High>
  </Envmap>
  <GalaxyMap>
    <High>
      <HighResNebulasCount>3</HighResNebulasCount>
      <LowResNebulaDimensions>256</LowResNebulaDimensions>
      <HighResNebulaDimensions>1024</HighResNebulaDimensions>
      <LowResSamplesCount>64</LowResSamplesCount>
      <HighResSamplesCount>112</HighResSamplesCount>
      <MilkyWayInstancesCount>64000</MilkyWayInstancesCount>
      <MilkywayInstancesBrightness>1.0</MilkywayInstancesBrightness>
      <MilkywayInstancesSize>0.25</MilkywayInstancesSize>
      <StarInstanceCount>12000</StarInstanceCount>
    </High>
  </GalaxyMap>
  <Debris>
    <High>
      <DebrisLimit>2000</DebrisLimit>
    </High>
  </Debris>
  <Terrain>
    <UltraPlus>
      <BlendTargetsResolution>2048</BlendTargetsResolution>
      <WindVectorFieldResolution>2048</WindVectorFieldResolution>
    </UltraPlus>
  </Terrain>
  <Volumetrics>
    <Ultra>
      <DownscalingFactor>1.5</DownscalingFactor>
      <BlurSamples>3</BlurSamples>
    </Ultra>
  </Volumetrics>
  <GUIColour>
    <Default>
      <LocalisationName>Standard</LocalisationName>
      <MatrixRed>1, 0, 0</MatrixRed>
      <MatrixGreen>0, 1, 0</MatrixGreen>
      <MatrixBlue>0, 0, 1</MatrixBlue>
    </Default>
  </GUIColour>
</GraphicsConfig>
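
(Side note for anyone swapping overrides around: a few lines of Python will confirm the file still parses before you launch the game. A minimal sketch - the path is only an assumption about where the override file commonly lives on Windows installs, so point it at your own copy.)

Code:
import os
import xml.etree.ElementTree as ET

# Assumed default location of the override file on Windows -- adjust to your install.
path = os.path.expandvars(
    r"%LOCALAPPDATA%\Frontier Developments\Elite Dangerous"
    r"\Options\Graphics\GraphicsConfigurationOverride.xml"
)

root = ET.parse(path).getroot()  # raises ParseError if the XML is malformed

# Print every leaf setting so typos in values are easy to spot.
for elem in root.iter():
    if len(elem) == 0 and elem.text and elem.text.strip():
        print(f"{elem.tag}: {elem.text.strip()}")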

I have never had any issues with this card (hence my first thought was that there could be an issue with FidelityFX implementation). It sits at stock clocks and curves (factory OC'd) - I only increased Power Limit to 100% and locked the temps to max out at 80 Celsius.

I only overclocked it out of curiosity shortly after I bought it in April 2021, but it didn't prove to be a good OC card; I think the factory OC is the max it can achieve with the stock cooling and without tinkering with curves, maybe even a new BIOS. I wasn't really interested, as it was powerful enough to cope with anything I threw at it.

The only time it actually switched off my system was when I was benchmarking an OC in FurMark with some silly settings, but that was so long ago I don't remember what the settings were at the time. Other than that it brushed off anything I threw at it, never exceeding 80 degrees.

Note that I never actually took it apart. Maybe the thermal paste needs changing? Thanks for testing the FidelityFX on your side @Morbad , I appreciate the effort. Another option could be that the driver causes this spike?

EDIT -- I play at 2560 x 1440 @ 144 Hz
 
Last edited:
The quick test I did comparing CAS and SS levels:
Source: https://www.youtube.com/watch?v=YdparDVZ6yc


So this is from a short trip between a megaship and a Coriolis station, with docking, going to the hangar and then back to the surface, then sitting there for a bit, facing the station entrance.
View attachment 342347

That looks like HWMonitor, which won't show hotspot or GDDR6X junction temps. HWiNFO or GPU-Z are much more useful for this.

I have never had any issues with this card (hence my first thought was that there could be an issue with FidelityFX implementation). It sits at stock clocks and curves (factory OC'd) - I only increased Power Limit to 100% and locked the temps to max out at 80 Celsius.

I only overclocked it out of curiosity shortly after I bought it in April 2021, but it didn't prove to be a good OC card; I think the factory OC is the max it can achieve with the stock cooling and without tinkering with curves, maybe even a new BIOS. I wasn't really interested, as it was powerful enough to cope with anything I threw at it.

The only time it actually switched off my system was when I was benchmarking an OC in FurMark with some silly settings, but that was so long ago I don't remember what the settings were at the time. Other than that it brushed off anything I threw at it, never exceeding 80 degrees.

Note that I never actually took it apart. Maybe the thermal paste needs changing? Thanks for testing the FidelityFX on your side @Morbad , I appreciate the effort. Another option could be that the driver causes this spike?

With how variable software loads can be, you really need to go out and look for problems to be sure you won't be surprised by them later on. There is often little outward indication of whether a load is going to be demanding or not.

My personal GPU testing toolkit contains, at the very least:
  • Memtest Vulkan
  • Unigine Superposition
  • 3DMark
  • Path of Exile (almost nothing real-world hits shaders harder)
  • Quake II RTX
  • FurMark (mostly to test power and cooling limits)

Of those, only 3DMark costs money, but it's worth it as being able to loop Time Spy Extreme and Port Royal will reveal the overwhelming bulk of potential issues with a modern GPU. I also use any other games I have on hand, with Odyssey, Cyberpunk, and Mechwarrior 5 having a lot of diagnostic value.

Anyway, most RTX 3080s have issues with GDDR6X temps, either due to the cooler design (first-gen GDDR6X ran rather hot, but the coolers didn't always take this into account, mostly being slightly upscaled copies of previous-gen coolers), the thermal pads used, or both. I had to completely overhaul the TIM on my Gigabyte Master RTX 3080 because of the crap pads they used, which bled silicone oil and shriveled up until they were barely making contact with the memory - and that one was still better than the FE I had (which needed the whole cooler replaced to get usable memory temps). Even so, these parts, before any modifications, never shut down under any load; they just ramped up fan speed and throttled back heavily.

Out and out shutdowns are either a serious temp issue where the card can't throttle fast enough--usually only seen if there is a void in the GPU TIM or an improperly mounted heatsink--or power supply. The 3080/3090s can produce very high transient loads on the +12v rail, which can trip OCP on many power supplies that would otherwise be able to handle sustained load from the card without issue.

I recommend running some tests with HWiNFO open and keeping an eye on GPU-to-hotspot deltas as well as GDDR6X temps.
 

rootsrat

Volunteer Moderator
I'm an idiot.

I only realised when I posted my last reply. The overrides file contained the values from your file @Morbad , which were too high for my system - I was doing some more tinkering and forgot to change it back to my MUCH less intensive file.

This is with NO overrides at all and FFX CAS set to 1.5; it includes the mem junction and hotspot temps. The scenario is just idling at a station on a landing pad.

1674261160030.png


It's still too high, but the system does not switch off now. I may look into a thermal pad swap... I've not really looked at that since I bought this card. Nevertheless, thanks for your help and advice, all.

The problem was, as often in my case - between the keyboard and the chair ;)
 
Shouldn't have switched off even with the higher settings, but they could have been responsible for worse transients and more load across the PCI-E bus. I'd test system memory for stability as well.

As for temps, the GPU edge temp is fine, but a 20C+ delta between the GPU and hotspot, at that sort of load, is not. Memory, while not at the throttle temp yet, is still significantly higher than you want to be running for protracted periods.

Kit Guru's review of the Eagle OC (which uses the same board and cooler as your card) has a teardown, and those memory thermal pads are the same problematic ones that were on my Master OC. I'd strip it down, then clean off and replace all the TIM...GPU, VRM, and memory.

Ideally, you'd want some good thermal putty (which eliminates the need to carefully match pad thickness) for everything other than the GPU, and a quality, high-viscosity paste (most of the more liquid pastes don't last as long) for the GPU itself. If you can't find putty, pads will certainly work, but keep in mind that if they are improperly sized they will either not make contact, or will prevent other components from making proper contact. Just using the thickness already in place is questionable because most high-performance pads are not going to be as compressible as the junk Gigabyte is using.
 

rootsrat

Volunteer Moderator
Shouldn't have switched off even with the higher settings, but they could have been responsible for worse transients and more load across the PCI-E bus. I'd test system memory for stability as well.

As for temps, the GPU edge temp is fine, but a 20C+ delta between the GPU and hotspot, at that sort of load, is not. Memory, while not at the throttle temp yet, is still significantly higher than you want to be running for protracted periods.

Kit Guru's review of the Eagle OC (which uses the same board and cooler as your card) has a teardown, and those memory thermal pads are the same problematic ones that were on my Master OC. I'd strip it down, then clean off and replace all the TIM...GPU, VRM, and memory.

Ideally, you'd want some good thermal putty (which eliminates the need to carefully match pad thickness) for everything other than the GPU, and a quality, high-viscosity paste (most of the more liquid pastes don't last as long) for the GPU itself. If you can't find putty, pads will certainly work, but keep in mind that if they are improperly sized they will either not make contact, or will prevent other components from making proper contact. Just using the thickness already in place is questionable because most high-performance pads are not going to be as compressible as the junk Gigabyte is using.
Thanks for all the tips, I appreciate it. I'll probably get the putty - I'm aware of the potential issues with pads being too thick.

At least I finally have a reason to play with this card some more!
 
to me, what I see in the game and then this tells me the exact same story as back in 1982.
someone created a visual that no existing hardware could manage. and no one ofc has any desire to create such a card because, why if we aren't NASA ????
And this IS what on foot action looks like to me.
Like the game is trying to animate 256 sprites when it only has the hardware capability to animate 16 sprites.
(NOT an Exaggeration)
and ofc in the meantime, cards overheat, melt, etc... not a short list of damage and serious expense this decision has caused.
and number ONE is soooo many people do have the recommended hardware,

I bet not even fdev has the hardware required to run it efficiently, which would explain all the glitches we are currently experiencing in almost every aspect of the game.
killing itself by trying to display things we can't see anyway.

By efficiently, I mean in the way electronics tend to be designed.
a power supply for a radio or television transmitter pumping out 50k watts needs to be capable of 200k watts in order to not start fires and melt things. also allowing the power, or rather the transformers and circuitry, to operate at below 50% of max with no effort, no strain. very little maintenance costs.
(oversimplification)

yet as a comparison, this is a 25 watt card running at 250% above capability and it can't do it. never will, flames and slag are a given though.

or try powering the Empire State Building with a usb battery.


Again, thanks for the card data.
Nice to see some proof of the gargantuan banana peel that made it through design.


I don't have an AMD, but I have lost 2 video cards so far, thanks to Odyssey.
so it took that loss for me to finally actually LIMIT my video card so it CAN'T melt. THIS should not be up to the user to decide, this should be IN THE GAME, to detect what max settings to use at run-time. NO user should have to waste time tweaking the visuals to change the temperature of the video card.
I have a used transmission I would like to install in your Ferrari.
 
Last edited by a moderator:
impressive read, thank you for this..



anyone remember the Atari GTIA chip?



this was entertaining and hilarious to read and see on a phone.



but the info is extreme and reminds me of all the other home computer manufacturers trying to keep up with the Atari designs that tbh cheated.

the use of extra on-board graphics controller chips put them ahead of all other gamer PCs.



but the reminiscent bit is the 64 sprites and what was shown here so far.

that no computer except the atari could create the on screen action as well as Atari

Or, you could buy an arcade machine for $150k, that only plays that game with 128 sprites



which all points to idiots in the design department creating visuals that NO hardware other than seriously expensive custom cards could manage, hence $150k game machines like Ms. Pac-Man.

now in a box as small as any video card and can play any game, thanks to CUSTOM chips.



to me, what I see in the game and then this tells me the exact same story as back in 1982.

someone created a visual that no existing hardware could manage. and no one ofc has any desire to create such a card because, why if we aren't NASA ????

And this IS what on foot action looks like to me.

Like the game is trying to animate 256 sprites when it only has the hardware capability to animate 16 sprites.

(NOT an Exaggeration)

and ofc in the meantime, cards overheat, melt, etc... not a short list of damage and serious expense this decision has caused.

and number ONE is soooo many people do have the recommended hardware,



I bet not even fdev has the hardware required to run it efficiently, which would explain all the glitches we are currently experiencing in almost every aspect of the game.

killing itself by trying to display things we can't see anyway.



By efficiently, I mean in the way electronics tend to be designed.

a power supply for a radio or television transmitter pumping out 50k watts needs to be capable of 200k watts in order to not start fires and melt things. also allowing the power, or rather the transformers and circuitry, to operate at below 50% of max with no effort, no strain. very little maintenance costs.

(oversimplification)



yet as a comparison, this is a 25 watt card running at 250% above capability and it can't do it. never will, flames and slag are a given though.



or try powering the Empire State Building with a usb battery.





Again, thanks for the card data.

Nice to see some proof of the gargantuan banana peel that made it through design.

on another analogy note



let's build a boat that can go faster by touching all the ocean surface at once. like TARDIS, it can and will be everywhere at once and nowhere at the same time, but can get anywhere in less than an instant because it is always everywhere.



point = design needs to be in touch with reality, always, or it's for the comics.



I don't have an AMD, but I have lost 2 video cards so far, thanks to Odyssey.

so it took that loss for me to finally actually LIMIT my video card so it CAN'T melt. THIS should not be up to the user to decide, this should be IN THE GAME, to detect what max settings to use at run-time. NO user should have to waste time tweaking the visuals to change the temperature of the video card.

I have a used transmission I would like to install in your Ferrari.
I have rarely seen so much nonsense in just one post.
 
I don't have an AMD, but I have lost 2 video cards so far, thanks to Odyssey.
so it took that loss for me to finally actually LIMIT my video card so it CAN'T melt. THIS should not be up to the user to decide, this should be IN THE GAME, to detect what max settings to use at run-time. NO user should have to waste time tweaking the visuals to change the temperature of the video card.

Essentially every consumer GPU released since the Radeon HD 6000 series and the GTX 500 series (both from late 2010, so more than a dozen years at this point) has a power limiter. Cooling and power delivery are built around these limits, usually with significant margins worked in. If there is any application that can damage the card just by running on it at uncapped frame rates, without going out of one's way to bypass these limiters, that card is defective.

Elite Dangerous can be quite demanding on hardware, often far more so than many people think, but it's not the most demanding real-world application out there, on much of anything. If you've got cards that died while running Odyssey, those cards either had defects you didn't notice beforehand, or were exposed to conditions beyond their specifications (bad power, extreme ambient temperatures, a complete and utter lack of basic maintenance, overclocking/overvolting, etc).
 
Shouldn't have switched off even with the higher settings, but they could have been responsible for worse transients and more load across the PCI-E bus. I'd test system memory for stability as well.

As for temps, the GPU edge temp is fine, but a 20C+ delta between the GPU and hotspot, at that sort of load, is not. Memory, while not at the throttle temp yet, is still significantly higher than you want to be running for protracted periods.

Kit Guru's review of the Eagle OC (which uses the same board and cooler as your card) has a teardown, and those memory thermal pads are the same problematic ones that were on my Master OC. I'd strip it down, then clean off and replace all the TIM...GPU, VRM, and memory.

Ideally, you'd want some good thermal putty (which eliminates the need to carefully match pad thickness) for everything other than the GPU, and a quality, high-viscosity paste (most of the more liquid pastes don't last as long) for the GPU itself. If you can't find putty, pads will certainly work, but keep in mind that if they are improperly sized they will either not make contact, or will prevent other components from making proper contact. Just using the thickness already in place is questionable because most high-performance pads are not going to be as compressible as the junk Gigabyte is using.
Signed in just to thank you for clueing me in on re-pasting my GPU. My 6700 XT was showing a 20C difference between hotspot and core. While it wasn't hitting crazy temps, after the re-paste the difference is only about 8-10C at most vs 18-25C before.
 

rootsrat

Volunteer Moderator
OK, so an update on this, in case anyone bothers :D

I have replaced the thermal pads on my RTX card. Used Kritical 20 W/mK pads and these are the results:

1675640718594.png


Temp1 is GPU, 2 is mem junction and 3 is hotspot.

Top stats are for Elite running on all maxxed out settings, with some extra modded config above ultra, with unlocked FPS and also a 1080p Twitch stream running.

Previously that would result in 108 Celsius on the hotspot.

Bottom are the same as above and additionally: slight OC on GPU and VRAM and with Fidelity FX CAS Supersampling on 1.5 (sharpening reduced to 0).

Previously it would result in the whole PC shutting down suddenly.
 