Hardware & Technical: Latest Nvidia Ampere Rumours

I got lucky(?) on the Newegg shuffle and snagged an Asus TUF O24G.

One of the better AIB models.

I get wanting to do the work yourself for a timely fix, but this is a defect that should be replaced under warranty.

It's a defect that, if I tried to get it replaced under warranty, would mean a potentially indefinite turnaround time just to get another card with the same defect. They have newer revisions of the card, but I don't think they changed the thermal pads, and unless I want to waste my time trying to wring a refund out of them (I don't; 3080 availability is still poor, isn't going to improve for the unlocked parts, and I know the underlying hardware is fine), I'm better off keeping the card and fixing things myself.

This is the unfortunate reality with a large portion of hardware: obvious design flaws or defects that manufacturers ignore because most consumers won't notice them, or won't find it worthwhile to seek service for.
 
It's a defect that, if I tried to get it replaced under warranty, would mean a potentially indefinite turnaround time just to get another card with the same defect.
They should in theory have some buffer cards for replacements. However, there are scams; for example, I heard about a case in Poland where the vendor said "it cannot be repaired, we will give you your money back", meaning the money paid before this poopstorm happened. A total scam, because that card is worth 3x more now. I don't know how that ended; the person said he would sue.
 
One of the better AIB models.



It's a defect that, if I tried to get it replaced under warranty, would mean a potentially indefinite turnaround time just to get another card with the same defect. They have newer revisions of the card, but I don't think they changed the thermal pads, and unless I want to waste my time trying to wring a refund out of them (I don't; 3080 availability is still poor, isn't going to improve for the unlocked parts, and I know the underlying hardware is fine), I'm better off keeping the card and fixing things myself.

This is the unfortunate reality with a large portion of hardware: obvious design flaws or defects that manufacturers ignore because most consumers won't notice them, or won't find it worthwhile to seek service for.

Plus I reckon the self-upgrade is vastly superior to the standard solutions anyway, especially if you are mining with the card.
Over here availability is improving a lot, but pricing is barely below its peak. 3090s still go for close to $4k, and 3080s are barely cheaper than 3080 Tis, still costing $2.5-3k.

As a side note, I may postpone my upgrade plans for the rest (CPU, mobo, and RAM), as the only application where those are a bottleneck is MS Flight Sim, and an upcoming update should essentially cut that load in half. It's funny that I've been reading for years now that 4C/8T CPUs are obsolete, yet there is no real gaming application (competitive FPS aside) where they cause tangible bottlenecks.
 

That is exactly what happened to the thermal pads on my Gigabyte Aorus RTX 3080 Master.

I've been busy with other stuff, but I've stripped the card and cleaned off a stubborn layer of flux residue (Gigabyte is cheaping out on their PCB wash...probably used the same bucket for a hundred different boards between solvent changes) and am about to underfill the GPU and GDDR6X before replacing the stock pads with some Arctic ones I had lying around and using liquid metal on the GPU. Leaving off the fan/RGB shroud entirely and mounting three of my own 92mm fans to the fin stack of the stock cooler.
 
Ridiculous considering the price; they should send people with feathers to wave and circulate air in your room at those prices.
 
Ridiculous considering the price; they should send people with feathers to wave and circulate air in your room at those prices.

Yeah, this generation of parts has more examples of cutting corners than any I can recall. NVIDIA and their AIBs really shouldn't be nickel-and-diming people to save a dollar on a cluster of SMD capacitors, substandard fan control/monitoring controller implementations, or thermal interface materials. The difference between what they've done (which is no doubt significantly inflating failure rates) and the best that could reasonably be done is maybe twenty bucks on the BoM of a $1000+ card.
 

That is exactly what happened to the thermal pads on my Gigabyte Aorus RTX 3080 Master.

I've been busy with other stuff, but I've stripped the card and cleaned off a stubborn layer of flux residue (Gigabyte is cheaping out on their PCB wash...probably used the same bucket for a hundred different boards between solvent changes) and am about to underfill the GPU and GDDR6X before replacing the stock pads with some Arctic ones I had lying around and using liquid metal on the GPU. Leaving off the fan/RGB shroud entirely and mounting three of my own 92mm fans to the fin stack of the stock cooler.

Are there any signs that would show my card is also subject to these issues? I really don't want to dismantle it; I'd actually need to get a set of tools first.
 
Are there any signs that would show my card is also subject to these issues? I really don't want to dismantle it; I'd actually need to get a set of tools first.

I'd expect most MSI GDDR6X-equipped parts to be using similar VRAM pads.

Signs of a problem would be high GDDR6X or VRM temperatures, especially if they've been increasing over time.
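Temperature creep like that is easy to miss in the moment; one way to catch it is to log temps over a session and compare the start against the end. A minimal sketch of that idea (my own hypothetical helper, not a real tool; also note the GDDR6X junction reading itself needs something like HWiNFO64 or GPU-Z, since `nvidia-smi` doesn't expose it on these cards):

```python
def temps_creeping(temps, window=10, delta=5.0):
    """Return True if the average of the last `window` samples exceeds
    the average of the first `window` samples by more than `delta` C."""
    if len(temps) < 2 * window:
        return False
    head = sum(temps[:window]) / window
    tail = sum(temps[-window:]) / window
    return tail - head > delta

# Example: a card whose temps drift upward over a session.
log = [62.0] * 10 + [64.0] * 20 + [70.0] * 10
print(temps_creeping(log))  # -> True
```

Steady temps under a repeatable load are fine; a reading that climbs session over session under the same load is the warning sign.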
 
Convinced my brother to watercool his 3080FE and I just finished setting it up.

With good thermal pads (which were a major pain to install, as I overestimated the thickness required and had to make a few modifications to increase mounting pressure to obscene levels to get these harder pads to adequately compress), GDDR6X temps are a solid ~50C lower than before.

Prior to the new cooler and TIM, mining on this FE was essentially impossible, as the memory was near the 110C throttle point even with -216MHz on the memory clock. With the new cooling the GDDR6X is only 70C at +1406MHz, which gives me about 105MH/s in ethash at 238W with the GPU at 1680MHz and 762mV. I think I still have a bit of headroom to go on this card.

With a little more work I think I can get ~450kH per watt out of it at ~110MH/s.
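The efficiency figures quoted here are just hashrate over board power; a quick sanity check of the numbers (my arithmetic, nothing card-specific):

```python
def efficiency_khw(mh_per_s, watts):
    """Mining efficiency in kH/s per watt: (MH/s * 1000) / W."""
    return mh_per_s * 1000 / watts

# Current tune: 105 MH/s at 238 W.
print(round(efficiency_khw(105, 238)))  # -> 441 (kH/W)
# Hitting ~450 kH/W at ~110 MH/s allows a board power of about:
print(round(110 * 1000 / 450))          # -> 244 (W)
```

So the target amounts to finding ~5 MH/s more without power draw rising past the mid-240s.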

I also had the opportunity to play with an EVGA 3080 FTW3 LHR. Fairly impressed with it as a gaming card, as its cooler is quite good and it has some of the coolest GDDR6X temps of any stock Ampere part I've seen. It clocks well too and is competitive with the water-cooled FE, mostly thanks to its 400W power limit. It doesn't have the EVGA fan controller bug (the LHR firmware is too new) and the VRM shouldn't have the solder issues that caused some earlier boards to fail under certain loads. Not much of an ETH miner, being an LHR card, of course.

Still procrastinating on getting my Gigabyte 3080 up and running, but now that the weather is cooling off I have some more incentive to.
 
They should in theory have some buffer cards for replacement. However, there are scams - for example I heard about a case in Poland where the vendor said "it cannot be repaired, we will give you your money back". Meaning, money paid before this poopstorm happened. So a total scam, because that card is worth 3x more now. IDK how that ended, the person said he will sue.

Well, a refund is exactly that: the money you paid for an item, not current market value. That's not a scam; that's just a vendor complying with the terms of a refund. You think they should give him three times what he paid when getting a refund? That's a scam!
 
Well, a refund is exactly that: the money you paid for an item, not current market value. That's not a scam; that's just a vendor complying with the terms of a refund. You think they should give him three times what he paid when getting a refund? That's a scam!
Yeah, when you get a refund you get the exact money you paid back. Pretty bizarre to expect more. I doubt he would have accepted less if prices had dropped by now...
 
Convinced my brother to watercool his 3080FE and I just finished setting it up.

With good thermal pads (which were a major pain to install, as I overestimated the thickness required and had to make a few modifications to increase mounting pressure to obscene levels to get these harder pads to adequately compress), GDDR6X temps are a solid ~50C lower than before.

Prior to the new cooler and TIM, mining on this FE was essentially impossible, as the memory was near the 110C throttle point even with -216MHz on the memory clock. With the new cooling the GDDR6X is only 70C at +1406MHz, which gives me about 105MH/s in ethash at 238W with the GPU at 1680MHz and 762mV. I think I still have a bit of headroom to go on this card.

With a little more work I think I can get ~450kH per watt out of it at ~110MH/s.

I also had the opportunity to play with an EVGA 3080 FTW3 LHR. Fairly impressed with it as a gaming card, as its cooler is quite good and it has some of the coolest GDDR6X temps of any stock Ampere part I've seen. It clocks well too and is competitive with the water-cooled FE, mostly thanks to its 400W power limit. It doesn't have the EVGA fan controller bug (the LHR firmware is too new) and the VRM shouldn't have the solder issues that caused some earlier boards to fail under certain loads. Not much of an ETH miner, being an LHR card, of course.

Still procrastinating on getting my Gigabyte 3080 up and running, but now that the weather is cooling off I have some more incentive to.

Mine performs very differently; the memory cannot sustain more than about 10.2GHz, or 93-ish MH/s, and for normal mining I set it to just 9550MHz, or ~86MH/s @ ~199W (430 kH/W). That keeps memory junction temps at 96C or below; if I notice it going above that, I shut it down.
The gaming profile is different, however, with the GPU easily sustaining 2100MHz. It may even go higher, but I haven't tried since I'm guessing I'm at diminishing returns. This profile is only used for the VR sim, and even when it pulls 380W the GPU never goes above 59C, and at that temperature the fans stay below 90% too.
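To put those memory clocks in perspective, the reported MHz can be converted into rough bandwidth. A back-of-the-envelope sketch; the assumptions are mine (Afterburner-style reporting where the listed clock is half the effective GDDR6X data rate, and the 3080's 320-bit bus):

```python
def gddr6x_bandwidth_gbs(mem_clock_mhz, bus_bits=320):
    """Approximate memory bandwidth in GB/s."""
    data_rate_gbps = mem_clock_mhz * 2 / 1000  # e.g. 9502 MHz -> ~19 Gbps/pin
    return data_rate_gbps * bus_bits / 8       # total across the bus

print(round(gddr6x_bandwidth_gbs(9502)))   # stock 3080: -> 760 GB/s
print(round(gddr6x_bandwidth_gbs(10200)))  # at the 10.2 GHz cap: -> 816 GB/s
```

Ethash is almost purely bandwidth-bound, which is why hashrate tracks the memory clock so closely while core clock barely matters.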
 
Mine performs very differently, the memory cannot sustain more than about 10.2Ghz or 93ish Mh/s, and in normal mining I set it to 9550Mhz only, or ~86 Mh/s @ ~199W (430 kH/W). That allows for memory junction temps at 96C or below, if I notice it goes above I am rather shutting it down.
Gaming profile however is different with the GPU easily sustainining 2100 Mhz - it may even go higer but I haven't tried it is I guess I'm at diminishing returns. This profile is only used for VR sim, and even when it picks up 380W the GPU never goes above 59C, and at temperature the fans are below 90% too.

All the 3080s I've been playing with are too power limited for me to try for much more than 2010-2040MHz on the core in the gaming profiles I set up. I haven't flashed any higher-power-limit firmware from other cards or done any shunt mods, as the cooling, even on the water-cooled part (which only has a 240 rad on it), would rapidly become a limit regardless.

This FE is capped at 370W with the slider at its +15% maximum and still hits that limit in Cyberpunk (what I'm using as a baseline, as it's near, but not quite at, the top of gaming power consumption) around 2025MHz with 950-975mV.

The EVGA FTW3 is much the same...it's got a higher 400W power limit, but also runs a bit warmer due to being air cooled, which mostly negates any potential clock speed advantage. More than 1 volt is counterproductive, and that limits the practical core clock to around 2040MHz.

I've spent the least time tuning for gaming on my own Gigabyte 3080, but since I'm putting near-silent fans on it, I don't think I'll bother with higher-power firmware on it either, leaving it limited to 375W.

All of the samples I have access to are pretty solid memory overclockers. This particular FE continues to impress; I'm up to +1730MHz stable on the memory so far. It's pulling ahead of the other cards and my overclocked 6800 XT at 4K, largely because of how fast I can run the GDDR6X. Mining-wise I'm up to 106MH/s at 225W on it (470+ kH/W), with 68-70C on the memory junction temp.
 
All the 3080s I've been playing with are too power limited for me to try for much more than 2010-2040MHz on the core in the gaming profiles I set up. I haven't flashed any higher-power-limit firmware from other cards or done any shunt mods, as the cooling, even on the water-cooled part (which only has a 240 rad on it), would rapidly become a limit regardless.

This FE is capped at 370W with the slider at its +15% maximum and still hits that limit in Cyberpunk (what I'm using as a baseline, as it's near, but not quite at, the top of gaming power consumption) around 2025MHz with 950-975mV.

The EVGA FTW3 is much the same...it's got a higher 400W power limit, but also runs a bit warmer due to being air cooled, which mostly negates any potential clock speed advantage. More than 1 volt is counterproductive, and that limits the practical core clock to around 2040MHz.

I've spent the least time tuning for gaming on my own Gigabyte 3080, but since I'm putting near-silent fans on it, I don't think I'll bother with higher-power firmware on it either, leaving it limited to 375W.

All of the samples I have access to are pretty solid memory overclockers. This particular FE continues to impress; I'm up to +1730MHz stable on the memory so far. It's pulling ahead of the other cards and my overclocked 6800 XT at 4K, largely because of how fast I can run the GDDR6X. Mining-wise I'm up to 106MH/s at 225W on it (470+ kH/W), with 68-70C on the memory junction temp.

I guess the Suprim X BIOS is really good in this regard; it can sustain 2100MHz regardless of load (I might try adding another 50-100 out of curiosity), but even at 'unlimited' power my memory just cannot sustain higher clocks; at 10500 it crashes fairly quickly to a black screen.
 
I guess the Suprim X BIOS is really good in this regard; it can sustain 2100MHz regardless of load (I might try adding another 50-100 out of curiosity), but even at 'unlimited' power my memory just cannot sustain higher clocks; at 10500 it crashes fairly quickly to a black screen.

The Suprim X should allow 430W with the power limit slider at maximum. With any sensible F/V curve most games won't throttle at that sort of limit, but a few should...Path of Exile with global illumination enabled, for example.
 
Had to sort out a really bizarre stability issue with my wife's new workstation...memory passed hundreds of loops of memtest86, days of testmem5, and craptons of OCCT without issue, but the system became unusable when I moved my 3080 back into it and installed drivers. Thought I had broken the card at first, but everything ran fine at firmware defaults. It wasn't ReBAR, FCLK, or LCLK DPM either, but the actual MCLK/UCLK (memory/memory controller clock) when used in conjunction with the card.

The IOD on this 3950X has to be a bit weak...it can't drive a memory clock much past 1733MHz (3466MT/s) with 128GiB of RAM installed alongside a PCI-E 4.0 x16 GPU. The memory itself was stable, as was the memory controller when the PCI-E controller was mostly idle, but load them heavily together and something gives. Backed off to DDR4-3466, which seems stable, but it will probably be another week or two before I'm confident enough in it for the machine to be allowed to do any real work.

Also corrupted the snot out of my Windows install while diagnosing things, so it's got to be wiped when I'm done testing. I'll have to find some good Linux GPU tests so I can screen for this sort of thing in a more disposable environment. Miners won't work for this, as mining involves almost zero PCI-E traffic.

Anyway, this Aorus RTX 3080 Master rev. 1 is a pretty average board, but the heatsink on it is exceptional. I've got a trio of Arctic 92mm fans screwed together and attached to an I/O cover adapter to hold them over the bare heatsink of the GPU. I haven't put a shroud on it or anything, so it's just three off-center fans suspended about half a slot above the bare heatsink. I was planning on this being a temporary solution until I got some higher-end fans and attached them directly to the card, but I'm so impressed with the temperatures that I'll probably just leave it. I've applied underfill to the GPU and GDDR6X and completely replaced all the TIMs: liquid metal on the GPU itself, higher-end Gelid pads on the GDDR6X, and a mix of mid-range pads or thermal paste on the VRM and between the reverse of the PCB and the backplate. Some of the pads I used were a bit too thick, but I got around this with some brute-force mounting (counter)pressure. GDDR6X temps rival water-cooled cards and the GPU is not far behind...with the GPU fans running at a relatively quiet ~1600rpm and the case fans (Arctic P12s) at a very tolerable 1200-1300rpm.

Memory on this 3080 maxes out at +650 (almost 1100 less than my brother's now water-cooled FE) and the official firmware has a power limit of a measly 350W, but that's still enough for 1950-2010MHz in games, and isn't any barrier to mining, of course. I might flash it and upgrade the cooling further at some point, but I don't think my wife games enough on her workstation for the extra 2-3% in performance I could squeeze out to justify the extra work and another ~100W in power consumption. Mining-wise, it does ~96MH/s at about 224W...so a fair bit less efficient than the water-cooled FE, but not bad. GDDR6X tops out at 76C while mining, with the GPU at 50C/60C hotspot. During gaming, more GPU-dependent GPGPU loads, or anything else that pushes up against the 350W power limit, the GPU gets about 10-12C hotter while maintaining that same 10C differential between the edge and hotspot temps.
 
Had to sort out a really bizarre stability issue with my wife's new workstation...memory passed hundreds of loops of memtest86, days of testmem5, and craptons of OCCT without issue, but the system became unusable when I moved my 3080 back into it and installed drivers. Thought I had broken the card at first, but everything ran fine at firmware defaults. It wasn't ReBAR, FCLK, or LCLK DPM either, but the actual MCLK/UCLK (memory/memory controller clock) when used in conjunction with the card.

The IOD on this 3950X has to be a bit weak...it can't drive a memory clock much past 1733MHz (3466MT/s) with 128GiB of RAM installed alongside a PCI-E 4.0 x16 GPU. The memory itself was stable, as was the memory controller when the PCI-E controller was mostly idle, but load them heavily together and something gives. Backed off to DDR4-3466, which seems stable, but it will probably be another week or two before I'm confident enough in it for the machine to be allowed to do any real work.

Also corrupted the snot out of my Windows install while diagnosing things, so it's got to be wiped when I'm done testing. I'll have to find some good Linux GPU tests so I can screen for this sort of thing in a more disposable environment. Miners won't work for this, as mining involves almost zero PCI-E traffic.

Anyway, this Aorus RTX 3080 Master rev. 1 is a pretty average board, but the heatsink on it is exceptional. I've got a trio of Arctic 92mm fans screwed together and attached to an I/O cover adapter to hold them over the bare heatsink of the GPU. I haven't put a shroud on it or anything, so it's just three off-center fans suspended about half a slot above the bare heatsink. I was planning on this being a temporary solution until I got some higher-end fans and attached them directly to the card, but I'm so impressed with the temperatures that I'll probably just leave it. I've applied underfill to the GPU and GDDR6X and completely replaced all the TIMs: liquid metal on the GPU itself, higher-end Gelid pads on the GDDR6X, and a mix of mid-range pads or thermal paste on the VRM and between the reverse of the PCB and the backplate. Some of the pads I used were a bit too thick, but I got around this with some brute-force mounting (counter)pressure. GDDR6X temps rival water-cooled cards and the GPU is not far behind...with the GPU fans running at a relatively quiet ~1600rpm and the case fans (Arctic P12s) at a very tolerable 1200-1300rpm.

Memory on this 3080 maxes out at +650 (almost 1100 less than my brother's now water-cooled FE) and the official firmware has a power limit of a measly 350W, but that's still enough for 1950-2010MHz in games, and isn't any barrier to mining, of course. I might flash it and upgrade the cooling further at some point, but I don't think my wife games enough on her workstation for the extra 2-3% in performance I could squeeze out to justify the extra work and another ~100W in power consumption. Mining-wise, it does ~96MH/s at about 224W...so a fair bit less efficient than the water-cooled FE, but not bad. GDDR6X tops out at 76C while mining, with the GPU at 50C/60C hotspot. During gaming, more GPU-dependent GPGPU loads, or anything else that pushes up against the 350W power limit, the GPU gets about 10-12C hotter while maintaining that same 10C differential between the edge and hotspot temps.

What do you mean by "over the bare heatsink"? Does it have a heatsink on the backplate, or did you put the fans on the side of the card?
BTW, as an easy upgrade, would it make sense for me to remove the backplate, put thermal pads where the VRAM is, then put the backplate back on?

Otherwise the max OC on this memory is very similar to mine. A bit disappointing, but it is what it is.
 