Hardware & Technical: Latest Nvidia Ampere Rumours

I can honestly say I have not seen any of the latest-gen cards from either team in stock at any of the vendors I normally buy from since they launched.
 
This internal forum post from Overclockers (a UK supplier) is informative:

3090

Asus 3090 TUF: 39 back orders left - 16 due in (28 shipped so far)
Asus 3090 TUF OC: 3 back orders left - 20 due in (97 shipped so far)
Asus 3090 Strix: 64 back orders left (10 shipped so far)
Asus 3090 Strix OC: 243 back orders left - 50 due in (175 shipped so far)

Gigabyte 3090 Eagle - 1 back order (5 shipped so far)
Gigabyte 3090 Eagle OC - 1 back order (22 shipped so far)
Gigabyte 3090 Gaming OC - 6 back orders - 1 overdue (84 shipped so far)
Gigabyte 3090 Vision OC - 6 back orders - 2 overdue (37 shipped so far)
Gigabyte 3090 Master - 25 back orders - 9 overdue (32 shipped so far)
Gigabyte 3090 Extreme - 48 back orders - 9 overdue (21 shipped so far)

Inno3D 3090 Gaming X3 - 2 back orders (10 shipped so far)

PNY 3090 - 6 back orders (50 shipped so far)

3080

Asus 3080 TUF - 1379 back orders (152 shipped so far)
Asus 3080 TUF OC - 625 back orders (513 shipped so far)
Asus 3080 Strix - 312 back orders - 8 overdue (7 shipped so far)
Asus 3080 Strix OC - 860 back orders (131 shipped so far)

EVGA 3080 XC3 Black - 158 back orders (0 shipped so far)
EVGA 3080 XC3 Gaming - 14 back orders (0 shipped so far)
EVGA 3080 XC3 Ultra Gaming - 14 back orders (21 shipped so far)
EVGA 3080 FTW3 Gaming - 6 back orders (0 shipped so far)
EVGA 3080 FTW3 Ultra Gaming - 1 Back order (35 shipped so far)

Gigabyte 3080 Eagle OC - 618 back orders (186 shipped so far)
Gigabyte 3080 Gaming OC - 177 back orders - 10 overdue (236 shipped so far)
Gigabyte 3080 Master - 36 back orders - 8 overdue (137 shipped so far)
Gigabyte 3080 Extreme - 80 back orders - 4 overdue (57 shipped so far)

Inno3D 3080 Twin X2 OC - 47 back orders (41 shipped so far)
Inno3D 3080 iChill X3 - 21 back orders (33 shipped so far)
Inno3D 3080 iChill X4 - 42 back orders (15 shipped so far)

MSI 3080 Ventus 3X - 577 back orders (157 shipped so far)
MSI 3080 Gaming Trio - 761 back orders (307 shipped so far)

Palit 3080 Gaming Pro - 216 back orders (187 shipped so far)
Palit 3080 Gaming Pro OC - 44 back orders (66 shipped so far)

PNY 3080 - 12 back orders (13 shipped so far)

Zotac 3080 Trinity - 65 back orders (38 shipped so far)
Zotac 3080 Trinity OC - 66 back orders (175 shipped so far)
Zotac 3080 AMP HOLO - 2 back orders (106 shipped so far)
Zotac 3080 AMP HOLO EXTREME - 9 back orders (0 shipped so far)

3070

Asus 3070 Dual - 231 back orders left - 108 due in (121 shipped so far)
Asus 3070 Strix - 52 back orders (0 shipped so far)
Asus 3070 Strix OC - 1 back order - 40 due in (135 shipped so far)

KFA2 3070 - 22 back orders (24 shipped so far)

MSI 3070 Ventus 2X - 90 back orders (128 shipped so far)

Palit 3070 Gaming Pro OC - 23 back orders - 22 due in (48 shipped so far)
Palit 3070 Gamerock - 15 back orders - 7 due in (15 shipped so far)
Palit 3070 Gamerock OC - 2 back orders - 7 due in (16 shipped so far)
 
OK... so one of the sites has some cards:

Sapphire Radeon RX 6800 XT NITRO+ OC 16GB GDDR6 PCI-Express Graphics Card @ £859.99
PowerColor Radeon RX 6800 XT Red Devil 16GB PCI-Express Graphics Card @ £949.99
Gigabyte GeForce RTX 3070 Vision OC 8GB GDDR6 PCI-Express Graphics Card @ £679.99
MSI GeForce RTX 3070 Gaming X Trio 8GB GDDR6 PCI-Express Graphics Card @ £679.99
MSI GeForce RTX 3070 SUPRIM 8GB GDDR6 PCI-Express Graphics Card @ £698.99
MSI GeForce RTX 3060 Ti Gaming X Trio 8GB GDDR6 PCI-Express Graphics Card @ £559.99

and a few other makes/models.

Itching to buy the Sapphire... but at £300 over the RRP of a reference card, I am probably going to sit tight for the moment.
 

I guess you'd be interested in how my card performs so far. I did settle on a gaming profile that runs stably (RDR2 or Flight Sim in VR); perhaps there is still more in it, but at this point I'm fully satisfied.

So at 931mV it runs at 1950MHz core and 10002MHz on the VRAM; during extended gaming sessions it barely hits 60C at less than 60% fan speed. Cooling appears to be really good, as other cards seem to easily go up to 80C or more at similar load and settings. Time Spy graphics score is 18595.

Fun fact: with the 75Hz monitor or in VR, I haven't experienced a CPU bottleneck yet with the 7700K, not even in RDR2 (it runs smoothly without stuttering so far). In Flight Sim I'm occasionally main-thread limited, but given that it depends on single-thread performance, I wouldn't be that much better off with a contemporary CPU. I guess AC Odyssey or Origins would be fully CPU limited, though. But it holds up better than expected.
 

Graphics score is slightly lower than I'd expect. How are you verifying memory stability? Usually there is a significant range where you'll see performance stagnate or regress before actually seeing any artifacting or crashing as memory OC increases.


Both AMD's Zen 3 and Intel's new Rocket Lake parts could be expected to get at least 20% more lightly threaded performance (and much greater multithreaded, of course)...or more if you aren't able to clock your 7700K to 5GHz+.

Probably not worth a platform upgrade quite yet though, if the experience is good enough now.
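That ballpark 20% can be sanity-checked with back-of-the-envelope arithmetic: single-thread performance scales roughly with clock speed times IPC. A minimal sketch; the clocks and IPC ratio below are illustrative assumptions, not measured values:

```python
# Rough single-thread comparison: performance ~ clock * IPC.
# The IPC values here are illustrative assumptions, not benchmarks.
def st_uplift(base_clock_ghz, base_ipc, new_clock_ghz, new_ipc):
    """Relative single-thread performance of the new CPU vs. the old (1.0 = equal)."""
    return (new_clock_ghz * new_ipc) / (base_clock_ghz * base_ipc)

# A 7700K at a 4.5GHz all-core OC vs. a newer part at 4.8GHz with an
# assumed ~1.25x IPC advantage:
uplift = st_uplift(4.5, 1.00, 4.8, 1.25)
print(f"~{(uplift - 1) * 100:.0f}% faster single-thread")
```

If the 7700K is already sitting at 5GHz+, the clock term shrinks and the uplift comes almost entirely from the IPC assumption.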
 

What would be a good score? I'm unsure if it is dragged down by the CPU or by the fact that I'm running an older driver (457.03).
As far as I've seen, the memory clock speeds are stable, and I've been in quite long Flight Sim sessions too. RDR2 with DX12 was unstable at times, and oddly the speed was sometimes reduced to 9500MHz. Since I've swapped to Vulkan it has been rock solid.

I'm eyeing an Alder Lake platform as an upgrade.
 

If it's actually pegged to 1950MHz throughout the run, you'd probably see a GPU score around 20k. CPU really shouldn't be limiting the GPU score noticeably. If you can't hold 1950MHz in Time Spy, the voltage curve needs tuning or the power limiter needs to be increased.

You can see if the memory is actually stable by benching it at various speeds while it's warm (fixed lowish fan speed) and watching for the score to stop increasing with clock speed, then reduce it slightly from there (as this is the point where EDC errors become noticeable).
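The sweep described above reduces to finding where the score stops improving as the memory clock rises. A minimal sketch of that plateau-detection logic, assuming you've collected (offset, score) pairs from repeated benchmark runs; the scores and the `min_gain` threshold are made-up illustrations:

```python
# Plateau detection for a memory OC sweep: as the offset rises, the score
# climbs until error-detection retries start eating the gains. Pick the
# last offset whose score still improved meaningfully.
# All numbers below are made-up illustrative values.
def best_memory_offset(results, min_gain=0.2):
    """results: list of (offset_mhz, score) pairs, sorted by offset.
    min_gain: minimum % score improvement per step to count as progress.
    Returns the offset just before scores stagnate."""
    best = results[0][0]
    for (prev_off, prev_score), (off, score) in zip(results, results[1:]):
        gain_pct = (score - prev_score) / prev_score * 100
        if gain_pct < min_gain:
            return prev_off  # gains stalled: back off to the previous step
        best = off
    return best

runs = [(0, 18000), (250, 18090), (500, 18180), (750, 18200), (1000, 18150)]
print(best_memory_offset(runs))  # 500: the +750 step gained too little
```

In practice you'd want each score to be an average of several runs at a fixed fan speed and warm card, since run-to-run variance can easily exceed the per-step gain.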

Not sure why the memory would downclock in RDR2, unless the card is overheating, or you're doing something that utilizes CUDA. Any CUDA activity (including ShadowPlay or any other program that uses NVENC) will cause the card to switch to the P2 power state and knock 500MHz or so off the memory. This behavior can be overridden with NVIDIA Profile Inspector if it's an issue.
 
Even if optimal wanted to replace the memory--and had the tools and experience to do so--there aren't any better options for an RTX 3090. Micron is the only source for GDDR6X currently, and the RTX 3090 has the fastest bin currently produced.
 

Well, I went through the settings, and indeed it seems that DX12 controls the GPU in a more liberal way. From a previous testing session I had left the power slider somewhat higher, and I got spooked when I saw a 350W peak GPU draw (GPU-Z board power draw) during Time Spy, which I definitely don't want; reset to 100 and it went to 330W. Then I tried Unigine Heaven (DX11) and it stayed below 300W, which was in line with my previous findings.

I also found during Time Spy that at the higher power limit setting my clock speed went up to 1980MHz and then later dropped to 1900MHz. Under the standard power limit the clock started lower but was then stable at 1950MHz. However, the result was an 18k graphics score at the lower power limit despite it actually sustaining higher speeds during the second graphics test... odd?

Memory speed, however, was a rock-solid 10002MHz in all scenarios.

I did some reading on scores, and at these clock speeds an 18k graphics score seems OK; people close to or above 19k clock well north of 2000MHz. I also found out that system RAM speed affects the GPU score a lot (I have 16GB DDR4 set to 3200MHz). I wonder if it makes sense to take a second look at the RAM and see if I can clock it higher?

In any case, I will now target another profile at 880mV and a lower speed, as I'm not comfortable with the power draw.
 

DX12 doesn't control the GPU any differently in this regard; it's just that some tests are more demanding than others. Time Spy is up there, but still not as high as some real games; Unigine Heaven is well below what many actual games will pull. The default power limit for a 3090 is 350W, and it's easy to hit that limit in some apps even with a significant undervolt. If you want to ensure it doesn't pull more than 300W (aside from unavoidable transients), you'll want to set the power limit to 86%.
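The 86% figure is just the wattage target divided by the card's default limit. A trivial sketch using the wattages discussed in this thread:

```python
# Convert a wattage target into the percentage value for the power slider:
# percent = target_watts / default_limit_watts * 100.
def power_limit_percent(target_w, default_w):
    return target_w / default_w * 100

print(round(power_limit_percent(300, 350)))  # 3090 (350W default): 86
print(round(power_limit_percent(300, 320)))  # 3080 (320W default): 94
```

Alternatively, `nvidia-smi -pl 300` sets the limit in watts directly, if the card and driver permit it.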

Some of the 3090 reviews I've seen have 19-20k+ graphics scores with an average GPU clock in the 1950MHz range. Guru3D, for example, got 20144 with their 3090 FE sample and 20761 with their MSI Gaming Trio X, with only their Palit sample landing below 20k, and only by a few points (all run at out-of-box settings). Guru3D was using a stock 9900K and 4x8GiB of DDR4-3600 that probably wasn't tuned particularly well.

I personally get 18174 graphics score on my brother's mediocre 3080 FE sample with a 1950MHz static curve and no memory OC, using a 4.275GHz Ryzen R7 3700X and 4x8GiB of well-tuned DDR4 3600.

System memory is only worth a couple hundred points in the GPU score, at best, unless something is really wrong somewhere, and your 7700K shouldn't be a limiting factor in the GPU score. It could be some overhead from stuff running in the background, and you may want to get a score after a clean boot to see if that's the case, but this also shouldn't make a large difference.

Are you using an actual F/V curve or just an offset?

Also, the frequency of the curve is temperature dependent, so I generally manipulate it at the maximum operating temperature I'm comfortable with, while running a fixed fan speed and some load in the background.
 

Haha, mine is a 3080 too, not a 3090! Did I make a typo somewhere? :)
Then 18500 in Time Spy at 1950MHz is OK, I guess.

Downclocked to ~1.9GHz at 875mV; memory remains at 10002MHz as before. After a session in RDR2 it seems to be OK, and power draw was a comfortable 250W, though at only about 80% load. I left the power limit at 100 - should I decrease it, or is the voltage/MHz limit enough to keep it well below 300W?

(again, thanks for the detailed responses, I'm learning a lot from your posts)
 

I had the 3090 in mind for some reason.

~18500 is where you should be for the clocks you mentioned on a 3080. Apologies for the confusion.


The memory clock isn't dynamic and should only change in response to a different power state. The difficulty in assessing memory instability on modern cards is that they all have some form of error detection and will resend data that fails EDC/CRC checks, without actually changing clock speed. More errors mean less performance, without clocks or often even apparent stability being affected. That's why you need to find a very repeatable, VRAM-sensitive test and watch the scores it provides as memory speed is increased: as soon as performance stops going up, there are errors, and the clock should be backed off slightly, both to avoid the chance of an error eventually getting through and to avoid wasting any of that precious power budget on memory cycles that aren't doing anything productive.
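A toy model of why an over-aggressive memory OC hurts instead of helping: each EDC replay re-transmits data, so effective throughput is roughly the clock scaled down by the fraction of cycles spent on replays. The replay fractions below are invented purely for illustration:

```python
# Toy model: effective memory throughput ~ clock * (1 - replay_fraction),
# since cycles spent re-sending data after a failed EDC/CRC check do no
# useful work. The replay fractions are invented for illustration.
def effective_mbps(clock_mhz, replay_fraction):
    return clock_mhz * (1 - replay_fraction)

print(effective_mbps(10002, 0.00))  # stable clock: full throughput
print(effective_mbps(10500, 0.06))  # higher but erroring clock: slower overall
```

This is why the card can look "stable" (no artifacts, no crashes, unchanged reported clock) while actually performing worse than at a lower memory speed.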

As far as the power limit goes, if you have a power budget in mind, there is no harm in reducing the power limit to match. Most games aren't going to pull more, and for those few that do, well, you'd probably rather keep your PSU from failing than gain a few percent of performance.

The 3080 has a default limit of 320W, so 94% of that is roughly 300W.
 

Well, if it's any comfort, the best I've been able to do in Time Spy is 17k. I have an EVGA 3080 Ultra non-OC model. From what I've seen, 16-17k seems to be close to normal, but I feel like it can do better. I've not really had a chance to tune anything yet, though. I've not reloaded Windows either, and I've upgraded motherboard, CPU and GPU. But I did DDU the GPU drivers and stayed in the AMD family for motherboard and CPU: B450 to B550, 3600X to 5800X. My specs:

5800X, B550, 32GB @ 3333MHz, EVGA 3080
 

Note that we've been talking about the GPU score, not the overall score, as the GPU score is largely system independent. A stock 3080 FE tends to be just under 18k GPU score in Time Spy.

Also, EVGA sells about half a dozen 3080 SKUs with Ultra in the name somewhere, and they can be quite different from each other in cooling, PCB, and power limits. The XC3 variants actually tend to do slightly worse than the FE out of the box, but they are one of the few strictly two-slot alternatives to the FE, are very quiet, and have fair memory/VRM cooling. The FTW3 cards have a much larger PCB with more robust power delivery and much larger/better coolers.
 

Well, the mining profile is stable at about 830mV (225W power draw) with the memory clocked at 9830MHz, for about 92MH/s. Not sure if it is 100% efficient, but getting an extra 2% memory speed at 10% more power should scale acceptably. I will try to find benchmarks!
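Whether such a trade is worth it comes down to hashrate per watt. A quick sketch using the numbers above plus a hypothetical higher-power profile (+10% power for +2% hashrate, assuming hashrate tracks memory speed, which is only an approximation):

```python
# Mining efficiency as hashrate per watt (MH/J). The second profile is a
# hypothetical comparison point, not a measured result.
def efficiency(mh_s, watts):
    return mh_s / watts

current = efficiency(92, 225)                      # the 830mV profile above
hypothetical = efficiency(92 * 1.02, 225 * 1.10)   # +2% hashrate, +10% power
print(f"current: {current:.3f} MH/J, hypothetical: {hypothetical:.3f} MH/J")
```

Under these assumed numbers, the lower-power profile comes out ahead on MH per watt, even though the absolute hashrate is slightly lower.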

I accidentally overwrote the reduced profile, so I quickly did another at around 900mV and 1920MHz with a much-reduced power limit (I think 74?) that capped power consumption at 275W-ish. It maintained the clocks in the first Time Spy test, while in the second, more demanding one it throttled back to 1850MHz; the score remained slightly above 18k. I will probably refine it a bit so that it gets more consistent, but I guess it should be OK as a long-term gaming profile.

In the meantime I'm getting more concerned about how I will keep the jumbo cooler on this thing clean. There is a dryer not far away and it blasts a lot of dust around.
So I'm trying to make this a positive-pressure case by covering some of the non-filtered openings, and I've just ordered a new fan to install at the bottom, where there is a filtered intake.


Morbad is the expert here, but the standard profile is so aggressive that these cards respond very well to reducing power, as it keeps temps in check; hence there is some headroom when tweaking them for better performance. I haven't checked/tested the standard configuration, as I was afraid stock settings would overwhelm my puny 600W bronze-rated PSU.
Also, for simple availability reasons I got the top-of-the-range MSI Suprim X, which has an oversized cooler (I even had to remove my SATA rack to mount it), and it seems to run 20 degrees cooler than other mid-range designs. In the weird market we're in, for some reason it basically cost as much as any other 3080 that rarely pops up, so at least I got a 'good deal' and a tiny bit of peace of mind. :)
 

Most miners will use CUDA on NVIDIA GPUs and tend to be memory limited, so disabling the "force P2 state" option, or increasing the memory OC on that profile, is usually a good way to increase mining performance/efficiency.


Just make sure you have more/stronger intake fans than exhaust and that they're all filtered. If the case is relatively open, it probably doesn't even need any exhaust fans, just front and bottom intakes.
 

I have a large fan at the front bottom behind a removable filter, a smaller exhaust fan at the back (plus the CPU cooler fan that pushes air towards the exhaust fan), and another removable filter at the bottom where the PSU pulls in air; that's where I just put a 140mm fan.

I feel like the case is now pulling less air through the other, non-filtered openings, and now that this fan blows cool air towards the GPU, mining temps have dropped from 55C to 52C.
 