Latest Nvidia Ampere Rumours

Hmmm... If you are after ray tracing, this article suggests that a 30x0 is a better choice than a 6x00, although it doesn't say which 6x00 card was tested.


They don't know which card ran the test, or which settings were used. :/ Nvidia offering better RT was expected, but this tells us... little.
 
I think what it boils down to is that for most use cases raw power is at diminishing returns now, so a few percent here and there won't mean much. All tests are showing ridiculous FPS.

So two things matter IMHO:
  • Who can provide decent availability
  • Who can show better exclusive games with real benefits for their fancy new techs

The top end is always way past diminishing returns, unless there is no chance of competition. Whenever there are vaguely competitive products, no one leaves any low-hanging fruit, and most performance/power margins are quite thin.

There aren't many meaningful exclusive features, fortunately, and I am actively turned off by attempts at exclusion.

Hmmm... If you are after ray tracing, this article suggests that a 30x0 is a better choice than a 6x00, although it doesn't say which 6x00 card was tested.


Most of the leaks out there seem to have featured the 6800XT, which makes sense as the 6900XT is coming 20 days later and in much lower volumes. These results are in line with the 3DMark Port Royal leaks.

As Ian mentions, Ampere having superior ray tracing is expected, but how this translates into actual game performance is unknown. A pure RT or RT-focused benchmark will surely show more of a discrepancy than most games run at playable settings.

This gap does provide opportunity for NVIDIA's marketing department either way. If they can make RDNA2 look worse than it is by overemphasizing ray tracing, they will, even if it hurts their own performance.
 
I also wonder what it will mean for optimisation that both new consoles are Navi based. I can imagine that may make it more tempting to widely adopt their counter to DLSS.
 
More info re the new AMD cards, specifically around features and how they have worked together on Ryzen and RDNA2 when designing the products. Also shows, for those interested, that they have partnered with EKB for waterblocks.

What's really striking is the size difference between these cards and the NVIDIA equivalents. The 3090 in particular is a monster, whereas the 6900XT is a standard 26 cm in length.

 
Hmmm... If you are after ray tracing, this article suggests that a 30x0 is a better choice than a 6x00, although it doesn't say which 6x00 card was tested.

edit: Which ties in with the Port Royal synthetic benchmark leaks.


Mixed bag for me. For most of my use cases, I don't need a new GPU at 1440p/75Hz - what's left is essentially VR with Flight Sim. So at the very least, good ray tracing is a must for me, especially as I think a lot of games will adopt it.

The top end is always way past diminishing returns, unless there is no chance of competition. Whenever there are vaguely competitive products, no one leaves any low-hanging fruit, and most performance/power margins are quite thin.

There aren't many meaningful exclusive features, fortunately, and I am actively turned off by attempts at exclusion.

These days it feels different, as monitors go for stupid resolutions and refresh rates to keep more GPU power relevant. See Nvidia's foolish push into 8K, since for that you really need both raw power and DLSS. But nobody has or needs 8K, and for most monitors, in legacy games, a 3070 is already overkill - Nvidia was smart with the previous gen, identifying that for mainstream monitors 1080/Ti-class performance is all you need for the next couple of years.
They had an advantage, but with console ray tracing support, ports to PC will be easy, I guess, so AMD can quickly leapfrog them. What's interesting are the strategic partnerships: CD Projekt and Epic with Nvidia, others with AMD - almost like on consoles.
 
I also wonder what it will mean for optimisation that both new consoles are Navi based. I can imagine that may make it more tempting to widely adopt their counter to DLSS.

DLSS has always been an extremely niche feature, because simpler implementations have proven to offer comparable advantages, and there are always trade-offs to upsampling/sharpening vs. rendering at native resolution.

That said, NVIDIA still has a commanding lead in PC market share and there are far more games with NVIDIA logos than AMD logos, so NVIDIA will be able to push DLSS for a while yet.

These days it feels different, as monitors go for stupid resolutions and refresh rates to keep more GPU power relevant. See Nvidia's foolish push into 8K, since for that you really need both raw power and DLSS. But nobody has or needs 8K, and for most monitors, in legacy games, a 3070 is already overkill

My displays are 1440p144/165 and 4K60, and even in many legacy titles I would want more performance than I could get out of a 3090 or 6900XT. There are usually meaningful IQ settings I can jack up. Elite: Dangerous and Path of Exile are both old, and are both completely GPU limited on a 3080 at settings that I actually use.

They had an advantage, but with console ray tracing support, ports to PC will be easy, I guess, so AMD can quickly leapfrog them. What's interesting are the strategic partnerships: CD Projekt and Epic with Nvidia, others with AMD - almost like on consoles.

Ray tracing on consoles just means that the baseline performance will be there no matter what GPU with hardware ray tracing one chooses. It doesn't negate the utility of more ray tracing performance.

More info re the new AMD cards, specifically around features and how they have worked together on Ryzen and RDNA2 when designing the products.

There is zero technical reason why Smart Access Memory wouldn't work on any platform, nor any reason it wouldn't work just as well on any PCI-E 4.0 platform. Limiting it to the Ryzen 5000 series is purely a marketing move and has nothing to do with any special feature or design consideration of the Ryzen platform. AMD's video drivers are probably just checking CPU identifiers and allowing it to be enabled if there is a Ryzen 5000 present.
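To illustrate the kind of check being speculated about here, a minimal sketch follows. Assumptions: Linux's /proc/cpuinfo as the identifier source and CPUID family 0x19 as the value Zen 3 / Ryzen 5000 parts report; this is not AMD's actual driver logic, just the shape such a gate could take.

```python
# Purely hypothetical sketch of the sort of gate speculated about above -- NOT
# AMD's driver code. It reads /proc/cpuinfo on Linux and checks for an AMD part
# reporting CPUID family 0x19 (what Zen 3 / Ryzen 5000 CPUs report).

def looks_like_ryzen_5000(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    fields = {}
    with open(cpuinfo_path) as f:
        for line in f:
            if not line.strip():
                break  # only the first processor block is needed
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return (
        fields.get("vendor_id") == "AuthenticAMD"
        and int(fields.get("cpu family", "0")) == 0x19
    )

# A driver-side policy of the kind described would then be roughly:
# enable_smart_access_memory = looks_like_ryzen_5000() and pcie_gen >= 4
```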

It wouldn't surprise me in the slightest for a hack or mod to show up that allowed the feature to be enabled on other CPUs, nor would it surprise me if there was actually more of a relative benefit on Intel parts, especially when they get PCI-E 4.0+ shortly, as they still have lower memory and I/O latency.

While this won't deter me from upgrading to a 5000 series at some point, it's sure not going to incentivize it. As a GPU-only feature, it would have been a modest incentive for me to choose an RDNA2 part, but arbitrary CPU and chipset restrictions are a major turn-off, even though I already have a B550 board and was planning on replacing my 3900X with a 5950X at some point (after 5950X yields improve and it's seen its first price cut).
 
My displays are 1440p144/165 and 4K60, and even in many legacy titles I would want more performance than I could get out of a 3090 or 6900XT. There are usually meaningful IQ settings I can jack up. Elite: Dangerous and Path of Exile are both old, and are both completely GPU limited on a 3080 at settings that I actually use.



Ray tracing on consoles just means that the baseline performance will be there no matter what GPU with hardware ray tracing one chooses. It doesn't negate the utility of more ray tracing performance.

Because you are at the very top of the gamer ecosystem. :)
It's not representative, but as far as I can tell from reading posts on the enthusiast subreddits, I get the impression that most are building a system to run Cyberpunk with ray tracing on, on their - more or less - standard 1440p monitors. That game - like most modern titles - will probably scale very well on lower-end systems; it is the ray tracing that brings the wow factor IMHO, and not continuous progress towards 8K displays and 300+Hz monitors.

Ray tracing: so you are saying that Nvidia's RTX will be fully utilized in every DX12 ray tracing application? Because if so, I think we will see a huge gap in performance as AMD doesn't seem to have dedicated hardware for it. For example, I noted that Battlefield V was benchmarked by AMD in DX11 too (why?).
 
I was quite happily prepared to pull the trigger on a 3080 - only unavailability has kept me from doing so - but now my finger pressure has eased off... Would any of you knowledgeable folk hazard a guess as to what would be best for VR performance, based on the paper specs?

The RTX 3080 has a 1.71GHz clock and 10 GB of GDDR6X, while the RX 6800 XT has 2.25GHz and 16GB of GDDR6. Performance in Elite using a Rift S is my main VR want.
 
Apparently the full Navi21 chip has 128 ROPs. This goes a long way to explaining how competitive it is. Earlier information strongly suggested 64.

Both the 6900XT and 6800XT appear to have all 128, but some sources are saying the 6800 has 96. Looking for confirmation now.

If true, this makes the 6800XT even more of a clear-cut winner in this lineup.

Ray tracing: so you are saying that Nvidia's RTX will be fully utilized in every DX12 ray tracing application?

No, but most ray tracing applications have meaningful settings that will result in ray tracing performance being the primary limiting factor.

Because if so, I think we will see a huge gap in performance as AMD doesn't seem to have dedicated hardware for it.

I initially expected AMD to just leverage co-opted shaders to accelerate ray tracing, but they do seem to have dedicated hardware for it in RDNA2, in the form of one Ray Accelerator per CU. Other than this being tightly coupled to the Infinity Cache, little is known about this implementation.


For example, I noted that Battlefield V was benchmarked by AMD in DX11 too (why?).

They supposedly used whichever API performed best.

I don't know much about BFV, so I can't really comment on the implications of the API choice in that title. I have heard DX12 has some issues though.
 
I was quite happily prepared to pull the trigger on a 3080 - only unavailability has kept me from doing so - but now my finger pressure has eased off... Would any of you knowledgeable folk hazard a guess as to what would be best for VR performance, based on the paper specs?

The RTX 3080 has a 1.71GHz clock and 10 GB of GDDR6X, while the RX 6800 XT has 2.25GHz and 16GB of GDDR6.

Those specs don't say much of anything, absent a lot of other context, and there are other specs to consider.

Performance in Elite using a Rift S is my main VR want.

Elite doesn't really need much VRAM (the game will use all you've got as it's not prematurely evicting assets any more, but you aren't going to notice a performance difference from capacity itself between these parts, unless you go really bonkers on certain texture sizes for no good reason) and doesn't have any ray tracing.

Elite also is not particularly shader dependent, so having truckloads of ALUs isn't likely to be a good performance indicator either.

My vaguely educated guess, based on the specs known for each part and what experience I have, is that the 6800XT will have a small edge over the 3080 in Elite: Dangerous, including in VR. The reason for this is that I know ED is primarily fill-rate limited, and the 6800XT apparently has more ROPs and slightly more TMUs, as well as significantly higher clocks--though most 3080s without any tweaking will sit around 1950MHz in a non-shader-limited game, so the difference is not as extreme as the paper specs make it sound.

However, AMD's drivers generally have much higher overhead in DX11 than NVIDIA's. If you don't have a sufficiently fast CPU, this could easily crap up your VR experience. That said, it's easy to get a sufficiently fast CPU, because Elite: Dangerous itself is not particularly CPU limited. Any recent AMD or Intel CPU with at least six physical cores should be plenty fast enough to keep over 80 fps (the Rift S is 80Hz, IIRC?) at all times.

The difference probably won't be huge in either case. Both parts should be well capable of driving a Rift S with significant amounts of supersampling at well beyond ultra in-game settings, but if I had to choose, sight unseen, I'd probably grab the 6800XT for ED VR.

Of course, waiting for actual tests, if possible, would be wise.
 
Those specs don't say much of anything, absent a lot of other context, and there are other specs to consider.



Elite doesn't really need much VRAM (the game will use all you've got as it's not prematurely evicting assets any more, but you aren't going to notice a performance difference from capacity itself between these parts, unless you go really bonkers on certain texture sizes for no good reason) and doesn't have any ray tracing.

Elite also is not particularly shader dependent, so having truckloads of ALUs isn't likely to be a good performance indicator either.

My vaguely educated guess, based on the specs known for each part and what experience I have, is that the 6800XT will have a small edge over the 3080 in Elite: Dangerous, including in VR. The reason for this is that I know ED is primarily fill-rate limited, and the 6800XT apparently has more ROPs and slightly more TMUs, as well as significantly higher clocks--though most 3080s without any tweaking will sit around 1950MHz in a non-shader-limited game, so the difference is not as extreme as the paper specs make it sound.

However, AMD's drivers generally have much higher overhead in DX11 than NVIDIA's. If you don't have a sufficiently fast CPU, this could easily crap up your VR experience. That said, it's easy to get a sufficiently fast CPU, because Elite: Dangerous itself is not particularly CPU limited. Any recent AMD or Intel CPU with at least six physical cores should be plenty fast enough to keep over 80 fps (the Rift S is 80Hz, IIRC?) at all times.

The difference probably won't be huge in either case. Both parts should be well capable of driving a Rift S with significant amounts of supersampling at well beyond ultra in-game settings, but if I had to choose, sight unseen, I'd probably grab the 6800XT for ED VR.

Of course, waiting for actual tests, if possible, would be wise.
Thanks for the reply - my CPU is an i7-8700, so it sounds like either/or is the choice for the graphics card.

Stock levels (i.e. sold out immediately 😁) will probably allow me to wait on reviewers' tests, but VR tends not to get mentioned.
 

Stock levels (i.e. sold out immediately 😁) will probably allow me to wait on reviewers' tests, but VR tends not to get mentioned.
I've been wondering what effect Infinity Cache will have on VR - as it effectively needs to render twice, once for each eye - and if the required data is cached then it may be quicker than going to GPU RAM each time.
 
I've been wondering what effect Infinity Cache will have on VR - as it effectively needs to render twice, once for each eye - and if the required data is cached then it may be quicker than going to GPU RAM each time.

Infinity Cache is just there to mitigate the need for more complex memory controllers and trace routing. I'm doubtful that it will result in significant VR-specific gains. It's not beyond the realm of possibility, but I don't expect memory bandwidth/performance to be the limiting factor.
 
I forgot it was the 3070 "launch" today... From what I have seen, there seems to have been next to no availability of that card either - is that correct?
 
I forgot it was the 3070 "launch" today... From what I have seen, there seems to have been next to no availability of that card either - is that correct?

Sold out in minutes.

Hardware Unboxed's AIB 3070 review seems to indicate a lack of overclocking potential within power limits.

That's the entire Ampere line up. At least the 3070FE doesn't have the thermal issues I've seen on the 3080.

I still want to see what a totally unconstrained Ampere can do, but I can't find any competent water cooled reviews/tests with a higher power board.

People have made the comparison to Fermi, usually as a disparagement, but if I thought Ampere could do what Fermi could, I'd get one in a heartbeat. My GTX 480 was one of my favorite cards because, in addition to being the fastest card out at the time, it was also an awesome overclocker, if you could cool it. I gave mine its own modest loop (GPU-only block, retaining the stock VRM and GDDR cooling plate and blower, a 50mm-thick 120 rad, and a 120x38 San Ace fan with a shroud/duct to remove the hub's dead zone, all paired with some cheap XSPC pump/res combo) and was able to take it from 700MHz stock to 918MHz stable. It was as fast as my 5850 Crossfire setup, by itself... It also used the same power (about 400w in Furmark, which was a lot for a single video card in 2010).

I don't think anywhere near a 30% OC (which is roughly what 700MHz to 918MHz works out to) would be possible on Ampere, at least not without sub-ambient cooling, but a 450w model that was kept at ~50C maximum load would probably be pretty impressive.
 
Hahaha, a 450w model. A 6-slot card? How do you suggest this? Let's push the RX 6000 over 500 watts... Why not 800?
There is a limit to silicon. That is why Nvidia resorting to 350w cards is a red flag. Leo @kitguru had a nice take on it in his recent video.

I am looking forward to Nvidia's next move. I hope it involves pricing, but I doubt Nvidia will want to dig into its margin. This would be good for the consumer.
 
Hahaha, a 450w model. A 6-slot card? How do you suggest this?

The EVGA RTX 3080 FTW3 Ultra is already 450w and the ASUS 3080 STRIX is 420w. Three 8-pin PCI-E connectors and a PCI-E slot are good for 525w before exceeding spec.
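A quick sketch of the arithmetic behind that 525w figure, using the standard PCI-E power ratings (150w per 8-pin connector, 75w from the x16 slot):

```python
# PCI-E spec power ratings: 150 W per 8-pin connector, 75 W from the x16 slot.
EIGHT_PIN_W = 150
SLOT_W = 75

board_power_budget = 3 * EIGHT_PIN_W + SLOT_W  # three 8-pins plus the slot
print(board_power_budget)  # 525 W before exceeding spec
```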

Much past 450w board power is not practical to cool with any reasonable air cooler, but a full cover block can handle quite a bit more than 450w, with a single slot.

Hell, my GTX 1080 Ti is 375w and I'm cooling it quite well with a crappy Asetek 120mm AIO on the GPU, a fistful of assorted memory/VRM sinks epoxied to the board, and a 92mm fan over the VRM. A not-too-dissimilar setup (much thicker rad, though) cooled my GTX 480 at 400w+ well enough.

A single slim 360 rad can move 500w of heat with a better temperature delta and less noise than a triple slot air cooler will see while only moving 300w.

Let's push the RX 6000 over 500 watts... Why not 800?

I'm sure whatever the top LN2 record for a Navi21 part is, it will peak at over 800 watts, card only, when they bench it.

There is a limit to silicon.

Which is ultimately dependent on power density and temperature.

Even with plain watercooling, 450w isn't anywhere near the GA102's limit.

I am looking forward to Nvidia's next move. I hope it involves pricing, but I doubt Nvidia will want to dig into its margin. This would be good for the consumer.

NVIDIA will still be able to claim a decisive win where ray tracing is involved, and they'd much rather release a refresh, probably on Samsung initially (a 3070 Ti and 3080 Ti are fairly credible rumors), then on TSMC later. I'm doubtful of any forthcoming price drops, as all indications are that they will be able to sell every Ampere part they or their AIBs ship.
 