Latest Nvidia Ampere Rumours

To me the most telling thing was AMD's coyness about showing ray-tracing images or stats in their demos. I'm waiting for third-party evaluation of their ray-tracing performance, because this makes me unsure it will be as good as NVIDIA's.
 
To me the most telling thing was AMD's coyness about showing ray-tracing images or stats in their demos. I'm waiting for third-party evaluation of their ray-tracing performance, because this makes me unsure it will be as good as NVIDIA's.

RDNA2's ray tracing performance almost certainly won't be as good as Ampere's. I expect NVIDIA's entire product stack to hold a commanding lead here, dollar for dollar.

The question becomes "how much ray tracing do we actually need?"

AMD's featured titles that have ray tracing use it very sparingly, where it will actually matter (Dirt 5's shadows, for example), almost certainly because they don't have RT performance to squander. Other developers will follow suit, likely only over-emphasizing ray tracing at the behest of NVIDIA, who will use it to claim a decisive advantage over AMD, because, on paper, it will be one. I suspect we'll ultimately have a situation similar to what we had with tessellation years ago: NVIDIA pushed extreme levels of tessellation, far beyond what was actually beneficial to IQ, at the expense of the gaming experience on their own parts, just because it hurt AMD's parts more.

However, NVIDIA will have trouble dictating ray tracing to the masses despite their market-share lead, because most people won't have high-end Ampere parts, so all but a handful of ray-traced titles will have at least the option of more conservative RT settings, many of which probably won't look any worse.

In the end, I'm not terribly worried about not having enough RT performance with a Radeon RX 6xxx for the same reason I'm not worried about not having enough VRAM with a 3080...the situations where either is a major factor will be fairly rare outliers.
 
So ray tracing isn't just on/off; there are settings in between? It could give Turing cards a new lease of life...

It can be used to whatever extent developers like and most games featuring ray tracing have quality levels for it.

And yes, Turing, or even cards without any hardware ray tracing at all, could benefit from more conservative/granular ray tracing implementations.

People want to game with these cards. They do not want extra heaters in the house or to go on a mod spree.

People want to do a lot of things with these cards.

I'm going to game, encode, fold, and mine (that huge LLC on the Navi21 will probably let my 6800XT pay for itself this winter) on whatever I get. I am also going to take it apart and mod the snot out of it at some point. All of these areas concern me, and the hardware itself is a major hobby of mine.

As for 'extra heaters', whatever power limit I have to play with is just that, a limit. I can use as much or as little as I like. It's 4C outside now, and in a month my furnace will struggle to keep my old house at 20C inside, so any amount of extra heat from my computers is no burden at all. In the summer, I have the opposite problem, but since a ~500W card can be run at ~200W while only losing about 20% of its performance, I'll be in a better position than if I gave a damn about the stock settings.
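As a rough illustration of that trade-off, here's a toy Python sketch using a crude cube-root power/performance model (near the top of the voltage/frequency curve, power grows roughly with the cube of clock speed). The numbers are illustrative, not measurements of any real card:

```python
# Crude model: P ~ V^2 * f, and V scales roughly with f near the top of
# the curve, so power grows ~f^3 and performance scales with roughly the
# cube root of the power limit. Illustrative only; real cards deviate.

def relative_perf(power_limit_w: float, stock_power_w: float) -> float:
    """Estimate the fraction of stock performance kept at a lower limit."""
    return (power_limit_w / stock_power_w) ** (1 / 3)

for limit_w in (500, 400, 300, 200):
    print(f"{limit_w:>3} W -> ~{relative_perf(limit_w, 500):.0%} of stock performance")
```

That model puts a 500W card at roughly 74% of stock performance when capped to 200W, which is the same ballpark as the ~20% loss above; the exact curve depends on the silicon and the workload.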

I can assess a part on its stock merits, and find this to be a relevant baseline, but I'm personally far more concerned with what's actually in the box than with the mix of marketing and vague guidelines printed on the box. I know my use cases are not typical, but I'm also not pushing them on anyone else, or presuming what anyone else will choose to do with their parts.

The race is being run, and you are declaring the winner over and over.

Is this a problem?

Between these new parts, there are many races being run, and they aren't all going to have the same winner, because they are different parts, with different properties, amounting to different strengths and weaknesses.

You comment only on what suits you. How convenient of you not to react to Leo's video...

If you want me to comment on something, you'll have to provide a link. I can't be expected to be aware of every video every editor of every tech site produces.

Picking apart only statements that you can attack.

Why would I pick apart something I agree with?

You are way too biased.

This comes off as more than a little hypocritical.

What do you think my biases are?

Certainly in reacting to other people's posts and needing 'to correct others so you are the only one giving the correct answer'.

I've made no claims of being the only one with correct answers, nor that I'm infallible. I think I usually do a pretty good job of assessing the information on hand, but that information is often in flux, and even when it's not, my interpretations are not always correct...it's still more informed, and ultimately more accurate, than quite a bit of the content pushed by people making a buck off YouTube.

You can speculate and make what-if statements, but others cannot?

I will freely speculate about what I suspect the implications of some facet of a product are, while also challenging assertions that one can divine everything there is to know about the underlying hardware by the arbitrary specifications imposed upon the products built from them.

Presenting my view isn't stopping anyone else from doing the same.
 
AMD has released some more benchmarks comparing the RX 6000 series with the RTX 3000 and 2080 Ti:

Note that those appear to have Smart Access Memory enabled, but not Rage Mode (a power-limit increase, allowing higher boost clocks). Smart Access Memory appears to be a full, working implementation of WDDM resizable BAR support, allowing more efficient mapping of the GPU's memory address space into the system's and reducing the overhead of memory operations. This is one of the reasons AMD is locking it to their platforms: it requires firmware support to correctly allow the PCI-E root hub access to the appropriate memory address range.
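For the curious, here's a minimal sketch of how you could see the effect of resizable BAR yourself on Linux, by reading a GPU's PCI memory regions out of sysfs. The PCI address below is hypothetical; substitute your own:

```python
# Minimal sketch (Linux only): print the sizes of a GPU's PCI memory
# regions by parsing sysfs. With resizable BAR in effect, one region
# should span the card's entire VRAM (e.g. 16 GiB) instead of the
# traditional 256 MiB window. Find the real address with `lspci`.

PCI_ADDR = "0000:0b:00.0"  # hypothetical device address

with open(f"/sys/bus/pci/devices/{PCI_ADDR}/resource") as f:
    for i, line in enumerate(f):  # one line per PCI resource, BARs first
        start, end, _flags = (int(field, 16) for field in line.split())
        if end > start:  # unused entries are all zeros
            print(f"region {i}: {(end - start + 1) / 2**20:.0f} MiB")
```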

I take AMD at their word about these results, but it's still wise to keep in mind that this is AMD's chosen lineup and they wouldn't be showing something that didn't depict their parts in a positive light. Still, it's a pretty good spread of titles, over all three major APIs.

If you find AMD's site tedious, VideoCardz has summarized them here: https://videocardz.com/newz/amd-dis...900xt-rx-6800xt-and-rx-6800-gaming-benchmarks

I haven't looked at them in depth yet, but I do note that the AMD parts seem to scale slightly less well with resolution than the NVIDIA parts. Probably more of a shader limitation than a memory limitation, as the 6900XT loses less performance than the other Navi21 parts, despite the entire RX 6000 series having the same memory configuration.
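To make the comparison concrete, this is the kind of arithmetic I mean, with entirely made-up frame rates (not AMD's numbers):

```python
# Hypothetical frame rates, purely to illustrate the comparison:
# what fraction of its 1440p performance does each card keep at 4K?
fps = {
    "hypothetical card A": {"1440p": 140, "2160p": 90},
    "hypothetical card B": {"1440p": 130, "2160p": 78},
}

for card, results in fps.items():
    retained = results["2160p"] / results["1440p"]
    print(f"{card}: keeps {retained:.0%} of its 1440p performance at 4K")
```

The card that keeps the larger fraction is scaling better with resolution; by that measure, AMD's own charts slightly favor the NVIDIA parts.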
 
Some more recent rumors on possible counters to the RX 6000 lineup from NVIDIA:


Looks like the previously canceled 3080 20GiB could be replaced by a 3080 Ti with 20GiB of memory and more SMs. This would make sense, as a 3080 with double the memory would increase cost but not make the part look more favorable vs. the 6800XT or 6900XT in most benchmarks. An SKU with more SMs would likely allow it to convincingly beat the 6800XT and compete directly with the 6900XT...at the cost of cannibalizing some of the 3090's market. It will be interesting to see how that pans out.

It also appears that all the previously rumored 3070 Ti SKUs have been canceled. This is unfortunate, as that part looked like it would have been better balanced than the 3080 and faster than the 6800, but likely would have been appreciably cheaper than the 6800XT.

Anyway, we're two weeks away from 6800 and 6800XT availability. I don't think I'm going to wait for non-reference models or even comprehensive reviews...though I'll certainly read them as they become available. I've decided I'll try to get a 6800XT out of the gate for the following reasons:

  • The 3080 FE is the only 3080 I am likely to be able to get my hands on that will fit in my intended system, and from first-hand experience, I do not want the FE cooler (memory/VRM temperatures are too high).
  • A GA102 3070 Ti is either not being released or has been pushed back to a far later date. My system will be ready for a new card very soon.
  • Most of the games I expect to play aren't likely to be very ray tracing heavy, and some are more fill rate than shader dependent, so the 6800XT will probably match or beat the 3080 FE anyway.
  • I expect Navi21 supply to be tight...not as bad as Ampere, but nowhere near enough to meet demand.
  • Neither the 6800 nor the 6900XT are really contenders on the same level of value as the 6800XT. The 6800 is overly castrated for its price, while there is even less of a difference in performance between the 6800XT and the 6900XT than there is between the 3080 and 3090. The 6900XT may well have a higher peak power limit, or some advantage in binning, but I'm sure not going to pay 350 dollars more for it.
  • AMD has a long-running trend of solid reference PCBs (the HD 5850 was the last time I felt a reference card was a major negative), so the reference 6800XT cooler is unlikely to be deficient.
  • Non-reference Navi21 parts are still a ways off.

Obviously, we'll have to wait for reviews to see where everything lands, but with what is known at this point I'd bet on:
  • The 6800XT is going to trade blows with the 3080, while the 6900XT will prove to be between the 3080 and 3090, overall.
  • All of these parts are going to be power limited in most GPU-limited scenarios. Thus out-of-the-box power consumption will typically be pegged at TDP, and power efficiency will be slightly in favor of AMD, again out of the box.
  • Ray tracing will be a particular weak point for RDNA2, relative to Ampere, but not by enough to make or break performance in most titles using ray tracing. It will skew the averages of test lineups that include such titles in favor of NVIDIA, however.
  • The differences between the architectures and SKUs will result in a new counter-optimization war between AMD and NVIDIA, with AMD-focused titles wasting VRAM to make the 3080 and 3070 look bad, while NVIDIA-backed titles will have wasteful levels of RT and excessive shader effects to make RDNA2 look bad. People will quickly realize that using the next lower texture or ray-tracing quality setting barely changes IQ, but fixes all performance issues on the opposing hardware.

Disclaimer: Yes, I am speculating about unreleased hardware in the thread for speculative rumors about unreleased hardware.
 

Robert Maynard

Volunteer Moderator
It seems that RDNA2's quoted (but not yet independently verified) performance and pricing has lit a bit of a fire under Team Green's product stack.
 
Some more recent rumors on possible counters to the RX 6000 lineup from NVIDIA: [...]

I've decided to wait until the dust settles; again, only Flight Sim in VR could trigger a real need for a new GPU, but that game needs another year of refinement to exit its de facto beta state. Besides, loads of other expenses are piling up these days.

But to me the 3080 Ti is shaping up to be the card I want. I guess it will be priced at $950-1000, so not exactly good value vs. the 3080, but that price point would be OK with me.

VRAM cooling: hasn't NVIDIA fixed that with extra thermal pads already?
 
VRAM cooling: hasn't NVIDIA fixed that with extra thermal pads already?

Not all 3080 FE samples sold got the extra pad, and the 5-10C that pad saves still leaves very little thermal margin if you are hitting the board's power limit and don't want the fan to sound like a tornado.

I cannot play RT- or shader-limited titles on my brother's 3080 FE without the fan kicking up to 100% PWM (3400+ RPM, which is loud) or the card throttling itself down to 1200-1300MHz core (which is slow). He also won't let me take it apart to see whether that pad is there or not.

I know the temperature limit being hit isn't the GPU core, because core temperatures can barely crack 60C at reduced fan speed, yet the throttling persists. It could be that this sample is just defective/an outlier, but with everyone measuring 85C+ on the back of their review samples during less stressful tests, I do not think that is the case.
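For anyone wanting to check their own card, here's a rough sketch of how I'd watch for this. nvidia-smi doesn't expose memory-junction temperature (tools like HWiNFO do on Windows), so the tell is clocks dropping while the core temperature stays low:

```python
# Sketch: poll nvidia-smi once a second and log the values relevant to
# throttling. If SM clocks fall while the core temperature stays in the
# 60s, the limiter is something the core sensor can't see (VRAM/VRM).
import subprocess
import time

FIELDS = "temperature.gpu,clocks.sm,power.draw,fan.speed"

while True:
    row = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    temp_c, sm_mhz, power_w, fan_pct = row.split(", ")
    print(f"{temp_c} C  {sm_mhz} MHz  {power_w} W  fan {fan_pct}%")
    time.sleep(1)
```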
 
And here's me replaying Black Mesa right now. So I could run that on virtually anything. :)

Joking aside, I'm keen to see what the RTX 3050 Ti is like. I think I might upgrade at that point, but as I don't feel the need to game above 75Hz at 1080p (and my monitor can't do that anyway), I don't know what value I would extract from a high-end GPU. Even the difference between medium and ultra settings is pretty marginal for me these days.
 
And here's me replaying Black Mesa right now. So I could run that on virtually anything. :)

Joking aside, I'm keen to see what the RTX 3050 Ti is like. I think I might upgrade at that point, but as I don't feel the need to game above 75Hz at 1080p (and my monitor can't do that anyway), I don't know what value I would extract from a high-end GPU. Even the difference between medium and ultra settings is pretty marginal for me these days.

Doesn't sound like you have any reason to be in a rush for an upgrade, but from the currently expected specs, the RTX 3050 Ti looks like it would be roughly comparable to an RTX 2070/2080.
 
I've been looking at water-cooling options for my brother's 3080 FE. I'm probably leaning toward a full-cover block setup despite the cost, for less complexity and ease of restoring it to the default configuration (he wants to keep the warranty, so epoxying heatsinks all over the board is right out).

Current full-cover block options for the 3080 FE appear to be limited to EK, Bykski, and Corsair.

EK's block is a massive rip-off for anyone that doesn't care about RGB, so I can eliminate that. The choice between the Bykski and Corsair blocks is a bit more difficult.

Also, some more information on RTX vs. DXR and which titles are using proprietary extensions that will limit ray tracing to NVIDIA hardware:
 
Just a rumor currently, but one that I find credible:

(that huge LLC on the Navi21 will probably let my 6800XT pay for itself this winter)

One of the reasons I've been looking at RDNA2 is the mining potential. Most profitable GPU hashing algorithms that haven't been taken over by dedicated mining ASICs are memory-hard, specifically to keep them profitable on consumer GPU hardware. The large cache on Navi21 would naturally benefit hashing performance in such algorithms.
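To illustrate what 'memory-hard' means here, a toy sketch of the access pattern (not any real coin's algorithm): each step does a data-dependent read into a large buffer, so throughput is bounded by memory latency, and a cache big enough to hold a sizeable slice of that buffer pays off directly.

```python
# Toy memory-hard inner loop: every round derives an unpredictable index
# from the running hash and reads that slot of a large dataset, so the
# bottleneck is memory latency, not compute. A big on-die cache that can
# hold much of the dataset absorbs many of those reads before DRAM.
# Sketch only; real algorithms (Ethash, etc.) differ in detail.
import hashlib

def memory_hard_mix(seed: bytes, dataset: list, rounds: int = 64) -> bytes:
    mix = hashlib.sha256(seed).digest()
    for _ in range(rounds):
        idx = int.from_bytes(mix[:8], "little") % len(dataset)
        mix = hashlib.sha256(mix + dataset[idx]).digest()
    return mix

# e.g. dataset = [hashlib.sha256(i.to_bytes(4, "little")).digest()
#                 for i in range(1 << 20)]  # ~32 MiB of 32-byte items
```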

Most GPU miners have probably been anticipating this from the moment that 128MiB infinity cache was rumored.

Unfortunately, this also reduces the prospects for a more ready supply of these parts, especially early on.
 
Most GPU miners have probably been anticipating this from the moment that 128MiB infinity cache was rumored.

Unfortunately, this also reduces the prospects for a more ready supply of these parts, especially early on.
That’s grim news; after being frustrated when trying to get a 3080, it sounds like my shift target will suffer the same fate.

I’m really hoping to upgrade my 1080 prior to Odyssey launching, but it’s starting to look less and less likely 😅
 
That’s grim news; after being frustrated when trying to get a 3080, it sounds like my shift target will suffer the same fate.

I’m really hoping to upgrade my 1080 prior to Odyssey launching, but it’s starting to look less and less likely 😅
However, your 1080 should be more than enough to play Odyssey.

 