Hardware & Technical Latest Nvidia Ampere Rumours

Among those I'm eyeing the MSI Ventus OC, hoping it gets a better PCB than the standard version. I don't plan on heavy overclocking long term (so I'm OK with the lower power limit and 2x8-pin); short term I'd rather undervolt, as I'm slightly limited on PSU.

The FE and reference PCBs are solid and even the budget AIB boards look to be more than sufficient. If you aren't trying to exceed power limits, all that matters is luck of the draw on the GPU itself and the cooling that's on it.
 

So you think going for the 'OC' version is a waste of money, since the PCB isn't binned?
 

The GPUs themselves might be, and I suspect most of the AIBs are saving the top GPU bins for their top parts, but the PCBs themselves will have negligible impact on anything, except at the very top end (where they may have better power delivery and GDDR6X trace work).

I suspect most of those "Bin 2" parts are being saved for the as-yet-unseen top-tier SKUs like the FTW3 Ultra (EVGA), the Lightning (MSI), the Aorus Xtreme (Gigabyte), etc. Unless you are holding out for one of those models, you could buy almost anything and likely have the same odds of getting a good (or bad) die. Additionally, not all dies that fall into the lower bins are automatically worse for undervolting/underclocking, as the same properties that can result in poor frequency scaling can also result in superior efficiency at more conservative clocks (higher-clocking parts also have a strong tendency toward greater leakage current).
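To put rough numbers on why undervolting pays off: dynamic power scales roughly with V² x f, so even a modest voltage drop at near-identical clocks cuts power disproportionately. A back-of-envelope sketch (all voltages, clocks, and the scale constant are illustrative assumptions, not measured Ampere data):

```python
# Toy model: dynamic power scales ~ V^2 * f (capacitance folded into a constant).
# All numbers below are illustrative assumptions, not measured GPU data.

def dynamic_power(voltage, clock_mhz, k=1.0e-4):
    """Rough dynamic power estimate in watts: P ~ k * V^2 * f."""
    return k * voltage**2 * clock_mhz * 1000  # scale constant chosen arbitrarily

stock = dynamic_power(1.05, 1900)      # hypothetical stock: ~1.05 V @ 1900 MHz
undervolt = dynamic_power(0.90, 1850)  # hypothetical undervolt: ~0.90 V @ 1850 MHz

savings = 1 - undervolt / stock
print(f"stock: {stock:.0f} W, undervolt: {undervolt:.0f} W, saving: {savings:.0%}")
```

With these made-up figures, giving up under 3% of clock saves close to 30% of dynamic power, which is why undervolting is such an attractive option on a limited PSU.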
 

Just an additional sidenote...
[image attachment]
 
Ok... so almost a paper launch. Why, though?

Could Moore's Law Is Dead and AdoredTV be correct about Nvidia wanting to sell chips to AIBs rather than sell cards themselves because of the cooler's cost?
 

Literally two posts up I posted the confirmation that the quantities were in line with previous launches; this wasn't a paper launch. Watch the GN video on YT.

Could we please ignore that garbage channel and have a reasonable conversation?
 
GN?
And no, I can decide for myself, whether you mind or not. There are always more sides to the story.

With a quote like "in line with previous launches", what kind of numbers are we talking about? Because it doesn't sound like a number Nvidia is proud of, or they would have mentioned it. Sold out in five minutes... aka almost a paper launch. Next batch in December... Fanboy much?
 

It says Nvidia partner, i.e. an AIB. But don't bother reading; listen to all sides of the story at MLID.
 
Something interesting has popped up recently re RDNA 2: rumours around a so-called Infinity Cache, and speculation about whether it allows the use of what would otherwise be seen as an inadequately sized memory bus.

Of course that gets certain types going all Canonn research group and doing deep dives. They have found this just-published patent from AMD, and it makes very interesting reading.


It's something referred to as adaptive cache reconfiguration by clustering. Now, this means nothing to mere mortals like myself, but it's got those who do know very excited about the possibilities, and it seems to be explained a bit better in this research paper by, I believe, the people behind the patent.


In this doc they state that it improves GPU performance by between 22% and 52%, while energy efficiency improves by 49%, for applications that benefit from this type of memory configuration. Deep learning programs are improved by 2.3x.

If this is indeed included in RDNA 2 then it bodes well for how it will perform.
 

Interesting stuff. Basically it takes advantage of the latency tolerance of many workloads to share L1 caches between CUs, for less duplication of coherency data and greater effective capacity when and where needed, with the ability to retain private L1s for CU groups not suffering from high miss rates. It should definitely increase IPC if present on RDNA2 (and given the patent dates, this is quite possible).
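A toy way to picture the capacity win: if several CUs in a cluster privately cache many of the same lines, merging their L1s stores each shared line once and frees the duplicated space. A sketch with made-up numbers (CU count, cache size, and overlap fraction are all assumptions, not anything from the patent):

```python
# Toy model of private vs. clustered/shared L1 capacity (illustrative only).
# With private L1s, each CU keeps its own copy of lines it shares with its
# neighbours; a shared L1 for the cluster stores each distinct line once.

cus = 4                 # hypothetical CUs per cluster
l1_lines = 1024         # hypothetical cache lines per private L1
shared_fraction = 0.5   # assume half of each CU's working set overlaps

unique_per_cu = int(l1_lines * (1 - shared_fraction))  # lines only this CU uses
shared_lines = int(l1_lines * shared_fraction)         # lines all CUs duplicate

# Private caches: duplicates of the shared lines eat capacity.
distinct_private = cus * unique_per_cu + shared_lines

# Shared cache, same raw storage: every stored line is distinct.
distinct_shared = cus * l1_lines

print(f"distinct lines, private L1s: {distinct_private}")
print(f"distinct lines, shared L1:   {distinct_shared}")
```

Under these assumptions the cluster goes from 2560 distinct cacheable lines to 4096 with the same silicon, which is where the effective-capacity gain comes from.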

However, many HPC/DL workloads aren't really comparable to basic rasterization workloads, which are often so parallel that duplication and miss rates are low with private caches (NVIDIA has fixed shared cache clusters that are explicitly broken up into private caches for standard rasterization). I would be hesitant to claim any great gaming graphics performance uplift from this new dynamic cache structure.

What I do think this may do for gaming is improve AI and ray tracing workloads, possibly by allowing the same universal shaders to more readily fill specialized roles on demand, letting AMD get decent ray tracing and adaptive image scaling performance without having to implement dedicated hardware (like RT or Tensor cores) that would just sit idle during conventional rasterization.

I'd think a large last level cache would be significantly more beneficial for mitigating the need for a fast main memory interface for traditional graphics workloads, but there is no reason why both this dynamic L1 clustering and a large LLC couldn't be implemented simultaneously.
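On the LLC point, the bandwidth math is straightforward: DRAM only sees LLC misses, so a large cache with a decent hit rate makes a narrow bus behave like a much wider one. Illustrative numbers only; the bus width, data rate, and hit rate below are assumptions, not actual RDNA2 specs:

```python
# Toy calc: effective bandwidth with a large last-level cache (illustrative only).
# Only LLC misses generate DRAM traffic, so a high hit rate multiplies the
# request bandwidth the shaders can sustain before DRAM saturates.

bus_width_bits = 256       # hypothetical memory bus width
gbps_per_pin = 16          # hypothetical 16 Gbps GDDR6
raw_bw_gbs = bus_width_bits * gbps_per_pin / 8  # raw DRAM bandwidth in GB/s

llc_hit_rate = 0.6         # assumed hit rate for a large on-die cache

# If only (1 - hit_rate) of requests reach DRAM, total serviceable traffic is:
effective_bw_gbs = raw_bw_gbs / (1 - llc_hit_rate)

print(f"raw: {raw_bw_gbs:.0f} GB/s, "
      f"effective with {llc_hit_rate:.0%} LLC hits: {effective_bw_gbs:.0f} GB/s")
```

With these assumptions, a 512 GB/s bus services 1280 GB/s of request traffic, which is how a 256-bit bus could plausibly feed a big GPU.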

On a side note, both of these potential features seem like they would be of big help to memory-hard crypto algorithms...
 
It truly is barren on the 3080 front ....

You can't seem to even pre-order through Scan any more... Overclockers is pre-order only (except EVGA, which you can't even pre-order)... CCL no longer even lists 30x0 cards... Aria is pre-order only... and still no listings on Amazon.co.uk.

Edit:

And the Overclockers 3080 owners thread is hardly full of people showing their cards: https://www.overclockers.co.uk/forums/threads/the-ampere-rtx-3080-owners-thread.18898294/
 
Even if the supply is comparable to prior generations, demand is undoubtedly much higher than typical. It's been more than four years since Pascal, Turing was barely a worthy successor, a large portion of the potential market is spending more time at home due to the pandemic, and because the release coincides with a new console generation, there are an unusually large number of graphically demanding games being released around the same time, which surely drives demand even further.

This also happens to be the first time in several generations that NVIDIA has led a launch with one of their big-die parts. Maxwell, Pascal, and Turing all launched with x104 parts, with the x102s following several months, sometimes almost a year, later. Having potent mainstream parts and AMD's competition already available in quantity when the big-die parts hit surely alleviated some demand pressure.

Yields are always poor on big-die parts, at least initially, which is one of the big reasons NVIDIA hasn't been releasing them first, and why AMD hasn't been making them at all (Fiji notwithstanding).
 