According to Jensen Huang, people are "crazy" to buy a GTX instead of an RTX

Personally, when I upgrade my GPU I will definitely look for a raytracing-capable card... so if I were upgrading today I would likely get a 2080. But with so few games out right now there is no temptation at all for me to upgrade my 1080 Ti, and by the time I do, hopefully AMD will have a compatible part and NVIDIA will have a 7nm one.
 
I'd buy an RTX 2070 not to run games (at least not as a main function) but to use the included tensor cores to run Lc0 and variants.
 
On a semi-related tangent, I'm finding this AMD vs. NVIDIA feature battle both annoying and ironic.

NVIDIA, instead of refreshing a basic post-process sharpen filter that they already have (it's been buried in GFE for years), chooses to push DLSS...which, in hindsight, has exactly zero purpose other than to give tensor cores something to do.

AMD releases RIS, which is a basic post-process sharpen filter that outdoes DLSS in essentially every meaningful way.

NVIDIA, in their 436.02 drivers, finally refreshes their basic post-process sharpen filter to be competitive, making DLSS even more comical, but keeps it part of GFE, which means I'm never going to use it because GFE is total ass.
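To put "basic" in perspective: a post-process sharpen is little more than a tiny convolution per pixel. This is not AMD's RIS/CAS or NVIDIA's Freestyle code, just a rough unsharp-mask-style sketch on a grayscale buffer to show how little work is involved compared to DLSS:

```cpp
// Minimal illustrative post-process sharpen (unsharp mask): out = in + strength * (in - 3x3 blur).
// Purely a sketch; the shipping filters are fancier (contrast-adaptive, etc.), but the idea is this small.
#include <algorithm>
#include <cstdio>
#include <vector>

std::vector<float> sharpen(const std::vector<float>& img, int w, int h, float strength)
{
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = std::clamp(x + dx, 0, w - 1);   // clamp at the image border
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += img[sy * w + sx];
                }
            float blur   = sum / 9.0f;
            float center = img[y * w + x];
            out[y * w + x] = std::clamp(center + strength * (center - blur), 0.0f, 1.0f);
        }
    }
    return out;
}

int main()
{
    // 4x4 test image with a soft horizontal ramp; sharpening steepens the edge.
    std::vector<float> img = {
        0.00f, 0.25f, 0.75f, 1.00f,
        0.00f, 0.25f, 0.75f, 1.00f,
        0.00f, 0.25f, 0.75f, 1.00f,
        0.00f, 0.25f, 0.75f, 1.00f,
    };
    auto out = sharpen(img, 4, 4, 0.8f);
    for (int y = 0; y < 4; ++y) {
        for (int x = 0; x < 4; ++x) std::printf("%.2f ", out[y * 4 + x]);
        std::printf("\n");
    }
}
```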

We also have both companies digging up basic render-queue depth settings they've had for over fifteen years and rebranding them as something special.

AMD never had the setting exposed in their drivers, but their flip-queue size was always adjustable via the registry, or third party utilities. They added some automatic tuning of this a few years ago, and just recently added a toggle to the drivers called 'Radeon Anti-Lag', which, as far as I can tell, just sets the queue to one or zero, which is what I've been manually doing since the Radeon 9700 days in ~2003.

NVIDIA has had a 'max frames render ahead' setting right in their drivers forever, but a few years ago they pulled the option to set it to zero, leaving 1-4 frames intact (and allowing as many as 8 to be set other ways). Now, with the new 436.02 driver, the feature has been rebranded to 'Low Latency Mode' and has only three settings: the default 'off' (three frames), 'on' (a single frame), or 'ultra' (the zero-frame queue depth they pulled years ago).
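For what it's worth, applications can also cap this kind of render-ahead queue themselves through DXGI instead of waiting on driver toggles. A rough sketch (the function name is mine, it assumes a D3D11 device has already been created, and error handling is trimmed):

```cpp
// Sketch: cap the CPU-side frame queue via DXGI's SetMaximumFrameLatency.
// This is the same class of knob as the driver's 'max pre-rendered frames' /
// 'Low Latency Mode' settings, though drivers can still clamp or override it.
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT SetRenderQueueDepth(ID3D11Device* device, UINT maxFramesAhead)
{
    ComPtr<IDXGIDevice1> dxgiDevice;
    HRESULT hr = device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));
    if (FAILED(hr))
        return hr;

    // 1 keeps queueing minimal (roughly what the 'on' / Anti-Lag style toggles
    // amount to); the DXGI default is 3 if the application never sets it.
    return dxgiDevice->SetMaximumFrameLatency(maxFramesAhead);
}
```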

I wonder what feature, that has always been there, will see renewed attention next?
 
I don't know about you, but after seeing this, I believe that the RTX technology might reach maturity next year. And maybe it is time to think about investing in an RTX card. Who knows, maybe even the next Elite expansion might benefit from it?


 
I don't know about you, but after seeing this, I believe that the RTX technology might reach maturity next year. And maybe it is time to think about investing in an RTX card. Who knows, maybe even the next Elite expansion might benefit from it?

NVIDIA's RTX parts with dedicated RT hardware only have a niche as long as ray tracing hasn't hit maturity.

By the end of next year, everyone (AMD, Intel, and NVIDIA) will have new high-end GPUs to sell, and even the ones without dedicated raytracing hardware will have enough low-precision shader grunt to do passable ray tracing, where it's needed most, with those general-purpose shaders. A few years after that, I'd be surprised if NVIDIA even has Tensor or RT cores in their products...it was a rare, protracted lapse in competition that gave them the surplus transistors to make such specialized hardware (which added nothing to rasterization performance) viable.
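To be clear about why that's plausible: the core of ray tracing is ordinary floating-point math that any general-purpose ALU can run; nothing about it requires dedicated units, only the scale and the acceleration structures make it expensive. A minimal ray-sphere intersection, written CPU-side purely for illustration (names are mine):

```cpp
// Minimal ray-sphere intersection: the kind of arithmetic at the heart of any
// ray tracer, whether it runs on RT cores, compute shaders, or a CPU.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Returns the distance along the ray to the nearest hit, or -1 on a miss.
float raySphere(Vec3 origin, Vec3 dir, Vec3 center, float radius)
{
    Vec3 oc  = sub(origin, center);
    float b  = dot(oc, dir);                    // assumes 'dir' is normalized
    float c  = dot(oc, oc) - radius * radius;
    float d  = b * b - c;                       // discriminant (half-b form)
    if (d < 0.0f) return -1.0f;                 // ray misses the sphere
    float t  = -b - std::sqrt(d);               // nearest of the two roots
    return (t >= 0.0f) ? t : -1.0f;
}

int main()
{
    // Ray from the origin straight down +Z toward a unit sphere at z = 5.
    float t = raySphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0f);
    std::printf("hit at t = %.2f\n", t);        // expected: 4.00
}
```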

Minecraft (which uses DX12 DXR) does allow NVIDIA to show off RTX without a huge performance hit, likely even without a huge degree of optimization, because Minecraft's basic graphics are so simplistic. It's also why you won't need an RTX card to do the same thing next year.

As far as Elite: Dangerous goes, I wouldn't get my hopes up. Frontier would need a DX12 renderer, which they are unlikely to prioritize, as that would result in them needing to maintain two versions of the game again. On top of that, ED is already pretty demanding. Even next year, we'd probably be looking at 500+ dollar parts being entry-level stuff for this. I mean, I wouldn't mind being proven wrong, but I just don't see the cost/benefit ratio for Frontier being favorable with regard to adding raytracing to ED.
 
Personally, that Minecraft RTX demonstration just highlights to me how slapping some raytracing atop a game not designed for that doesn't turn out well. Take a look at Quake II's RTX version: compared to the original, it's too bright. Take a look at Minecraft: the RTX version is too dark instead. Sure, it looks great as a novelty, but I think that nobody in survival mode would play with it switched on all the time. Good for showing off your builds in creative mode, I guess.

NVIDIA, instead of refreshing a basic post-process sharpen filter that they already have (it's been buried in GFE for years), chooses to push DLSS...which, in hindsight, has exactly zero purpose other than to give tensor cores something to do.
Also, probably to allow nVidia to milk developers a bit. I mean, they don't train the neural net for free, do they?
 
I would like to see DLSS supersampling used to boost Elite in VR mode.

We have tensor cores sitting there doing very little while the CUDA cores are busy bees.

As we know, when running in VR you supersample; surely we could use the tensor cores via DLSS to boost performance and allow higher levels of supersampling, preventing reprojection or smart smoothing from kicking in.
 
I would like to see DLSS supersampling used to boost Elite in VR mode.

We have tensor cores sitting there doing very little while the CUDA cores are busy bees.

As we know, when running in VR you supersample; surely we could use the tensor cores via DLSS to boost performance and allow higher levels of supersampling, preventing reprojection or smart smoothing from kicking in.

Far simpler sharpen filters that any part can use (AMD has RIS, NVIDIA has a comparable one in Freestyle, and ReShade works on almost anything) are proving to be generally superior to DLSS, and without some radical changes I don't expect to see much DLSS support going forward.

I wouldn't be entirely surprised if the next gen of NVIDIA parts (which will likely show up Q1 of next year to try to beat AMD's big Navi and Intel's Xe to market) completely lacked dedicated tensor cores on the consumer parts. Now that everyone is going to be on the same process, there won't be transistors to waste. RT cores will likely still be needed on Ampere for the hardware ray tracing niche NVIDIA is pushing, but the relatively minimal work Tensor cores do in most scenarios on Turing should be easy enough to do without, especially with the trend of making conventional SMs more capable at low-precision work. As it stands now it takes Turing ~30% more transistors to beat Navi's performance in non-RTX titles. This is not going to be sustainable for another generation.
 
I saw a comparison between classic rendering and the new raytracing and the differences in terms of graphics quality are negligible.
 
I saw a comparison between classic rendering and the new raytracing and the differences in terms of graphics quality are negligible.
But for a non-negligible price in FPS! :LOL:

You know, the old "We're going to take a shortcut. It may be a bit longer, but it's a much worse road" trope.
 
I saw a comparison between classic rendering and the new raytracing and the differences in terms of graphics quality are negligible.

Depends on where you use it and how it's implemented. Ray tracing handles lighting and reflections far better than rasterization, but most game implementations are not particularly good, certainly when the performance cost (even on current RTX hardware) is taken into account.
 
Depends on where you use it and how it's implemented. Ray tracing handles lighting and reflections far better than rasterization, but most game implementations are not particularly good, certainly when the performance cost (even on current RTX hardware) is taken into account.
The implementation is the problem.
According to Huang, "It just works". Yeah, not really. Half of the games that were promised a year ago have implemented it poorly or only partially. The other half just gave up and said it's not worth the trouble.
The only game with fully implemented RTX is Control and that game has been seriously crippled by Remedy's decision to go Epic exclusive. After nVidia poured so much money and know-how into ONE game that would be a showcase, they let them voluntarily throw away three quarters of the market.

So in the end, you have GPUs that sacrifice half of their potential power on features nobody is using, sold at twice the price "because it's the future".
This product is screwed up so badly that nVidia will have a really hard time introducing the next generation, and their only hope is to make it REALLY simple for the devs; otherwise RTX will remain just an overpriced niche, just like current VR.
 
This product is screwed up so badly that nVidia will have a really hard time introducing the next generation, and their only hope is to make it REALLY simple for the devs; otherwise RTX will remain just an overpriced niche, just like current VR.

Raytracing is here to stay. DXR is part of DX12 now, Vulkan has non-proprietary raytracing extensions on the way, and both Sony and MS' next consoles will support hardware raytracing with AMD GPUs.
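That's the key point about DXR: an engine queries a standard D3D12 feature tier and doesn't have to care whose hardware (or compute fallback) sits underneath. A rough sketch of that check, assuming an existing ID3D12Device (the function name is mine):

```cpp
// Sketch: with DXR folded into D3D12, raytracing support is a standard,
// vendor-agnostic feature query rather than anything NVIDIA-specific.
#include <d3d12.h>

bool SupportsHardwareRaytracing(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // TIER_1_0 or better means the runtime/driver expose DXR; whether that is
    // backed by dedicated RT units or a compute path is up to the vendor.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```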

The current RTX parts only have a real niche because they have no competition at the high end, but RTX did let NVIDIA get their foot in the door. How they'll leverage it in the long run remains to be seen.
 
Raytracing is here to stay. DXR is part of DX12 now, Vulkan has non-proprietary raytracing extensions on the way, and both Sony and MS' next consoles will support hardware raytracing with AMD GPUs.

The current RTX parts only have a real niche because they have no competition at the high end, but RTX did let NVIDIA get their foot in the door. How they'll leverage it in the long run remains to be seen.
They're not leveraging it at all, though. I mean, apart from constantly repeating how awesome it is and how everybody should buy it. They're like Apple. "Hey, look, we made a cool thing. Buy it. Because... just buy it. Everybody likes it."
After a year, it remains a pointless gimmick.

I'm not saying it doesn't have a future. But the bottom line is - it's just a lighting effect. It's resource-heavy and harder to implement than the previous tech. That's not how you push technological advancement.
Look at electric cars. Same problem.
When you want people to accept a technological innovation, you need to make it
a) cheaper than the alternative, or
b) easier to use
Otherwise nobody will adopt it.
 

Robert Maynard

Volunteer Moderator
How much of the game industry's reluctance to adopt RTX is likely due to the expected hardware ray-tracing support in the next gen consoles (and later AMD GPUs)?
 
I just want to be able to use all of my RTX card; let's get the tensor cores doing something. CUDA cores all busy busy, tensor cores having holidays.

And if it can boost VR performance, then many people will be happy, as they probably have the new GPU already.
 