You're missing the point that was in the part of the quote that you omitted:
"Upscaling is not a solution because it ruins readability of the HUD in cockpits."
Nah, you can just spend a ton of money on a better graphics card.
DLSS is a technology where the trade-off is fairly simple: huge performance gains for negligible loss, or even sometimes quality gains. It is completely absurd to be 'against' something that does nothing but allow people to enjoy a better balance of performance and quality.
There is a lot of subjectivity in assessing image quality, but DLSS is objectively not the same as native resolution rendering... there is a distinct loss of fidelity, even if perceived quality is equal, or even superior, to some people.
All of these things have uses, but even the best methods of image reconstruction are still in their infancy, if the goal is to transparently increase performance.
While obviously it is 'objectively not the same', I find in some games it already looks better. And I am not the only one, it seems.
Death Stranding PC: how next-gen AI upscaling beats native 4K (www.eurogamer.net)
But more importantly: in, say, RDR2, there is no combination of settings aiming for 60 FPS without DLSS that looks close to what I can have with DLSS. The entire package with DLSS is simply much, much better when aiming for the same performance.
I thought all you had to do was say zoom and enhance!
"You're far from the only one, but I'm inclined to think that most people who think DLSS looks better than native are crazy or blind and that Digital Foundry has a rather irrational bias towards NVIDIA in general and DLSS in particular."

That's a pretty bold thing to say, particularly when you follow it up with: "I don't play Death Stranding, so I haven't done any comparisons on that title."

In any case, I'm gonna leave it on. Even if that makes me crazy, blind or burdened with an irrational bias towards NVIDIA.
"Aren't they actually open sourcing this one to work with any card too?"

I am not a techie like I used to be (I wasn't really sure what this was), so I had to look it up. A quick scan of the search results had several news articles to that effect, with titles something like "Nvidia Whatamacallit technology goes Open Source". I didn't really care to read them, though, and only looked at the search previews.
You're much more knowledgeable about this kind of stuff than I am, but it just doesn't make sense to me that any down-, up- or sideways scaling technique could possibly produce the same image as native. It's like when they came up with all these gimmicks to "enrich" your crappy 128 kbps MP3s. It may sound better, but those bits and bytes are lost for all eternity, and surely there's no way it can ever sound the same again as the source.
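To make that MP3 analogy concrete, here's a toy NumPy sketch (random data standing in for an image; nothing DLSS-specific, purely illustrative) of why a downscale/upscale round trip can't return the original:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a native-resolution image: an 8x8 grid of random "pixels".
native = rng.random((8, 8))

# Downscale 2x by averaging each 2x2 block; the detail is discarded here.
low = native.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Upscale back to 8x8 by naive repetition, i.e. guess that every pixel in
# a block equals the block average.
restored = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)

# The round trip is lossy: the restored image is not the original.
print("mean absolute error:", np.abs(native - restored).mean().round(4))
```

Any upscaler, learned or not, is guessing at what the averaging step threw away; the guess can look plausible, or even nicer, without being the same data.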
"there's a manual"

I did a quick RTFM.
My claim of bias on DF's part comes predominantly from two methodological failings. First, their Death Stranding DLSS comparison, where they insist on comparing against a blurry TAA implementation (which DLSS replaces, not supplements) as 'native', despite TAA not being baked into the game and being quite easy to disable or replace with FXAA. Second, their initial TAAU vs. FSR comparison in Kingshunt, where they failed to notice that TAAU was disabling DoF, making the baseline much sharper and noticeably influencing image quality.
I don't need to pick apart Death Stranding firsthand to know what DF did wrong when they document it themselves. Nor do I think it overbold to doubt that DLSS is doing something radically different for Death Stranding, something it's not doing for any other game, when similar claims of, and similar evidence for, improving quality over native have been rampant about games I do have access to and can see for myself.
Even if by some miracle Death Stranding is such an outlier that it really does look generally better than native with DLSS, without the caveat that you have to crap all over the native resolution scene with bad TAA first, it would still be an outlier, and not representative of DLSS in general.
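For what it's worth, the "what counts as native" problem is easy to show with numbers. A toy sketch, all synthetic (1-D "scanlines", a box filter standing in for TAA softening, PSNR as the quality score; none of this is DF's actual pipeline or any real game): the same soft reconstruction scores very differently depending on which reference you call native:

```python
import numpy as np

def psnr(a, b):
    """Peak signal-to-noise ratio (dB) for signals in [0, 1]; higher is better."""
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(1.0 / mse)

def soften(x):
    """Crude stand-in for TAA-style softening: a 3-tap box filter."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

rng = np.random.default_rng(1)
raw_native = rng.random(1024)             # stand-in for a raw native-res scanline
taa_native = soften(raw_native)           # "native" after a blurry TAA pass
upscaler_out = soften(raw_native) + rng.normal(0.0, 0.01, 1024)  # some soft reconstruction

# The same reconstruction scores far better when the reference is also soft:
print("scored against raw native:", round(psnr(raw_native, upscaler_out), 1))
print("scored against TAA native:", round(psnr(taa_native, upscaler_out), 1))
```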
No, but thinking my claim is somehow bolder than DF's makes me question whether you're being critical enough of theirs.
"In any case, I'm gonna leave it on. Even if that makes me crazy, blind or burdened with an irrational bias towards NVIDIA."

Nobody is perfect.
In theory, an AI can "remember" or "guess" all lost data and restore it on the fly. The question is whether an AI that is so good at restoring details would take up less space on the die than the computational units it replaces.
Except that of course a lot of the work isn't done by your individual GPU but by the deep learning system itself. Simply put, it is a lot of centralized computational power that provides the efficiency boost on the individual level. Which is why AMD FSR is not a direct competitor: without that centralized bit (which is pretty much the heart of DLSS), your only option is some generic 'smart upscaling' technique. Which, while also cool, will never be able to really approximate native.
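A toy sketch of that split, for the curious (all synthetic NumPy; nothing like NVIDIA's actual network): the "centralized" step fits a small upscaling filter from many high-res/low-res pairs, and the "local" step just applies the shipped weights, next to a generic non-learned upscale standing in, far more crudely than FSR really is, for the no-training alternative:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4  # size of the low-res window feeding each prediction

def make_signal(n=64):
    """Smooth-ish synthetic 'scanline', normalised to [0, 1]."""
    s = np.cumsum(rng.normal(size=n))
    return (s - s.min()) / (np.ptp(s) + 1e-9)

def downsample(hi):
    """2x downsample by averaging sample pairs (this is where data is lost)."""
    return hi.reshape(-1, 2).mean(axis=1)

def features_targets(hi):
    """Pair each K-sample low-res window with the two hi-res samples that
    were averaged into the window's second sample."""
    lo = downsample(hi)
    X = np.stack([lo[i:i + K] for i in range(len(lo) - K + 1)])
    Y = np.stack([hi[2 * i + 2:2 * i + 4] for i in range(len(lo) - K + 1)])
    return X, Y

# --- "Centralized" phase: fit upscaling weights once, on lots of data. ---
Xs, Ys = zip(*(features_targets(make_signal()) for _ in range(200)))
weights, *_ = np.linalg.lstsq(np.vstack(Xs), np.vstack(Ys), rcond=None)

# --- "Local" phase: a client only needs the tiny (K x 2) weight matrix. ---
test = make_signal()
X_t, Y_t = features_targets(test)
learned = X_t @ weights                 # reconstruction from shipped weights
naive = np.repeat(downsample(test), 2)  # generic upscale, no training at all

print("learned filter error:", np.abs(learned - Y_t).mean().round(4))
print("naive upscale error: ",
      np.abs(naive[2:2 + 2 * len(X_t)].reshape(-1, 2) - Y_t).mean().round(4))
```

The expensive fitting happens once, offline; the client-side work is just a small matrix multiply, which is the general shape of the train-centrally/infer-locally argument.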
Indeed, but it doesn't have a direct impact on the user experience. I mean, if Nvidia is willing to spend hundreds of thousands or even millions of dollars to train an AI to learn how to apply high-resolution detail in low-resolution images, that's Nvidia's problem.
In my opinion, it is more the use than the method that matters. The two technologies share the majority of the uses for which they will be employed (in support of virtual supersampling, or to compensate for a lack of performance). There are niche sectors (at least for now) in both technologies, but they remain niches: not enough, in my opinion, to keep the two out of direct competition.
To pretty much all gamers, the above is clearly a big improvement in performance with very comparable visual quality, sometimes with enhanced details.
Your take on it simply seems highly hyperbolic to me.
There is a reason why TAA is used by most when given the chance; it's weird to pretend it's a 'methodological flaw' or even a sinister ploy to falsely praise DLSS.
But maybe we are all blind and crazy and biased shills, and you are the only one who rightfully points out how horrible it all really is. Dunno.
"TAA is used because of a real or perceived lack of other options to mitigate jaggies."

Which is pretty relevant when it comes to judging visual quality. People use TAA because, despite its performance impact, it improves visual quality: a bit more blurry, but fewer jaggies, and the trade-off is considered to be worth it.
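For anyone curious what that trade-off looks like mechanically, here's a toy 1-D sketch (synthetic edge, made-up jitter and blend constants, nothing from a real engine): jittered raw frames flicker at a hard edge, while an exponentially blended history is stable but softened:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 32)  # pixel centre positions along one scanline
EDGE = 0.51                    # a hard edge that lands between pixel centres

def render_frame():
    """One aliased frame: the edge sampled with half-a-pixel random jitter."""
    jitter = rng.uniform(-0.5, 0.5) / 31
    return (x + jitter > EDGE).astype(float)

# Temporal accumulation: blend each new frame into a running history buffer.
ALPHA = 0.1  # weight of the new frame; the rest carries over from history
history = render_frame()
for _ in range(120):
    history = ALPHA * render_frame() + (1 - ALPHA) * history

# Raw frames flip pixels at the edge from frame to frame (shimmer/jaggies)...
frames = np.stack([render_frame() for _ in range(16)])
print("pixels that flicker across 16 raw frames:",
      int((frames.max(axis=0) != frames.min(axis=0)).sum()))
# ...while the accumulated image is stable, at the cost of an in-between,
# softened value where the edge used to be crisp.
print("accumulated pixels around the edge:", history[14:19].round(2))
```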