NIS - when NVIDIA fixed what FDev won't.

Yeah guys. If anyone is confused by the slider in the NV control panel - it only controls image sharpening.
To enable NIS you need to be in fullscreen mode and choose the proper scaled resolution for your native resolution.
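If it helps to see the numbers, here's a rough sketch of the arithmetic behind the scaled resolutions the driver adds. The scaling percentages listed are an assumption based on the commonly cited NIS steps, and the driver may round the actual resolutions slightly differently:

```python
# Rough sketch of how NIS-style scaled resolutions relate to native resolution.
# The percentages below are an assumption (commonly cited NIS steps); check
# what your driver actually exposes, and note it may round slightly differently.

NATIVE = (2560, 1440)  # replace with your monitor's native resolution
NIS_SCALES = [0.85, 0.77, 0.73, 0.67, 0.59, 0.50]

def scaled_modes(native, scales):
    """Return the (width, height) render resolution for each scaling step."""
    w, h = native
    return [(round(w * s), round(h * s)) for s in scales]

for scale, (w, h) in zip(NIS_SCALES, scaled_modes(NATIVE, NIS_SCALES)):
    print(f"{scale:.0%}: render {w}x{h}, driver upscales and sharpens to {NATIVE[0]}x{NATIVE[1]}")
```

Pick one of those lower resolutions in the game's fullscreen settings and the driver handles the upscale (and sharpening) back to native.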
 
You're missing the point that was in the part of the quote that you omitted:

"Upscaling is not a solution because it ruins readability of the HUD in cockpits."

Only if the starting resolution of the HUD is too low. You can use the in-game subsampling to keep the HUD at a higher resolution (even native or higher) and then upscale further.
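To put rough numbers on that (all values below are illustrative examples, not recommendations): the HUD renders at the game's output resolution, in-game supersampling only scales the 3D scene, and an external upscaler like NIS or FSR takes the output up to native.

```python
# Illustrative arithmetic only; resolutions and factors are example values.
native = (3840, 2160)   # monitor's native resolution
output = (3200, 1800)   # game output resolution - the HUD renders at this
supersampling = 0.75    # in-game supersampling ("subsampling") factor

scene = (round(output[0] * supersampling), round(output[1] * supersampling))

print(f"3D scene rendered at: {scene[0]}x{scene[1]}")    # 2400x1350
print(f"HUD rendered at:      {output[0]}x{output[1]}")  # 3200x1800
print(f"Upscaled to native:   {native[0]}x{native[1]}")  # 3840x2160
```

So the expensive 3D scene drops well below native while the HUD stays at a resolution that keeps the text readable.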

Nah, you can just spend a ton of money on a better graphics card.

As of Update 8, EDO isn't meaningfully GPU limited for my better cards and I'm not building a brand new Alder Lake system while DDR5 is still in its infancy--nor redoing the wiring in the front half of my house to free up enough current to use one without starting a fire--to get another five FPS in CPU limited scenarios.

I do expect to have one of the new AMD V-cache parts in 3-6 months, which might help, but without more improvements on Frontier's end, there isn't really a meaningful upgrade path for this game without delving into the absurd.

DLSS is a technology where the trade-off is fairly simple: huge performance gains for negligible loss, or even sometimes quality gains. It is completely absurd to be 'against' something that does nothing but allow people to enjoy a better balance of performance and quality.

I'm all for options and I will use various forms of upscaling/reconstruction when the situation calls for it, but I still haven't seen any situation where DLSS was actually an overall uplift in image quality vs. native. It will selectively replace some elements with higher than native quality ones, but the bulk of the scene, even in the best implementations, even using the absolute newest DLSS dlls (2.3.4 as of this post), still looks like a blurry mess to me. A small amount of post processing sharpening can balance this out to a degree, but it's still not a convincing replacement for native, and I usually end up turning DLSS off. There is a lot of subjectivity in assessing image quality, but DLSS is objectively not the same as native resolution rendering...there is a distinct loss of fidelity, even if perceived quality is equal, or even superior, to some people.

All of these things have uses, but even the best methods of image reconstruction are still in their infancy, if the goal is to transparently increase performance.
 
There is a lot of subjectivity in assessing image quality, but DLSS is objectively not the same as native resolution rendering...there is a distinct loss of fidelity, even if perceived quality is equal, or even superior, to some people.
While obviously it is 'objectively not the same', I find in some games it already looks better. And I am not the only one, it seems.

But more importantly: there is no combination of settings in, say, RDR2 where I aim for 60FPS without DLSS that looks close to what I can have with DLSS. The entire package with DLSS is simply much, much better when aiming for the same performance.
 
I installed the new NVIDIA driver and rebooted, but I can't find the new downscaling option in the NVIDIA control panel. Still only DSR and sharpening available.
 
While obviously it is 'objectively not the same', I find in some games it already looks better. And I am not the only one, it seems.

You're far from the only one, but I'm inclined to think that most people who think DLSS looks better than native are crazy or blind and that Digital Foundry has a rather irrational bias towards NVIDIA in general and DLSS in particular.

I don't play Death Stranding, so I haven't done any comparisons on that title. However, I do have a few other titles that are considered good DLSS 2.x implementations, and I usually end up turning it off, even if I have to reduce other settings to compensate. Cyberpunk 2077 (another title that many people have told me can look as good as native with DLSS) with a DLL swap (which removes almost all of the DLSS induced artifacting), for example...I can run "1440p" ultra with maxed out ray tracing and hold ~60fps minimums on my OCed RTX 3080, if I use DLSS quality...but the game looks considerably better to me, and the periphery of most any given scene is objectively sharper, if I disable all RT except reflections, turn down the non-RT screen space reflection quality to high, and disable DLSS. It also runs a bit faster...RT burns a huge amount of performance, even on GA102. I can notice the reduction in global illumination quality, if I really look for it, but I notice the blur that DLSS induces everywhere, immediately. It's really a night and day difference, and this is one of the better implementations I've encountered.
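For anyone curious what "a DLL swap" involves: it's just replacing the DLSS runtime the game shipped with a newer build. A minimal sketch of the idea; the paths below are hypothetical, the runtime is typically named nvngx_dlss.dll but its location varies by game, and you should always keep a backup of the original:

```python
# Minimal sketch of a DLSS DLL swap. Paths are hypothetical examples;
# the runtime is typically named nvngx_dlss.dll but its location varies by game.
import shutil
from pathlib import Path

game_dir = Path(r"C:\Games\Cyberpunk 2077\bin\x64")  # hypothetical install path
new_dll = Path(r"C:\Downloads\nvngx_dlss.dll")       # newer DLSS runtime you downloaded

target = game_dir / "nvngx_dlss.dll"
backup = game_dir / "nvngx_dlss.dll.bak"

if not backup.exists():
    shutil.copy2(target, backup)  # keep the original so the swap is reversible
shutil.copy2(new_dll, target)     # overwrite with the newer runtime
print(f"Replaced {target} (original backed up to {backup})")
```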

But more importantly: there is no combination of settings in, say, RDR2 where I aim for 60FPS without DLSS that looks close to what I can have with DLSS. The entire package with DLSS is simply much, much better when aiming for the same performance.

I've been in the same situation and when judging upscaling methods against each other, a good implementation of DLSS is the best way to do it, if it's available.

It's not the utility of the feature I refute, just the idea that it has no significant trade-offs. A few 16k textures and sharpened lines in an otherwise blurry 1440p render is not universally better than a native 4k image, and I am one of the people that is going to prefer the native image, for the same reasons I disable depth-of-field and motion blur. DLSS or even spatial upscaling can be the best option, even for me, in some cases...but, as of yet, has never come close to fooling me that it's not upscaled from a lower resolution.

I thought all you had to do was say zoom and enhance!

Even CSI's magical depiction couldn't do that in real time.
 
You're far from the only one, but I'm inclined to think that most people who think DLSS looks better than native are crazy or blind and that Digital Foundry has a rather irrational bias towards NVIDIA in general and DLSS in particular.
That's a pretty bold thing to say, particularly when you follow it up with:

I don't play Death Stranding, so I haven't done any comparisons on that title.
In any case, I'm gonna leave it on. Even if that makes me crazy, blind or burdened with an irrational bias towards NVIDIA. :)
 
That's a pretty bold thing to say

My claim of bias on DF's part comes predominantly from their methodological failings with their Death Stranding DLSS comparison, where they insist on comparing against a blurry TAA (which DLSS replaces, not supplements) implementation for 'native' (despite TAA not being baked into the game and being quite easy to disable or replace with FXAA), and their initial TAAU vs. FSR comparison in Kingshunt, where they failed to notice that TAAU was disabling DoF, making baseline much sharper, and noticeably influencing image quality.

I don't need to pick apart Death Stranding first hand to know what DF did wrong when they document it themselves. Nor do I think it overbold to be doubtful that DLSS is doing something radically different for Death Stranding, something that it's not doing for any other game, when similar claims of, and similar evidence for, improving quality over native have been rampant about games I do have access to and can see for myself.

Even if by some miracle Death Stranding is such an outlier that it really does look generally better than native with DLSS, without the caveat that you have to crap all over the native resolution scene with bad TAA first, it would still be an outlier, and not representative of DLSS in general.

Even if that makes me crazy, blind or burdened with an irrational bias towards NVIDIA. :)

No, but thinking my claim is somehow bolder than DF's makes me question whether you're being critical enough of DF's.
 
Just a tip:
Configure Sharpening to 0%.
You'll get a good boost in FPS, and the image is almost the same.

 
Aren't they actually open sourcing this one to work with any card too?
I am not a techie like I used to be (I wasn't really sure what this was), so I had to look it up, and a quick scan of the search results had several news articles to that effect, with the title being something like "Nvidia Whatamacallittechnology goes Open Source". I didn't really care to read it though, and only looked at the search preview.
 
You're far from the only one, but I'm inclined to think that most people who think DLSS looks better than native are crazy or blind and that Digital Foundry has a rather irrational bias towards NVIDIA in general and DLSS in particular.
You're much more knowledgeable about this kind of stuff than I am, but it just doesn't make sense to me that any down-, up- or sideways scaling technique could possibly produce the same image as native. It's like when they came up with all these gimmicks to "enrich" your crappy 128 kbps MP3s. Although it may sound better, those bits and bytes are lost for all eternity and surely there's no way it can ever sound the same again as the source.
 
I'll give it an "it works" now that I did a quick RTFM and downloaded the updated GeForce Experience.

Been playing Odyssey for a few hours loading some power tat onto my FC for the CG; with it set to the "ultra" equivalent, FPS is at least as good as the AMD offering and the picture on screen looks good enough to me.
 
My claim of bias on DF's part comes predominantly from their methodological failings with their Death Stranding DLSS comparison, where they insist on comparing against a blurry TAA (which DLSS replaces, not supplements) implementation for 'native' (despite TAA not being baked into the game and being quite easy to disable or replace with FXAA), and their initial TAAU vs. FSR comparison in Kingshunt, where they failed to notice that TAAU was disabling DoF, making baseline much sharper, and noticeably influencing image quality.

I don't need to pick apart Death Stranding first hand to know what DF did wrong when they document it themselves. Nor do I think it overbold to be doubtful that DLSS is doing something radically different for Death Stranding, something that it's not doing for any other game, when similar claims of, and similar evidence for, improving quality over native have been rampant about games I do have access to and can see for myself.

Even if by some miracle Death Stranding is such an outlier that it really does look generally better than native with DLSS, without the caveat that you have to crap all over the native resolution scene with bad TAA first, it would still be an outlier, and not representative of DLSS in general.

No, but thinking my claim is somehow bolder than DF's makes me question whether you're being critical enough of DF's.
Source: https://www.youtube.com/watch?v=zm44UVR4S9g


All I am saying is that quips like "most [who disagree with me] are blind or crazy or biased" and "[DLSS offers] significant tradeoffs", "night & day difference" and so on, when put into the context of DLSS, seem like corksniffing elitism to me.

To pretty much all gamers the above clearly is a big improvement in performance with extremely comparable visual quality, sometimes with enhanced details. Your take on it simply seems highly hyperbolic to me. It has nothing to do with DF, that's just one outlet that commented on it. Check the comments in the YouTube video. There is a reason why TAA is used by most when given the chance; it's weird to pretend it's a 'methodological flaw' or even a sinister ploy to falsely praise DLSS.

But maybe we are all blind and crazy and biased shills, and you are the only one who rightfully points out how horrible it all really is. Dunno. 🤷‍♂️
 
In any case, I'm gonna leave it on. Even if that makes me crazy, blind or burdened with an irrational bias towards NVIDIA. :)
Nobody is perfect 😊

You're much more knowledgeable about this kind of stuff than I am, but it just doesn't make sense to me that any down-, up- or sideways scaling technique could possibly produce the same image as native. It's like when they came up with all these gimmicks to "enrich" your crappy 128 kbps MP3s. Although it may sound better, those bits and bytes are lost for all eternity and surely there's no way it can ever sound the same again as the source.
In theory, an AI can "remember" or "guess" all lost data and restore it on the fly. The question is whether an AI that is so good at restoring details would take up less space on the die than the computational units it replaces.
 
In theory, an AI can "remember" or "guess" all lost data and restore it on the fly. The question is whether an AI that is so good at restoring details would take up less space on the die than the computational units it replaces.
Except that of course a lot of the work isn't done by your individual GPU but by the deep learning system itself. Simply put, it is a lot of centralized computational power that provides the efficiency boost on the individual level. Which is why AMD FSR is not a direct competitor, because without that centralized bit (which is pretty much the heart of DLSS) your only option is some generic 'smart upscaling' technique. Which, while also cool, will never be able to really approximate native.
 
Except that of course a lot of the work isn't done by your individual GPU but by the deep learning system itself. Simply put, it is a lot of centralized computational power that provides the efficiency boost on the individual level.
Indeed, but it doesn't have a direct impact on the user experience. I mean, if Nvidia is willing to spend hundreds of thousands or even millions of dollars to train an AI to learn how to apply high-resolution detail in low-resolution images, that's Nvidia's problem.

Which is why AMD FSR is not a direct competitor, because without that centralized bit (which is pretty much the heart of DLSS) your only option is some generic 'smart upscaling' technique. Which, while also cool, will never be able to really approximate native.
In my opinion, it is more the use than the method that matters. The two technologies share the majority of the uses for which they will be employed (supporting virtual super sampling or compensating for a lack of performance). There are niche sectors (at least for now) for both technologies, but they remain niches, insufficient in my opinion to keep the two out of direct competition.
 
To pretty much all gamers the above clearly is a big improvement in performance with extremely comparable visual quality, sometimes with enhanced details.

If I had a choice between TAA and DLSS Quality in Death Stranding, I'd take DLSS Quality every single time, at the same output resolution, because it's both faster and looks better to me.

However, pretending that those are the only two options is grossly deceptive. Much of the DLSS improvement isn't coming from DLSS at all, it's coming from DLSS turning off TAA (something that's apparent from the AA comparisons done by DF and others) and I wouldn't be using TAA in Death Stranding, unless I had control over the specific TAA parameters and could tune it to be usable (like I have to do to most UE4 games).

Your take on it simply seems highly hyperbolic to me.

Your interpretation of my take seems hyperbolic to me.

There is a reason why TAA is used by most when given the chance; it's weird to pretend it's a 'methodological flaw' or even a sinister ploy to falsely praise DLSS.

TAA is used because of a real or perceived lack of other options to mitigate jaggies.

In a game where TAA is both optional and introduces serious motion aliasing while blurring the whole image (common problems with TAA, but they don't have to be as bad as what I'm seeing in these Death Stranding comparisons), using it as the only comparison vs. DLSS is definitely a methodological flaw, weird or not.

I never implied such comparisons were a deliberate ploy, but whatever the reason for doing them, they do obscure the main problem (blur) with DLSS by only comparing it against something that is even worse.

But maybe we are all blind and crazy and biased shills, and you are the only one who rightfully points out how horrible it all really is. Dunno. 🤷‍♂️

I didn't call you or anyone else a shill, but if you can't see how comparing a shoddy TAA preset to DLSS is making DLSS look better than it otherwise would, blind seems apt.

The question is whether an AI that is so good at restoring details would take up less space on the die than the computational units it replaces.

NVIDIA had to do something with those tensor cores, and they were going to be there, DLSS or not, because the vastly higher margin workstation parts (built mostly on the same GPUs for reasons of economies of scale) were going to use them for other AI work.

Except that of course a lot of the work isn't done by your individual GPU but by the deep learning system itself. Simply put, it is a lot of centralized computational power that provides the efficiency boost on the individual level. Which is why AMD FSR is not a direct competitor, because without that centralized bit (which is pretty much the heart of DLSS) your only option is some generic 'smart upscaling' technique. Which, while also cool, will never be able to really approximate native.

DLSS 2.0 drops the per-app pretraining, so it is the end-user GPU that is doing most of the relevant work.

Purely spatial upscaling probably won't match temporal upscaling, but where the upscaling work is done is less relevant, and both will ultimately be judged on the end results.
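If it helps to picture that distinction, here's a toy, purely conceptual sketch (emphatically not NVIDIA's or AMD's actual algorithms): a spatial upscaler only ever sees the current low-res frame, while a temporal upscaler also gets the previous output plus motion information, which is where the extra detail comes from.

```python
# Toy conceptual sketch only: not either vendor's real algorithm.
# A spatial upscaler works from one low-res frame; a temporal upscaler also
# reuses the previous output plus motion information, so it can accumulate
# detail that a single frame doesn't contain.
import numpy as np

def spatial_upscale(frame, factor):
    """FSR-1.0-style stand-in: enlarge a single frame (nearest-neighbour here)."""
    return np.kron(frame, np.ones((factor, factor)))

def temporal_upscale(frame, history, motion_px, factor, blend=0.8):
    """DLSS-2.x-style stand-in: reproject the last output by the motion vector,
    then blend it with the freshly upscaled frame (a trained network does this
    adaptively in the real thing)."""
    current = spatial_upscale(frame, factor)
    reprojected = np.roll(history, shift=motion_px, axis=(0, 1))
    return blend * reprojected + (1 - blend) * current

# Tiny usage example with random "frames".
low = np.random.rand(4, 4)
prev_output = spatial_upscale(np.random.rand(4, 4), 2)
out = temporal_upscale(low, prev_output, motion_px=(1, 0), factor=2)
print(out.shape)  # (8, 8)
```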
 
TAA is used because of a real or perceived lack of other options to mitigate jaggies.
Which is pretty relevant when it comes to judging visual quality. People use TAA because, despite its performance impact, it improves visual quality. A bit more blurry, but fewer jaggies, and the trade-off is considered to be worth it.

DLSS vs TAA: not fair, TAA is so blurry!
DLSS vs FXAA: not fair, FXAA has terrible jaggies!
DLSS vs nothing: not fair, you didn't even use any anti-aliasing technique!

People use TAA because it looks better than FXAA. People use DLSS because it looks better than TAA. When comparing DLSS in motion it is absolutely fair to use TAA.

Anyway, it's the weekend. Time for some Factorio, which neither has nor needs DLSS.
 