On a semi-related tangent, I'm finding this AMD vs. NVIDIA feature battle both annoying and ironic.
NVIDIA, instead of refreshing a basic post-process sharpen filter that they already have (it's been buried in GFE for years), chooses to push DLSS...which, in hindsight, has exactly zero purpose other than to give tensor cores something to do.
AMD releases RIS (Radeon Image Sharpening), which is a basic post-process sharpen filter that outdoes DLSS in essentially every meaningful way.
NVIDIA, in their 436.02 drivers, finally refreshes their basic post-process sharpen filter to be competitive, making DLSS even more comical, but keeps it part of GFE, which means I'm never going to use it because GFE is total ass.
We also have both companies digging up basic render-queue-depth settings they've had for over fifteen years and rebranding them as something special.
AMD never exposed the setting in their drivers, but their flip-queue size was always adjustable via the registry or third-party utilities. They added some automatic tuning of it a few years ago, and just recently added a driver toggle called 'Radeon Anti-Lag', which, as far as I can tell, just sets the queue to one or zero, which is what I've been doing manually since the Radeon 9700 days in ~2003.
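For anyone curious what that manual tweak actually looked like, here's a rough Win32 sketch of the kind of registry poke old third-party tools did. The exact key path, the "0000" adapter subkey, and the REG_BINARY-ASCII encoding are assumptions based on commonly cited locations for the legacy drivers, so check your own registry before writing anything; newer drivers may ignore it entirely.

/* Minimal sketch: set the legacy AMD "FlipQueueSize" value by hand.
 * Needs admin rights. Build (MSVC): cl flipqueue.c advapi32.lib */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Display-adapter class key; "0000" = first adapter (assumption),
     * "UMD" = the user-mode driver subkey where the legacy 3D tweaks lived. */
    const char *path =
        "SYSTEM\\CurrentControlSet\\Control\\Class\\"
        "{4d36e968-e325-11ce-bfc1-08002be10318}\\0000\\UMD";
    /* Older drivers stored UMD values as null-terminated ASCII in REG_BINARY. */
    const BYTE one[] = { '1', 0 };

    HKEY key;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, path, 0, KEY_SET_VALUE, &key) != ERROR_SUCCESS) {
        fprintf(stderr, "UMD key not found -- the path differs on your driver\n");
        return 1;
    }
    LONG rc = RegSetValueExA(key, "FlipQueueSize", 0, REG_BINARY, one, sizeof(one));
    RegCloseKey(key);

    printf(rc == ERROR_SUCCESS ? "flip queue set to 1\n" : "write failed\n");
    return rc == ERROR_SUCCESS ? 0 : 1;
}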
NVIDIA has had a 'max frames render ahead' setting right in their drivers forever, but a few years ago they pulled the option to set it to zero, leaving the 1-4 frame options intact (and allowing as many as 8 to be set other ways). Now, with the new 436.02 driver, the feature has been rebranded to 'Low Latency Mode' and has only three settings: the default 'off' (three frames), 'on' (a single frame), or 'ultra' (the zero-frame queue depth they pulled years ago).
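As for the 'other ways': the pre-render limit has been reachable through NVIDIA's public driver-settings API (NVAPI DRS) the whole time, which is what tools like Nvidia Profile Inspector sit on top of. Rough sketch below, applied to the global base profile; it assumes the NVAPI SDK headers, and PRERENDERLIMIT_ID is the setting ID from NvApiDriverSettings.h (assuming your SDK revision still carries it).

/* Minimal sketch: set the global pre-rendered-frames limit to 1 via NVAPI DRS.
 * Build: link nvapi64.lib (or nvapi.lib for 32-bit) from the NVAPI SDK. */
#include <stdio.h>
#include "nvapi.h"
#include "NvApiDriverSettings.h"

int main(void)
{
    NvDRSSessionHandle session;
    NvDRSProfileHandle profile;

    if (NvAPI_Initialize() != NVAPI_OK ||
        NvAPI_DRS_CreateSession(&session) != NVAPI_OK ||
        NvAPI_DRS_LoadSettings(session) != NVAPI_OK ||
        NvAPI_DRS_GetBaseProfile(session, &profile) != NVAPI_OK) {
        fprintf(stderr, "NVAPI setup failed\n");
        return 1;
    }

    /* Pre-render limit of 1 frame, i.e. the old manual low-latency tweak
     * that the new 'on' setting appears to map to. */
    NVDRS_SETTING setting = {0};
    setting.version         = NVDRS_SETTING_VER;
    setting.settingId       = PRERENDERLIMIT_ID;
    setting.settingType     = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = 1;

    NvAPI_Status rc = NvAPI_DRS_SetSetting(session, profile, &setting);
    if (rc == NVAPI_OK)
        rc = NvAPI_DRS_SaveSettings(session);

    NvAPI_DRS_DestroySession(session);
    printf(rc == NVAPI_OK ? "pre-render limit set to 1\n" : "failed to apply\n");
    return rc == NVAPI_OK ? 0 : 1;
}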
I wonder what long-standing feature will see renewed attention next?