New RTX GPUs

Of course, for games that use DLSS 4 and the new features exclusive to the RTX 50 series, it will be a big improvement.
But Elite doesn't have this. Nor will it ever with the current in-game engine.
So for us CMDRs there's absolutely no point in touching a 50 series GeForce GPU, unless money is no object and you're just in for the latest trend.
A 4090 FE would, for me, be the pinnacle, especially in my beloved VR.
 
[Screenshot: Nvidia confirms UK pricing for their GeForce RTX 50 series GPUs]

DLSS implementation in CMDRs' fave games vs. 30/40xx series value? Regardless, for us gamers a Bitcoin third-wave crash would be most welcome at some point this year!
 
I'm hoping a 5090 will be at least 2x my 3090 on raw performance because then I might do a boredom purchase. Although £2k for only doubling is probably a bit of a stretch for me.
 
But Elite doesn't have this. Nor will it ever with the current in-game engine.
I wonder: does DLSS have to be supported natively in the game engine, or can it be applied as a post-process, similar to what OpenComposite did for FFV? In my small brain, it would be reasonable to render the game at a lower resolution and apply DLSS as a post-process technique, but doubtless I am missing a slew of technical considerations. I've messed with shader code in HLSL before, but I've never even thought to tackle DLSS.
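For what it's worth, a purely spatial upscale really is simple enough to bolt on from outside the engine, which is why injectors can do it. A minimal HLSL sketch, assuming a hypothetical injected pass with made-up resource names:

```hlsl
// Minimal sketch of a hypothetical injected post-process upscale: the game has
// rendered into LowResColor at reduced resolution and we simply resample it to
// the output target. No history and no motion data are needed, which is why
// purely spatial upscalers can be injected without engine support.
Texture2D<float4> LowResColor : register(t0);
SamplerState      LinearClamp : register(s0);

float4 SpatialUpscalePS(float4 pos : SV_Position,
                        float2 uv  : TEXCOORD0) : SV_Target
{
    // One filtered fetch per output pixel; real spatial upscalers add
    // edge-aware filtering and sharpening, but the plumbing is the same.
    return LowResColor.SampleLevel(LinearClamp, uv, 0);
}
```

Temporal techniques like DLSS need considerably more input than this, which is where the trouble starts.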
 
the 4090 can already handle Elite quite tidily at 4k even at higher refresh rates without an upscaler.

It can, but since the only really effective AA solution for Odyssey is supersampling, 4k internal resolution is inadequate if one wants to address aliasing.
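To put rough numbers on it (the 1.5x factor is just illustrative, not anyone's actual setting): native 4k is 3840 × 2160 ≈ 8.3 million pixels per frame, while 1.5x supersampling per axis means rendering 5760 × 3240 ≈ 18.7 million, i.e. roughly 2.25x the shading work before the image is downsampled back to 4k.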

So for us CMDRs there's absolutely no point in touching a 50 series GeForce GPU.

A limited improvement, but not no improvement.

Regardless, for us gamers a Bitcoin third-wave crash would be most welcome at some point this year!

Not sure why you'd think a crypto crash would matter.

Bitcoin mining hasn't been profitable on GPUs in a decade, and with Ethereum having moved to proof-of-stake more than two years ago, the sum total of GPU-minable crypto market share is small and utterly irrelevant to GPU demand.

GPU prices are high because supply is low and competition is almost nonexistent. Supply isn't low because of unusual demand for client GPUs; it's low because it's foolish to build large quantities of client GPUs when datacenter AI accelerators have dramatically higher profit margins. Competition is nonexistent because of the conscious parallelism that has formed between AMD, Intel, and NVIDIA. They've tacitly agreed to strategically neglect certain market segments to avoid directly competing with each other.

A 2,000-2,500 USD RTX 5090, believe it or not, is barely worth building from NVIDIA's perspective. A B100 chip, which costs about the same to make, is going to go into parts that sell for upwards of 30k USD and are in such high demand that there's an 18+ month backlog of orders.

The only reason NVIDIA even makes consumer video cards today is that they need to maintain mindshare in case something happens to AI demand.

I wonder: does DLSS have to be supported natively in the game engine, or can it be applied as a post-process, similar to what OpenComposite did for FFV? In my small brain, it would be reasonable to render the game at a lower resolution and apply DLSS as a post-process technique, but doubtless I am missing a slew of technical considerations. I've messed with shader code in HLSL before, but I've never even thought to tackle DLSS.

DLSS is fundamentally an advanced form of TAA. It requires accurate motion vectors to work correctly. Elite: Dangerous doesn't expose these motion vectors natively, and while you can add post-process TAA in conjunction with motion estimation shaders, its performance cost is high and its quality is low compared to native engine support.
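To make the motion-vector requirement concrete, this is roughly what an engine has to produce per pixel. A minimal HLSL sketch with invented constant-buffer and semantic names, not anything from the COBRA engine or the DLSS SDK:

```hlsl
// Hypothetical motion-vector pass: each object is rasterised with both this
// frame's and last frame's transforms, and the pixel shader outputs the
// screen-space delta. This per-pixel velocity is what DLSS/TAA consume.
cbuffer FrameCB : register(b0)
{
    float4x4 CurrWorldViewProj; // this frame's transform for the object
    float4x4 PrevWorldViewProj; // last frame's transform for the object
};

struct VSOut
{
    float4 posCS    : SV_Position;
    float4 currClip : CLIP_CURR;
    float4 prevClip : CLIP_PREV;
};

VSOut MotionVS(float3 posOS : POSITION)
{
    VSOut o;
    o.currClip = mul(float4(posOS, 1.0), CurrWorldViewProj);
    o.prevClip = mul(float4(posOS, 1.0), PrevWorldViewProj);
    o.posCS    = o.currClip;
    return o;
}

float2 MotionPS(VSOut i) : SV_Target
{
    // Perspective divide, then the difference in NDC is the motion vector.
    // Skinned or animated geometry would also need last frame's vertex data.
    float2 currNDC = i.currClip.xy / i.currClip.w;
    float2 prevNDC = i.prevClip.xy / i.prevClip.w;
    return currNDC - prevNDC;
}
```

The engine has to render (or at least transform) everything twice in this pass, which is why adding it after the fact is neither cheap nor accurate.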
 
DLSS is fundamentally an advanced form of TAA.
I knew this part but... literally I knew this part. That is where my knowledge ends. :)
It requires accurate motion vectors to work correctly. Elite: Dangerous doesn't expose these motion vectors natively, and while you can add post-process TAA in conjunction with motion estimation shaders, its performance cost is high and its quality is low compared to native engine support.
So does this also mean DLSS is challenging for a scene where a large number of complex objects are moving completely differently to one another and have nothing much else in common in terms of rendering? Because to me, that sounds like an on-foot battle in a Settlement, so is that just basically never going to benefit from DLSS even if FDev tried and exposed the vectors?
 
Why am I not shocked? Nervidia seem to have this issue on every release.
This time around, it's just a restatement of the same thing: nVidia are not going to devote resources to a silicon market segment (gamers) which doesn't even show up as a rounding error when compared to other segments (cloud provisioning, ML, AI).

nVidia don't have a supply problem (other than the entire world having a supply problem at the cutting edge of the 4nm process, because there just aren't that many foundries that can deliver it).

They do have a problem tying that supply up for nerds, who are 1% of the population at most, who have (only) two grand lying around, and whose purchase will impress only 1% of that 1% of the world population at most. They have other fish to fry.

Plus, and I don't think anyone has made this point in the thread yet, it's really hard to build the rest of your rig well enough to saturate a 4090; and on top of that, it's even harder to create a game engine which will drive that rig hard enough to then drive the GPU hard enough. Obviously whacking up the pixel count does help, but then you run into the problem that 5k on a 32" already looks like actual paper, so it's another situation where no-one's actually going to buy it, because why would they?
 
Heck, in these reality-leaving-satire-far-far-far-behind-in-the-rear-view-mirror times, if threats of trade wars and actual wars from toddlers handed machine guns come to be, I wouldn't put it out of the question that not only could world prices soon get further inflated by tariffs, but those rare cutting-edge foundries, many of which are located in... contested areas, could end up under blockade or worse, causing a multi-year cutting-edge silicon drought. :p
 
Heck, in these reality-leaving-satire-far-far-far-behind-in-the-rear-view-mirror times, if threats of trade wars and actual wars from toddlers handed machine guns come to be, I wouldn't put it out of the question that not only could world prices soon get further inflated by tariffs, but those rare cutting-edge foundries, many of which are located in... contested areas, could end up under blockade or worse, causing a multi-year cutting-edge silicon drought. :p
You work for Nvidia, are trying to make us all buy 5000 series GPUs, and I claim my £5 reward.
 
So does this also mean DLSS is challenging for a scene where a large number of complex objects are moving completely differently to one another and have nothing much else in common in terms of rendering? Because to me, that sounds like an on-foot battle in a Settlement, so is that just basically never going to benefit from DLSS even if FDev tried and exposed the vectors?

DLSS is prone to ghosting artifacts in high-motion scenes, as are most other temporal solutions. This can be substantially mitigated with proper tuning, but there are usually edge (and occasionally not-so-edge) cases where the artifacts become obvious.

The net benefit of any particular set of trade-offs is pretty subjective.
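For anyone curious where that tuning actually lives, here is a rough HLSL sketch of the history clamp a hand-written temporal resolve uses. DLSS replaces these heuristics with a trained model, and every name below is invented for illustration:

```hlsl
// Rough sketch of a hand-written temporal resolve; not DLSS itself, but the
// ghosting-vs-shimmer trade-off is tuned in the same places.
cbuffer ResolveCB : register(b0)
{
    float2 TexelSize;     // 1 / output resolution
    float  HistoryWeight; // e.g. 0.9: more history = smoother but laggier
};

Texture2D<float4> CurrColor    : register(t0);
Texture2D<float4> HistoryColor : register(t1);
Texture2D<float2> MotionTex    : register(t2); // per-pixel motion vectors (UV units)
SamplerState      LinearClamp  : register(s0);

float4 TemporalResolvePS(float4 pos : SV_Position,
                         float2 uv  : TEXCOORD0) : SV_Target
{
    float4 curr    = CurrColor.SampleLevel(LinearClamp, uv, 0);
    float2 motion  = MotionTex.SampleLevel(LinearClamp, uv, 0);
    float4 history = HistoryColor.SampleLevel(LinearClamp, uv - motion, 0);

    // Clamp the reprojected history to the current frame's 3x3 neighbourhood.
    // A tight clamp suppresses ghosting on fast-moving objects (the settlement
    // firefight case) but brings back shimmer; a loose clamp is stable but smears.
    float4 nMin = curr, nMax = curr;
    [unroll] for (int y = -1; y <= 1; ++y)
    [unroll] for (int x = -1; x <= 1; ++x)
    {
        float4 s = CurrColor.SampleLevel(LinearClamp, uv + float2(x, y) * TexelSize, 0);
        nMin = min(nMin, s);
        nMax = max(nMax, s);
    }
    history = clamp(history, nMin, nMax);

    // Blend weight is the other knob: more history is smoother but laggier.
    return lerp(curr, history, HistoryWeight);
}
```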
 
Heck, in these reality-leaving-satire-far-far-far-behind-in-the-rear-view-mirror times, if threats of trade wars and actual wars from toddlers handed machine guns come to be, I wouldn't put it out of the question that not only could world prices soon get further inflated by tariffs, but those rare cutting-edge foundries, many of which are located in... contested areas, could end up under blockade or worse, causing a multi-year cutting-edge silicon drought. :p
That would only cause a 4nm/3nm drought though, and the world will continue to function on 14nm+++ if it really has to, let's be honest. The last time we had a problem was not on the bleeding edge but rather on embedded silicon that's no smarter than a 6502, and that mistake won't happen again; governments have been quietly establishing sovereign infrastructure these last couple of years, and automotive now take JIT risks a lot more seriously than they did five years ago. You can thank fires, earthquakes, the Ever Given, and a pandemic happening all in a row, and to be fair to everyone involved, having those happen four years, one after another, would not really be on anyone's "likely" risk-scoring card. One has to wonder if a faction has been posting missions on a bulletin board somewhere.

Plus, of course, if one G7 nation does cut itself out of the market with tariffs, it de-constrains supply for the other six, does it not?
 
DLSS is fundamentally an advanced form of TAA. It requires accurate motion vectors to work correctly. Elite: Dangerous doesn't expose these motion vectors natively, and while you can add post-process TAA in conjunction with motion estimation shaders, its performance cost is high and its quality is low compared to native engine support.
I started to wonder about this, but might I humbly submit the challenge that motion vectors must be exposed in some shape or form for the various existing VR reprojection techniques to be applied? I'm assuming DLSS is just trying to be a more clever take on reprojection techniques. From what I've seen so far of DLSS 4, it does look rather good.
 
Plus, of course, if one G7 nation does cut itself out of the market with tariffs, it de-constrains supply for the other six, does it not?
Depends on how chaotic the fallout gets. :p

Whatever happens, I wonder if I may not find myself still with my trusty 1080Ti for a while longer...

...might I humbly submit the challenge that motion vectors must be exposed in some shape or form for the various existing VR reprojection techniques to be applied...
I am pretty sure those use frame analysis - essentially the same as when compressing video.
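As a rough idea of what that frame analysis involves, here's a toy block-matching motion estimator in HLSL, in the same spirit as a video encoder's motion search. All names are invented, and real reprojection uses far smarter (often hardware-accelerated) optical flow:

```hlsl
// Toy block-matching motion estimator: each thread owns one 8x8 pixel block,
// compares it against nearby positions in the previous frame, and keeps the
// best match. Dispatch ceil(width/64) x ceil(height/64) groups.
Texture2D<float>    CurrLuma  : register(t0); // current frame, luma only
Texture2D<float>    PrevLuma  : register(t1); // previous frame, luma only
RWTexture2D<float2> MotionOut : register(u0); // one motion vector per block

[numthreads(8, 8, 1)]
void EstimateMotionCS(uint3 threadId : SV_DispatchThreadID)
{
    const int2 blockOrigin = int2(threadId.xy) * 8;
    float  bestCost   = 1e30;
    float2 bestMotion = 0;

    // Search a small +/-4 pixel window around the block's position.
    for (int dy = -4; dy <= 4; ++dy)
    for (int dx = -4; dx <= 4; ++dx)
    {
        float cost = 0; // sum of absolute differences over the 8x8 block
        for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x)
        {
            int2 p = blockOrigin + int2(x, y);
            // Out-of-bounds loads return 0; a real implementation clamps.
            cost += abs(CurrLuma.Load(int3(p, 0)) -
                        PrevLuma.Load(int3(p + int2(dx, dy), 0)));
        }
        if (cost < bestCost) { bestCost = cost; bestMotion = float2(dx, dy); }
    }
    MotionOut[threadId.xy] = bestMotion; // pixels of motion since last frame
}
```

The key difference from engine-supplied vectors is that this only sees the final pixels, so it guesses; the engine knows exactly where every vertex was last frame.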
 
That would only cause a 4nm/3nm drought though, and the world will continue to function on 14nm+++ if it really has to, let's be honest. The last time we had a problem was not on the bleeding edge but rather on embedded silicon that's no smarter than a 6502, and that mistake won't happen again; governments have been quietly establishing sovereign infrastructure these last couple of years, and automotive now take JIT risks a lot more seriously than they did five years ago. You can thank fires, earthquakes, the Ever Given, and a pandemic happening all in a row, and to be fair to everyone involved, having those happen four years, one after another, would not really be on anyone's "likely" risk-scoring card. One has to wonder if a faction has been posting missions on a bulletin board somewhere.

Plus, of course, if one G7 nation does cut itself out of the market with tariffs, it de-constrains supply for the other six, does it not?
These fabs aren't only churning out the latest and greatest nodes; there is still plenty of demand for devices on older nodes.

If a major state takes itself out of the supply chain (it won't, unless customers refuse to pay tariffs, because they're the only ones directly hit), the vendor will raise prices and lower manufacturing capacity to protect their margins, which are calculated on volume forecasts many months ahead. The rule of thumb is that when demand drops, so does manufacturing capacity, right from the outset. Why would I scale up if I lost a market? Fabbing is horrendously expensive. I read somewhere years ago (needs to be fact-checked) that fabs are the single most expensive facilities to build and run.
 