running out of ammo several times
I use the mining laser as much as possible; it's actually proving to be pretty potent at lower levels. Ammunition is apparently weightless, so there are no downsides to saving it for when you think you need it, or just want to go on a killing spree.
Also, the game has that silly mechanic where each shot from a semi-auto weapon does three times the damage of one from a full-auto weapon...so I'm using all semi-auto weapons.
Okay, following the recommendations in the above video, I was able to get a rock-solid 30 fps at 90% internal res.
If a certain setting stands out in terms of performance impact, you can edit the preset ini files to tune things further.
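A quick way to see which settings are actually worth touching is to diff two of the shipped preset ini files and look at what changes between quality tiers. Here's a minimal sketch; the install path and preset file names (`High.ini`, `Ultra.ini`) are assumptions, so point it at wherever your copy keeps its presets:

```python
# Hypothetical sketch: diff two preset ini files to see which settings
# actually change between quality presets. Path and file names are
# assumptions -- adjust to your own install.
import configparser
from pathlib import Path

PRESET_DIR = Path(r"C:\Games\Starfield")  # assumed install location

def load_preset(name: str) -> dict:
    """Flatten an ini preset into {(section, key): value}."""
    cp = configparser.ConfigParser(strict=False)  # tolerate duplicate keys
    cp.read(PRESET_DIR / name)
    return {(s, k): v for s in cp.sections() for k, v in cp.items(s)}

def diff_presets(a: str, b: str) -> None:
    """Print every setting whose value differs between two presets."""
    pa, pb = load_preset(a), load_preset(b)
    for section, setting in sorted(pa.keys() & pb.keys()):
        if pa[(section, setting)] != pb[(section, setting)]:
            print(f"[{section}] {setting}: "
                  f"{pa[(section, setting)]} -> {pb[(section, setting)]}")

if __name__ == "__main__":
    diff_presets("High.ini", "Ultra.ini")  # preset names are assumptions
```

Anything that doesn't show up in the diff is the same across presets and probably not where your frame time is going.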
What I find interesting is the 4090 running neck and neck with the 7900 XTX at all resolutions.
While that's understandable at 1080p, I find it a bit baffling at 1440p and 4K.
I remember you saying that SF is very GPU limited.
It's certainly GPU limited on my systems (5800X + 6800 XT and 5800X3D + RTX 4090, both aggressively tuned) in the areas I've been so far, at the settings I've chosen to use (4K ultra, FSR w/65% render scale on the 6800 XT, DLSS w/80% on the RTX 4090), but that may not be the case in large cities, like the one used for GN's tests. I do come quite close to being CPU limited in certain settlement-type areas, and I suspect my 5800X3D will be at its limit in Atlantis if I don't shift the bottleneck back to the GPU by running a higher render scale.
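For reference, render scale applies per axis, so the fragment workload shrinks with its square; a quick sketch of the arithmetic (actual upscaler resolutions may round slightly differently):

```python
# What those render scales mean in actual pixels at a 4K output.
# Scale is per axis, so fragment load scales with its square.
def internal_res(scale: float, base=(3840, 2160)):
    w, h = (round(d * scale) for d in base)
    return w, h, (scale ** 2) * 100  # percent of native pixel count

for scale in (0.65, 0.80, 1.00):
    w, h, pct = internal_res(scale)
    print(f"{scale:.0%} of 4K -> {w}x{h} ({pct:.0f}% of the native pixel load)")
```

At 65% the GPU is only shading about 42% as many pixels as native 4K, which is why dropping the render scale shifts the bottleneck toward the CPU and raising it shifts it back.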
Regardless, the 7900 XTX is not slow. It has more enabled ROPs than the 4090 and typically runs at similar clock speeds. There is clearly more to performance, even effective fill rate, than pure clock speed vs. functional units, but when comparing GPUs of similar eras, where all the tricks and optimizations roughly balance out, the theoretical figures explain a lot. AMD's D3D12 drivers are also lower overhead, which is why it's not uncommon to see AMD pull ahead at lower resolutions or in otherwise CPU limited scenarios. It still loses, often by a decent margin, to the RTX 4090 in most titles, but there are numerous exceptions, especially among AMD-sponsored titles (where devs are surely encouraged to do, or were already doing, things that show AMD's contemporary hardware in a positive light), and Starfield is one of them.
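To put rough numbers on the "more ROPs at similar clocks" point, here's a back-of-the-envelope theoretical pixel fill rate comparison. ROP counts and reference boost clocks are the commonly published figures; real cards, especially tuned ones, clock higher, but the ratio is what matters:

```python
# Theoretical peak pixel fill rate = ROPs * clock (one pixel per ROP per clock).
# Reference boost clocks used here; actual in-game clocks will differ.
cards = {
    "RTX 4090": {"rops": 176, "boost_ghz": 2.52},
    "7900 XTX": {"rops": 192, "boost_ghz": 2.50},
}

for name, c in cards.items():
    fill_gpix_s = c["rops"] * c["boost_ghz"]
    print(f"{name}: {c['rops']} ROPs @ {c['boost_ghz']} GHz "
          f"~= {fill_gpix_s:.0f} GPixel/s")
```

That works out to roughly 444 vs. 480 GPixel/s, an edge of a bit under 10% for the 7900 XTX on paper, which lines up with it keeping pace in a title that leans on fill rate.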
Starfield seems to be predominantly fillrate/fragment limited, and just looking at the power and temperature of my RTX 4090 while I'm playing reveals this. In GPU limited areas of shader-heavy titles (like starport concourses in Elite: Dangerous Odyssey), my card, at the clocks and voltages I'm using above, will pull in excess of 600 W and the GPU core/edge to hotspot temperature delta will push 20C. In Starfield, reported GPU utilization is also maxed out (every cycle is being used), but the card only pulls ~400 W and the core/edge to hotspot delta is only about 12C...much of its FP32 shader hardware is clearly underutilized.