So, frame-rates since the patch, how are yours?

I find a highly variable frame rate, 60-300 fps for example, far more tolerable than even a locked 45 fps, and well superior to a locked 60 fps. As long as the frame time interval isn't too jittery (losing 200 fps over the course of a tenth of a second is bad, but over a full second, not so much), it's not a stutter, and if I can get a higher frame rate much of the time, so much the better.
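To put rough numbers on that (purely illustrative; the fps figures below are made up to match the example, not measured from the game):

    # Purely illustrative: the same 300 -> 100 fps drop, compressed into 0.1 s
    # vs. spread over 1.0 s, in terms of worst frame-to-frame time change.
    def worst_frame_time_jump_ms(start_fps, end_fps, window_s):
        frame_times, t, fps = [], 0.0, start_fps
        while t < window_s:
            frame_times.append(1.0 / fps)
            t += 1.0 / fps
            # fps ramps linearly across the window
            fps = start_fps + (end_fps - start_fps) * min(t / window_s, 1.0)
        return max(abs(b - a) for a, b in zip(frame_times, frame_times[1:])) * 1000

    for window in (0.1, 1.0):
        print(f"300 -> 100 fps over {window} s: worst jump ~"
              f"{worst_frame_time_jump_ms(300, 100, window):.2f} ms per frame")

Same total drop, but roughly an order of magnitude larger per-frame jump when it happens within a tenth of a second, which is what actually reads as a stutter.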

Vsync and most frame rate limiters are often poor at stutter control. Yes, they can lock the actual display of frames to a fixed interval and eliminate tearing, but they cannot completely mask erratic present intervals, because they generally cannot change when the content of a frame is rendered, only delay when it's actually drawn to the display. It's for this reason I often found AFR multi-GPU setups problematic, especially on AMD, before everyone started paying attention to frame pacing...'just enable vsync' was no solution to two frames being rendered almost simultaneously, followed by a ~30ms delay to the next pair, because I was seeing ~1-2ms of motion (i.e. almost none) across two 16.7ms refreshes, then 30ms+ of motion change between the second and third refresh. It was useless; the extra frames were 'real', but they provided zero benefit, and games with this issue felt identical, or even worse, with two GPUs vs. just one, even if the latter reported half the frame rate.
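A crude way to picture it, if that helps (the present times here are invented to match the anecdote, not captured from real hardware, and I'm using present time as a stand-in for each frame's animation timestamp):

    REFRESH_MS = 16.7  # 60 Hz display

    def motion_shown_per_refresh(present_times_ms, refreshes=5):
        """How much animation time (ms) advances on each vsync'd refresh,
        given when frames actually became available."""
        last_shown, out, t = 0.0, [], 0.0
        for _ in range(refreshes):
            t += REFRESH_MS
            ready = [p for p in present_times_ms if p <= t]  # newest frame available
            if ready:
                out.append(ready[-1] - last_shown)
                last_shown = ready[-1]
        return [round(d, 1) for d in out]

    # Evenly paced presents, one per refresh:
    print(motion_shown_per_refresh([0, 16.7, 33.4, 50.1, 66.8, 83.5]))
    # AFR with poor pacing: frames arrive in tight pairs, then a ~32 ms gap:
    print(motion_shown_per_refresh([0, 1.5, 33.3, 34.8, 66.7, 68.2]))

Both cases deliver the same number of frames per second, but the second one animates in alternating ~1.5ms / ~32ms lurches, which is exactly the micro-stutter that vsync can't fix.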

Considering I get that on an i7-6850K and a GTX 1080 (at 1920x1080), I suspect that Odyssey may be auto-tuning its graphics (and we don't have any knobs exposed to adjust that tuning ourselves). Another reason I suspect this is that some of the videos and screenshots I've seen look far better than anything I've seen on my PC, even when I run it in Ultra (Ultra for out in the black, High for in the bubble, since Ultra tanks to 10 fps too often in the bubble).

I'm quite confident the game isn't auto-tuning anything and that the screenshots you mention are either the result of the high-res screen capture feature, areas you haven't experienced, people using custom settings (which can look considerably better than any preset, even without third-party shaders or the like), or some combination of these.

Before anybody jumps in with "but I need my 120fps"... you don't, really. I'm not going to claim you can't tell the difference between 60fps and 120fps (I do notice something in the difference between 30fps and 60fps; visual clarity, perhaps?).

Assuming input latency isn't an issue (and it often can be, even at relatively high frame rates, as many games will buffer several frames...a full three-frame D3D flip queue at 60fps is at least a 50ms delay, which would fall to 25ms at 120 fps and 3ms at a thousand), visual clarity is the main benefit of a higher frame rate (and of faster pixel response times). Some examples:


In the game frame rate simulation, I can read all the names of the units scrolling past on my 240Hz display at 240 fps and I get a similar experience in real games, assuming I disable things like motion blur and tune any temporal AA features to minimize motion artifacts.

Anyway, need is relative. I can muddle through most games at ~25 fps if I need to, and most games look more than sufficiently smooth to me once I get near 60fps...but I'll never be able to distinguish the trajectory of a projectile that doesn't leave some sort of tracer if it's only on screen for one frame, which is a distinct possibility at 60 fps, and my ability to make out details in high-motion scenes keeps scaling at least as far as any refresh rate I've ever used.
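For anyone who wants to check the flip queue arithmetic from a few paragraphs up, the rough model is just latency ≈ queued frames × frame time (real pipelines add input sampling, CPU, and display latency on top of this, so treat it as a floor):

    # Back-of-envelope only; assumes a full three-frame flip queue and nothing else.
    def flip_queue_latency_ms(fps, queued_frames=3):
        return queued_frames * 1000.0 / fps

    for fps in (60, 120, 1000):
        print(f"{fps:4d} fps: ~{flip_queue_latency_ms(fps):.0f} ms of queue latency")
    # -> ~50 ms, ~25 ms, ~3 ms, matching the figures above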

Interesting thread … there still seem to be massive differences between CMDRs' experiences, even with similar CPU/GPU/RAM specs … which leads me to wonder if I/O or RAM speed (rather than total amount), or some other "hidden" limitation, is kicking in?

It's not I/O, at least not on any system with an SSD made in the last dozen years, but I can't rule out sufficiently slow memory having a perceptible impact.

Still, when someone reports something radically off the median for the hardware involved, I think it's mostly a subjective/perception thing or, more rarely, something that's grossly misconfigured.
 
Any views on what the current min GPU/CPU is to run the game on maximum settings with 1080p?

Idk the minimum, but I'm using an i7-8700 (a 6-core/12-thread desktop CPU) + RTX 2080 laptop with maxed-out Ultra settings at native 1080p (no supersampling or subsampling), with SMAA enabled.

In Odyssey the uncapped fps is about 280 in open space, 90 to 160 in planetary rings (depending on the number of NPCs and some other enigmatic factors; it's pretty inconsistent), 70 to 140 in station instances (again, very inconsistent), about 160 in fleet carrier concourses (surprisingly), but only 50 to 90 in station concourses and planetary settlements. I haven't tried surface CZs yet (I'm not really interested in the foot content).

That's not counting wacky drops to 30-ish fps that can happen at any time while approaching settlements, though they sometimes go away entirely after a relog.

So I wouldn't say it's unplayable, but it's certainly not awesome either.
 
Any views on what the current min GPU/CPU is to run the game on maximum settings with 1080p?

Last year (Update 7, IIRC), I was able to get 30+ FPS at 1080p running a mix of high/ultra settings on a gaming laptop with an i7-9750H (6c/12t) and a GTX 1660 Ti 6GB (an average gaming laptop purchased in November 2019).

Now that we're at U13, performance has improved (but not dramatically).

However, as current hardware goes, I would not run it on anything less than an RTX 3060 (or the AMD equivalent), and at least an 11th-gen i5 or the AMD equivalent (Ryzen 5 5000 series?).
 
Just VR here; can't speak for pancake.
And I dare not mod. I've just stuck with SteamVR, Windows 11 WMR, and the Reverb G2's high quality settings. 1080p.
Dropped the Steam slider to 150% from 160%.
Now getting a solid 90fps on foot, in space, and in the buggy.
Drops to 70fps around assets.
Reprojection averages 11%.
 
Well, Ultra, I guess. I suppose anything not dropping below 60fps is decent enough (for me) 😛

Ground CZs at large settlements need a fast CPU and memory to convincingly hold 60fps...something 5800X3D or 12700K+ level, depending on how strict one is about that 60 fps minimum. Some people will claim to hold 60 fps+ with less, but they'll be very hard pressed to show it in those worst case scenarios. You really want the fastest lightly threaded performance you can manage, plus eight physical cores, if possible, and the fastest memory you can justify.

GPU is a bit more forgiving at 1080p, and GTX 1080 Ti, RTX 2070, RTX 3060 (though not the 6GiB variant), RX 5700, or RX 6700 level parts are all plenty for 1080p ultra.
 