I find a highly variable 60-300 fps far more tolerable than even a locked 45 fps, and well superior to a locked 60 fps. As long as the frame time interval isn't too jittery (losing 200 fps over the course of a tenth of a second is bad, but over a full second, not so much), it's not a stutter, and if I can get a higher frame rate much of the time, so much the better.
Vsync and most frame rate limiters are often poor at stutter control. Yes, they can lock the actual display of frames to a fixed interval and eliminate tearing, but they cannot completely mask erratic present intervals, because they generally cannot change when the content of a frame is rendered, only delay when it's actually drawn to the display. This is why I often found AFR multi-GPU setups frustrating, especially on AMD, before everyone started paying attention to frame pacing...'just enable vsync' was no solution to two frames being drawn almost simultaneously, followed by a 30ms delay to the next pair, because I was seeing ~1-2ms of motion (i.e. almost none) across two 16.7ms refreshes, then 30ms+ of motion change between the second and third refresh. The extra frames were 'real', but they provided zero benefit; games with this issue felt identical, or even worse, with two GPUs vs. just one, even if the latter reported half the frame rate.
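A quick way to see why the average frame rate misleads in the AFR case is to look at the present intervals directly. A minimal sketch, with hypothetical timestamps for a badly-paced AFR setup (frames arriving in tight ~1ms pairs, then a ~31ms gap):

```python
# Hypothetical present timestamps (ms) from a poorly-paced AFR setup:
# frames arrive in pairs ~1 ms apart, followed by a ~31 ms gap.
afr = [0.0, 1.0, 32.3, 33.3, 64.6, 65.6, 96.9, 97.9]

def stats(times):
    """Average fps plus the min/max present-to-present interval."""
    intervals = [b - a for a, b in zip(times, times[1:])]
    avg_fps = 1000.0 * len(intervals) / (times[-1] - times[0])
    return avg_fps, min(intervals), max(intervals)

avg_fps, lo, hi = stats(afr)
print(f"avg: {avg_fps:.0f} fps, intervals: {lo:.1f}-{hi:.1f} ms")
# The counter reports ~72 fps, but motion only advances every ~32 ms,
# so it looks like ~31 fps plus redundant duplicate-ish frames.
```

Vsync can delay when each of those frames hits the display, but it can't move the ~31ms gaps in when their content was actually sampled, which is the whole problem.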
Considering I get that on an i7-6850K and a GTX 1080 (at 1920x1080), I suspect that Odyssey may be auto-tuning its graphics (and we don't have any knobs exposed to adjust that tuning ourselves). Another reason I suspect this is that some of the video and screenshots I've seen look far better than anything I've seen on my PC, even when I run on Ultra (Ultra out in the black, High in the bubble, since Ultra tanks to 10 fps too often there).
I'm quite confident the game isn't auto-tuning anything and that the screenshots you mention are either the result of the high-res screen capture feature, areas you haven't experienced, people using custom settings (which can look considerably better than any preset, even without third-party shaders or the like), or some combination of these.
Before anybody jumps in with "but I need my 120fps"... you don't, really. I'm not going to claim you can't tell the difference between 60fps and 120fps (I do notice something in the difference between 30fps and 60fps; visual clarity?).
Assuming input latency isn't an issue (and it often can be, even at relatively high frame rates as many games will buffer several frames...a full three-frame D3D flip queue at 60fps is at least a 50ms delay...which would fall to 25ms at 120 fps and 3ms at a thousand), visual clarity is the main benefit from a higher frame rate (and pixel response times). Some examples:
TestUFO: Framerates-versus (www.testufo.com)
TestUFO: Persistence (www.testufo.com)
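The flip-queue latency figures above are just queue depth divided by frame rate. A minimal sketch of that arithmetic:

```python
def queue_latency_ms(queue_depth: int, fps: float) -> float:
    """Worst-case buffering delay for a full flip queue:
    each queued frame waits roughly one frame interval."""
    return queue_depth * 1000.0 / fps

for fps in (60, 120, 1000):
    print(f"{fps:>4} fps, 3-frame queue: {queue_latency_ms(3, fps):.0f} ms")
# 60 fps -> 50 ms, 120 fps -> 25 ms, 1000 fps -> 3 ms
```

This is only the queue's contribution; input sampling, simulation, and display scanout add their own delays on top.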
In the game frame rate simulation, I can read all the names of the units scrolling past on my 240Hz display at 240 fps, and I get a similar experience in real games, assuming I disable things like motion blur and tune any temporal AA features to minimize motion artifacts.
Anyway, need is relative. I can muddle through most games at ~25 fps if I need to, and most games look more than sufficiently smooth to me once I get near 60 fps...but I'll never be able to distinguish the trajectory of a projectile that doesn't leave some sort of tracer if it's only on screen for one frame, which is a distinct possibility at 60 fps, and the ability to make out details in high-motion scenes scales at least as far as any refresh rate I've ever seen.
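The one-frame projectile point is simple arithmetic: a projectile that crosses the visible screen faster than one frame interval can be sampled by at most one frame (or none, depending on timing). A sketch with a hypothetical 15ms screen-crossing time:

```python
import math

def frames_sampling(duration_ms: float, fps: float) -> tuple[int, int]:
    """Min/max number of rendered frames that can sample an event of the
    given duration: floor(duration * fps / 1000) or one more, depending
    on where the event falls relative to the frame boundaries."""
    n = math.floor(duration_ms * fps / 1000.0)
    return n, n + 1

# Hypothetical fast projectile: crosses the screen in 15 ms.
for fps in (60, 144, 240):
    low, high = frames_sampling(15.0, fps)
    print(f"{fps:>3} fps: visible in {low}-{high} frames")
# 60 fps -> 0-1 frames, 144 fps -> 2-3 frames, 240 fps -> 3-4 frames
```

At 60 fps such a projectile appears in at most one frame (and possibly zero), so there's no trajectory to perceive; at 240 fps there are three or four samples to connect.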
Interesting thread … there still seem to be massive differences between CMDRs' experiences, even with similar CPU / GPU / RAM specs … which leads me to wonder if I/O or RAM speed (rather than total amount), or some other “hidden” limitation, is kicking in?
It's not I/O, at least not on any system with an SSD made in the last dozen years, but I can't rule out sufficiently slow memory having a perceptible impact.
Still, when someone reports results radically off the median for the hardware involved, I think it's mostly a subjective/perception thing, or, more rarely, something that's grossly misconfigured.