Yeah, just that 60+ fps is totally unnecessary. Your eyes are able to see 30-60 fps. For me it seems fluid at 30 fps.
For me, fluidity keeps improving until at least 90 fps, and even for most people there are benefits, with increasingly diminishing returns, to be had well past that.
13 ms or ~77 FPS according to a 2014 study.
That 13 ms mostly comes from the fact that it was the shortest duration they could test with the limited hardware on hand...random, untrained participants were already doing well better than chance at that shortest duration (13 ms) in some of the tests. They could have gone well below 13 ms before finding the floor where no one was able to extract meaningful information. If they had trained the participants, or used simpler (but still meaningful) tests, the frame duration would have had to be vanishingly short before no one could extract useful information.
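(For reference, the 13 ms ↔ ~77 fps figure is just the reciprocal of the frame duration. A quick sketch of the conversion, using only numbers already mentioned in this thread:)

```python
# Convert between frame duration (ms) and frame rate (fps/Hz).
def ms_to_fps(frame_ms: float) -> float:
    return 1000.0 / frame_ms

def fps_to_ms(fps: float) -> float:
    return 1000.0 / fps

print(f"13 ms  -> {ms_to_fps(13):.0f} fps")  # ~77 fps, the figure quoted above
print(f"75 Hz  -> {fps_to_ms(75):.1f} ms")   # ~13.3 ms, the limit of the study's display
print(f"90 fps -> {fps_to_ms(90):.1f} ms")   # ~11.1 ms per frame
```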
Anyway, this discussion (and that citation) crop up fairly frequently...
A 3090 should be destroying this game at 4K to be honest. Do people still use 1080p? It's tiny 🙄 Yeah, I'm often using 1080p to get high fps, but always with maxed settings! In this case I'm using SS 2.0, but the game also runs badly with all the settings down. My 3090 has no chance to handle the...
Maybe you're still missing that maximum image processing speed is an entirely different thing from the minimum image frequency needed for the brain to interpret a sequence of images as fluid movement.
The floor for fluid movement isn't the same as the ceiling for useful motion information, nor is it anywhere near the point where latency ceases to be an issue. It's even further from where motion blur stops being perceptible, or where other undesirable artifacts cease.
True, those results from 2014 merely found an upper bound for this kind of optimization.
They didn't come anywhere near the upper bound of anything other than the 75Hz display they were using.
The thing is that our brain creates an illusion of motion when the images are just fast enough, around 24 fps.
The illusion of motion (where we stop being able to count individual frames) is easily the least relevant of any criteria related to frame rate in interactive media.
I'm playing a game where my performance matters. I'm not looking to be fooled into thinking I'm seeing smooth motion at the cost of a loss of detail information. I'm looking for perceptibly perfect smoothness, while maintaining as crystal clear an image as possible, as a baseline. The more motion information I can extract beyond that, the better.
Neither 24, 30, 60, nor even 77 fps is ideal for me. For me, 30 fps is not even close to smooth in the absence of blur (and I strive mightily to eliminate blur). 60-70 fps is perfectly tolerable, frame interval consistency permitting, when it comes to delivering what I feel is smooth motion, but I can still benefit from far more.
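On "frame interval consistency": average fps can hide stutter, which is why frame time percentiles ("1% lows") get quoted alongside averages. A rough sketch of that kind of calculation, with made-up frame times purely for illustration:

```python
import statistics

# Hypothetical frame times in milliseconds, captured over a short window (illustrative only).
frame_times_ms = [16.7, 16.5, 16.9, 33.4, 16.6, 16.8, 17.0, 16.4, 41.2, 16.7]

avg_fps = 1000.0 / statistics.mean(frame_times_ms)

# "1% low" style metric: the fps implied by the worst ~1% of frame times in the sample.
worst = sorted(frame_times_ms, reverse=True)[: max(1, len(frame_times_ms) // 100)]
one_percent_low_fps = 1000.0 / statistics.mean(worst)

print(f"average fps: {avg_fps:.1f}")
print(f"1% low fps (approx): {one_percent_low_fps:.1f}")
```

Two occasional 30-40 ms frames barely move the average, but they are exactly what you feel as judder.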
It also affects your input.
Edit: That's a daft comment, I'll go and play the game for a while.
To make it less daft: latency is a real problem, and it's no fun shooting at people if you can't aim properly.
Yes.
There is a reason phones tend to have high refresh rate displays and even higher input sampling rates...almost everyone and their grandmother can detect touch-and-drag latency problems (which are even more apparent than, but not far removed from, mouselook and aiming latency) at very low levels.
Source: https://www.youtube.com/watch?v=vOvQCPLkPt4
The touchscreens we're familiar with on our devices have around 100ms latency from touch to reaction. At 1ms, things feel way more natural - check out this video to see how natural.
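As a rough illustration of why both the display refresh rate and the input sampling rate feed into that latency figure, here's a back-of-the-envelope sketch. The stage numbers (20 ms of processing, the example refresh and sampling rates) are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope input-to-photon latency estimate (all values illustrative).
def average_latency_ms(input_hz: float, display_hz: float, processing_ms: float) -> float:
    input_sampling = 0.5 * 1000.0 / input_hz     # on average, half an input sampling interval
    display_scanout = 0.5 * 1000.0 / display_hz  # on average, half a refresh interval
    return input_sampling + processing_ms + display_scanout

# Assumed examples: a 60 Hz panel with 120 Hz touch sampling vs a 120 Hz panel with 240 Hz sampling.
print(f"60 Hz display, 120 Hz input:  ~{average_latency_ms(120, 60, 20):.0f} ms")
print(f"120 Hz display, 240 Hz input: ~{average_latency_ms(240, 120, 20):.0f} ms")
```

Faster sampling and faster refresh each shave a few milliseconds off, which is part of how phone makers chase that "feels natural" threshold.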
Back to the point: performance isn't consistent in game. GPU usage differs:
E.g. when the GPU is at 98-99% usage in stations, I managed to get nearly 60 fps at 4K.
The GPU is only 65% utilised at settlements, and FPS is about 45.
Weirdly, the CPU is no higher than 16% used.
Why is ED not using all of the GPU's potential all the time?
Because you are CPU (or memory) bound virtually any time GPU utilization falls below ~99%.
Looking at total aggregate CPU utilization with a 1000-2000 ms polling interval says almost nothing about how the CPU is being utilized. You need per-core usage and much faster polling to directly detect CPU limitations. Even then, memory bottlenecks can still cause dips in CPU utilization while the GPU is still waiting on the CPU.
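If you want to check that yourself, something along these lines (using Python's psutil to poll per-core utilization every 100 ms) will expose a single pegged core that aggregate numbers hide. It's a rough sketch, not a proper profiler:

```python
import psutil  # third-party: pip install psutil

# Poll per-core CPU utilization at a fast interval. A single core sitting near 100%
# points to a CPU (main/render thread) bottleneck even when total utilization looks low.
for _ in range(50):
    per_core = psutil.cpu_percent(interval=0.1, percpu=True)  # 100 ms polling
    busiest = max(per_core)
    total = sum(per_core) / len(per_core)
    print(f"total: {total:5.1f}%  busiest core: {busiest:5.1f}%")
```

On an 8-core/16-thread CPU, one thread stuck at 100% shows up as only ~6% total utilization, which is why the 16% figure above doesn't rule out a CPU limit.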