Does Monitor Size Have an Effect on Ship Manoeuvrability?

Any sort of frame rate cap, including vsync, is generally bad for input latency, as it delays when completed frames are presented to the display.

In fast-paced games such as FPSs I always run a frame rate cap at the monitor's maximum refresh rate with G-Sync on, and I never get any noticeable input lag. I'm sensitive to such lag and notice it at fairly low levels. I haven't run without G-Sync and a frame rate cap... might be interesting to do so...

For a game like ED, the input lag would really have to be up there before you'd notice it, and then only in PvP combat.
 

Input latency doesn't need to be perceptible to matter; any and every objective reduction could potentially be the deciding factor in a time-sensitive contest. The sum of all possible reductions to latency will almost certainly be significant, even if each individual improvement is quite minor.

If I had a VRR panel I'd likely be using vsync with a frame rate cap slightly below the VRR limit, as this would eliminate tearing while adding almost no latency. However, I don't have a VRR display, and since I find the occasional stutter introduced by standard vsync more noticeable than the degree of tearing I see with it off, I'm not willing to add any latency for no real improvement to my subjective experience.
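
(If you're curious why a cap slightly below the refresh rate costs almost nothing, here's a quick Python sketch. The 144Hz panel and the 3fps margin are illustrative assumptions on my part, the margin being a commonly cited rule of thumb rather than anything from this thread.)

```python
# Rough arithmetic for capping slightly below a VRR panel's refresh rate.
# The 144Hz panel and the 3 fps margin are assumed values for illustration.
refresh_hz = 144
cap_fps = refresh_hz - 3   # commonly cited rule of thumb to stay inside the VRR window

refresh_interval_ms = 1000 / refresh_hz
capped_frame_time_ms = 1000 / cap_fps

print(f"Refresh interval:  {refresh_interval_ms:.2f} ms")
print(f"Capped frame time: {capped_frame_time_ms:.2f} ms")
print(f"Extra frame time added by the cap: {capped_frame_time_ms - refresh_interval_ms:.2f} ms")
```

That works out to roughly 0.15ms of extra frame time, which is what "almost no latency" means in practice here.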
 
Input lag is the technical term for it. Lock max fps to 60 and disable vsync in game; you should be fine provided your PC can kick out a solid 60fps. I use a TV on my sim rig as well. It's not ideal, but with that adjustment at least I can now shoot. Make sure the TV is in game mode as well, to give the fastest response.

^This works for me on a 50" 4K, though with a GTX 1080 Ti...
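
For anyone wondering what "lock max fps to 60 and skip vsync" actually does under the hood, here's a rough Python sketch of a software frame limiter; the function and timings are placeholders I've made up, and real limiters usually busy-wait for better precision than sleep() gives.

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS   # ~16.7 ms per frame

def update_and_render():
    # Placeholder for the game's actual simulation and draw work.
    time.sleep(0.004)             # pretend a frame takes ~4 ms to produce

for _ in range(300):              # run for ~5 seconds
    frame_start = time.perf_counter()
    update_and_render()

    # Sleep away whatever is left of the 16.7 ms budget. Unlike vsync,
    # nothing blocks on the display: the finished frame goes out as soon
    # as it is ready, and the cap just holds the rate at ~60 fps.
    remaining = FRAME_BUDGET - (time.perf_counter() - frame_start)
    if remaining > 0:
        time.sleep(remaining)
```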
 
Incorrect... If it doesn't affect the game experience or my in-game performance, it doesn't matter...

It most certainly can affect your performance without you being aware of it, and your final experience without you knowing it's the cause.

This is self-evident. If we have the same reaction speed, and we both attempt to launch decisive attacks with hitscan weapons at the same moment, but my setup is 2ms faster, I win. Doesn't matter that neither of us felt any latency. Doesn't matter that neither of us was aware that my setup gave me an advantage.
 
That is not necessarily true; there are various factors and mechanisms that can make any latency differences mostly irrelevant. Network latency, for example, is likely to be a far bigger concern than any notional display latency.
 
Yeah, not buying 2 milliseconds...

What's not to buy? My point is a demonstrable fact as long as time is time.

Any objective difference is an objective difference that could potentially be the deciding factor in a time-sensitive action. The odds of it being the deciding factor in any given situation go down as its share of total latency shrinks, but they will never reach zero. 2ms isn't even an insignificant amount of time when the total latency of a well-tuned system can be in the ballpark of ~30ms or less.

I could have said one picosecond and the main point would stand. Even delays below the granularity of the system timer could still cause an action to fall behind a tick and ultimately lead to very obvious effects.
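
To illustrate the tick point with some made-up numbers (the 60Hz tick rate and the timings are purely my own assumptions for the example):

```python
# Illustrative only: a tiny extra delay can push an input past a tick boundary.
TICK_HZ = 60                      # hypothetical fixed simulation tick rate
TICK_S = 1.0 / TICK_HZ

def tick_index(t_seconds):
    """Which simulation tick an input arriving at t_seconds is processed on."""
    return int(t_seconds // TICK_S)

arrival = 0.016660                # just before the end of tick 0 (~16.667 ms)
delay = 0.000010                  # 10 microseconds of extra latency

print(tick_index(arrival))          # 0 -> processed this tick
print(tick_index(arrival + delay))  # 1 -> slips to the next tick
```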

That is not necessarily true; there are various factors and mechanisms that can make any latency differences mostly irrelevant. Network latency, for example, is likely to be a far bigger concern than any notional display latency.

Yes, it is necessarily true. If one of two mutually exclusive events that would otherwise have happened simultaneously is delayed at all, then it does not occur. This could amount to a picosecond over a trillion years and it would still hold true.

"Mostly irrelevant" or irrelevant in most cases doesn't contradict anything I've said.

I could shave off twenty more ms just by changing my display, but it wouldn't be worth it to me because it wouldn't matter the overwhelming majority of the time and would almost never be the deciding factor. That doesn't change the irrefutable fact that even a single ms could be the deciding factor in my next fight, or the likelihood that I've already experienced negative consequences from latency that is well below my threshold of conscious perception (if it happens one time in a hundred, that's ~1k times over ~10k fights).

All other things being equal, any reduction in latency is good and the idea that you have to be able to feel the delay for it to ever matter is lunacy.

Not quite accurate - V-Sync holds up the render pipeline until the buffer it has written to is displayed. While that can seem to induce "additional" latency depending on various factors, it need not. Triple buffering paired with V-Sync can mean that the render pipeline effectively free-runs, with only the last complete frame being presented to the display for processing (effectively what nVidia refer to as Fast Sync).

Tearing can induce simulator sickness as well as generally looking ugly and potentially having an adverse effect on accurate targeting.

Simply holding a frame until the next refresh is an actual increase in latency, even if it's a small one. Without vsync you get the new frame, or at least a portion of it, immediately; you don't have to wait for the next refresh to start. If the new frame shows up halfway through the refresh of a 144Hz display, you get the bottom half of that frame ~3.5ms sooner than you otherwise would, even if there is no extra buffering being done.

D3D apps also do not get true triple buffering, where only the most recently completed frame would be presented and stale ones discarded; extra back buffers in D3D form a FIFO render-ahead queue, and a deeper queue perforce increases latency.
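
To put rough numbers on the queue-depth point, here's a deliberately simplified Python sketch; it assumes the present queue stays full and ignores render time itself, so treat it as an upper-bound illustration rather than a measurement.

```python
# Simplified model: frames are produced faster than the display consumes them,
# so the FIFO present queue stays full and each queued frame waits roughly one
# refresh interval before it is scanned out.
refresh_hz = 144
refresh_ms = 1000 / refresh_hz

for queue_depth in (1, 2, 3):
    wait_ms = queue_depth * refresh_ms
    print(f"render-ahead queue of {queue_depth} frame(s): ~{wait_ms:.1f} ms of queueing delay")
```

Under that model a queue of one behaves roughly like plain double-buffered vsync, and each extra frame in the queue adds about another refresh interval of delay.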
 
Yes, it is necessarily true.
I have heard this kind of claptrap preached so many times it is unbelievable.

When you consider average human response times and various other factors, what you are talking about is insignificant, especially in an internet-networked context. It is the kind of misleading spiel that the snake-oil high-frequency display preachers like to spout, which really has no solid grounding in actual reality (at least where WAN/internet gaming is concerned) - a nice spiel, but that is all.

Latency only matters to a point; beyond that point any variance is irrelevant. Even in cases where latency can be a concern, there are various techniques that can be used to mitigate it.

There are other far more significant reasons why higher frequency and lower latency displays are desirable such as mitigating simulator sickness and allowing for visually smoother fast motion in the virtual gaming/simulation environment.

It should be borne in mind, while you are arguing that 2ms is significant in the visual case, that there is likely to be a minimum of 16ms of wire latency (more likely 30+ms in terms of mean latency, and almost certainly with more than a couple of ms of span between minimum and maximum) over the internet between clients. In a local wired case, network latency is less of a concern (<1ms in all likelihood), so at a LAN party a couple of ms may make some difference, but arguably not a significant one in the grand scheme of things.

Also keep in mind that average human response time is not that fast even under optimal circumstances. There are so many variables in play, each with a high degree of variance, that arguing over a 2ms difference in the display is petty to the point of being pointless.
 
Here's a simple test that will hopefully illustrate some of my points: https://www.humanbenchmark.com/tests/reactiontime/

Do your five clicks, preferably without trying to game the system by preempting the color change, then look at the difference between your fastest and slowest time. That's your variance. That variance will also remain similar irrespective of total latency... I get about 30ms between my slowest and fastest clicks, even though this is a 300-330ms range on my laptop and a 150-180ms range on my desktop. Chances are that even 2ms (which was an extreme low-ball figure I was using as an exaggerated example) would still count for a small, but significant, portion of that variance.

Now look at the graph and see how people tend to cluster around a small range of latencies. You'd see the same thing if you looked at total system + network latency in a game. Variance would be a bit higher, but probably not enough to make even a 2ms reduction less than a full percent of your variance.

You'll never consciously perceive a 2ms change in latency, probably not even a 20ms change, but such a change to your latency floor could easily alter the sequence of some relevant in-game events.
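
If you want to put numbers on that, here's a small Python sketch using five invented click times in the range the test typically reports (the values are made up for illustration, not measured):

```python
# Hypothetical results from five clicks on the reaction-time test, in ms.
clicks_ms = [162, 171, 158, 180, 166]

spread_ms = max(clicks_ms) - min(clicks_ms)   # fastest-to-slowest window
delta_ms = 2                                  # the disputed 2 ms

print(f"Fastest: {min(clicks_ms)} ms, slowest: {max(clicks_ms)} ms")
print(f"Spread between fastest and slowest: {spread_ms} ms")
print(f"2 ms is ~{100 * delta_ms / spread_ms:.0f}% of that spread")
```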

I have heard this kind of claptrap preached so many times it is unbelievable.

When you consider average human response times and various other factors, what you are talking about is insignificant, especially in an internet-networked context. It is the kind of misleading spiel that the snake-oil high-frequency display preachers like to spout, which really has no solid grounding in actual reality (at least where WAN/internet gaming is concerned) - a nice spiel, but that is all.

I doubt you've ever read a single one of my posts that you've ever replied to, given the inane assumptions you habitually make about them.

For the record, I moved from ~170Hz CRTs to 60Hz LCDs about a decade ago and am reasonably content with my slow 60Hz VA panel. I'm not trying to sell displays, not claiming I can perceive small changes in latency, not saying latency is the be-all and end-all of performance, and not suggesting people sacrifice what may be more relevant to them to reduce latency they don't notice. I'm just pointing out the easily demonstrable fact and self-evident truth that a sequence of temporally close real-time events can be altered by small changes in delay, and that possible variance is far more relevant than total latency.

I'm also not limiting my points to networked multi-player gaming scenarios, but network jitter is also generally a small portion of total network latency.

Latency only matters to a point; beyond that point any variance is irrelevant.

As I've repeatedly stated, the odds of any given portion of total latency, or even latency variance, being the determining factor will go down proportionally, but they're never going to be zero.

there is likely to be a minimum of 16ms of wire latency (more likely 30+ms in terms of mean latency, and almost certainly with more than a couple of ms of span between minimum and maximum) over the internet between clients. In a local wired case, network latency is less of a concern (<1ms in all likelihood), so at a LAN party a couple of ms may make some difference, but arguably not a significant one in the grand scheme of things.

Also keep in mind that average human response time is not that fast even under optimal circumstances. There are so many variables in play, each with a high degree of variance, that arguing over a 2ms difference in the display is petty to the point of being pointless.

Not something I ever ignored or overlooked.

Average human response time is in the ballpark of 200ms, and even a perfectly tuned system running a low-latency game engine at 120fps is typically going to add 20ms or more on top of that, before any network latency is considered. I'd be positively astounded if total latency in a typical PvP encounter in ED was less than 500ms on average.

2ms could still be the deciding factor. Indeed, once we trim the immutable latencies that everyone is equally subject to, we are left with maybe a hundred ms of wiggle room, which would imply that 2ms would be the deciding factor in the ordering of events about 1-2% of the time. Whether that is significant or not in your opinion doesn't change the fact that it's an actual difference.

Being the deciding factor some of the time isn't necessarily the same as being generally significant, but looking at total latency is grossly misleading as most of that latency is a static constant that we have little or no control over.
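
The 1-2% figure is easy to sanity-check with a quick simulation; the uniform 100ms jitter window is my own simplifying assumption, not something measured from the game.

```python
import random

# Two players whose combined reaction + system timing varies uniformly over a
# 100 ms window; player A's setup is 2 ms faster. How often does that 2 ms
# change who lands the hit first? (Assumptions are mine, purely illustrative.)
random.seed(0)
WINDOW_MS = 100.0
ADVANTAGE_MS = 2.0
TRIALS = 1_000_000

flipped = 0
for _ in range(TRIALS):
    a = random.uniform(0, WINDOW_MS)
    b = random.uniform(0, WINDOW_MS)
    # A "flip" is a case where A only wins because of the 2 ms advantage.
    if not (a < b) and (a - ADVANTAGE_MS < b):
        flipped += 1

print(f"2 ms decided the ordering in ~{100 * flipped / TRIALS:.1f}% of trials")
```

With these assumptions the answer comes out at roughly 2% of trials, in line with the proportion argument above.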

Personally, I trim latency wherever possible, unless the downsides of doing so outweigh the upsides, and this is almost entirely subjective. I knocked off ~16ms (I'm usually around 120fps in a fight, making each frame of delay I can trim about 8ms) by reducing the render-ahead queue from three to one, with essentially zero negative effects (a percent or two loss in frame rate). However, I decided against spending 600 dollars on a new monitor to save another ~20ms (my monitor has ~28ms of input latency at the center of the display, from the sum of processing and pixel response time) because I'm very content with the image quality and even the motion quality of my display (even though the latter is both objectively and subjectively worse than many), and even a ~20ms reduction wouldn't improve my performance enough in what I do for it to fit my definition of significant.

Anyway, I'm sticking you back on my ignore list, because every time I read your responses to my posts I have to go back and ponder how they are related, only to invariably come to the conclusion that it's an issue with your interpretation or presumptions and not my post.
 
Your arrogance knows no bounds - I have read your posts, and your insistence that 2ms could be significant is largely smoke and mirrors. I never stated you were selling displays, but the line of reasoning is identical to that of those who try to push such displays. It is heavily flawed and couched in so many assumptions about how latency in any given area affects the game loop threads. That type of reasoning only really has an iota of substance or legitimacy in the single-threaded domain. Input control, network control, rendering, and entity control loops can all run independently of each other in different threads, and with the correct patterns latency factors become largely irrelevant.

As for latency (of the display type we are referring to - network latency is another matter altogether) affecting the order of events - not significantly enough to matter.
 