Here's a simple test that will hopefully illustrate some of my points:
https://www.humanbenchmark.com/tests/reactiontime/
Do your five clicks, preferably without trying to game the system by preempting the color change, then look at the difference between your fastest and slowest time. That's your variance. That variance will also remain similar irrespective of total latency...I get about 30ms between my slowest and fastest clicks, even though this is a 300-330ms range on my laptop and a 150-180ms range on my desktop. Chances are that even 2ms (which was an extreme low-ball figure I was using as an exaggerated example) would still account for a small, but significant, portion of that variance.
Now look at the graph and see how people tend to cluster around a small range of latencies. You'd see the same thing if you looked at total system + network latency in a game. Variance would be a bit higher, but probably not enough to make even a 2ms reduction less than a full percent of your variance.
You'll never consciously perceive a 2ms change in latency, probably not even a 20ms change, but such a change to your latency floor could easily alter the sequence of some relevant in-game events.
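To make that concrete, here's a minimal sketch (Python, with made-up sample values roughly matching my laptop numbers above, not actual measurements) of what I mean by treating the fastest-to-slowest spread as your variance and weighing 2ms against it:

```python
# Illustrative only: five hypothetical reaction-time clicks in milliseconds.
samples_ms = [305, 312, 318, 327, 335]

spread = max(samples_ms) - min(samples_ms)  # "variance" in the loose sense used above
reduction = 2                               # the 2ms latency reduction in question

print(f"fastest-to-slowest spread: {spread} ms")
print(f"2 ms as a share of that spread: {reduction / spread:.1%}")
# With a ~30 ms spread, 2 ms works out to roughly 7% of it -- small, but not nothing.
```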
I have heard this kind of claptrap preached so many times it is unbelievable.
When you consider average human response times and various other factors, what you are talking about is insignificant, especially in an internet networked context. It is the kind of misleading spiel that the snake-oil high-frequency display preachers like to spout, which really has no solid grounding in actual reality (at least where WAN/Internet type gaming is concerned) - a nice spiel, but that is all it is.
I doubt you've ever read a single one of my posts that you've ever replied to, given the inane assumptions you habitually make about them.
For the record, I moved from ~170Hz CRTs to 60Hz LCDs about a decade ago and am reasonably content with my slow 60Hz VA panel. I'm not trying to sell displays, not claiming I can perceive small changes in latency, not saying latency is the be-all and end-all of performance, and not suggesting people sacrifice what may be more relevant to them to reduce latency they don't notice. I'm just pointing out the easily demonstrable, self-evident fact that a sequence of temporally close real-time events can be altered by small changes in delay, and that possible variance is far more relevant than total latency.
I'm also not limiting my points to networked multi-player gaming scenarios, but network jitter is generally a small portion of total network latency anyway.
Latency only matters to a point; beyond that point any variance is irrelevant.
As I've repeatedly stated, the odds of any given portion of total latency, or even latency variance, being the determining factor will go down proportionally as total latency increases, but they're never going to be zero.
It is worth keeping in mind that there is likely to be a minimum of 16ms wire latency (more likely 30+ms in terms of mean latency - and almost certainly with more than a couple of ms of span between minimum and maximum) over the internet between clients. In a local wired case, network latency is less of a concern (<1ms in all likelihood), and so in the LAN-party case a couple of ms may make some difference, but arguably not a significant one in the grand scheme of things.
Also keep in mind that average human response time is not that fast even under optimal circumstances. There are so many variables in play, each with a high degree of variance, that arguing over a 2ms difference in display latency is petty to the point of being pointless.
Not something I ever ignored or overlooked.
Average human response time is in the ballpark of 200ms, and even a perfectly tuned system running a low-latency game engine at 120fps is typically going to add 20ms or more on top of that, before any network latency even enters the picture. I'd be positively astounded if total latency in a typical PvP encounter in ED was less than 500ms on average.
2ms could still be the deciding factor. Indeed, once we trim the immutable latencies that everyone is equally subject to, we are left with maybe a hundred ms or so of wiggle room, which would imply that 2ms would be the deciding factor in the ordering of events about 1-2% of the time. Whether that is significant or not in your opinion doesn't change the fact that it's an actual difference.
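If anyone wants to sanity-check that 1-2% figure, here's a quick Monte Carlo sketch (Python; the ~100ms of uniformly distributed wiggle room per player is an assumption for illustration, not a measurement) of how often a flat 2ms edge flips which of two otherwise evenly matched events lands first:

```python
import random

TRIALS = 1_000_000
WIGGLE_MS = 100.0  # assumed per-player variable latency range after trimming the fixed parts
EDGE_MS = 2.0      # the latency reduction in question

flips = 0
for _ in range(TRIALS):
    a = random.uniform(0.0, WIGGLE_MS)  # player A's variable latency this exchange
    b = random.uniform(0.0, WIGGLE_MS)  # player B's variable latency this exchange
    # The 2ms edge only changes the outcome when A would otherwise have lost by less than 2ms.
    if b < a <= b + EDGE_MS:
        flips += 1

print(f"ordering flipped in {flips / TRIALS:.2%} of trials")
# Under these assumptions this comes out just under 2%, in line with the 1-2% estimate above.
```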
Being the deciding factor some of the time isn't necessarily the same as being generally significant, but looking at total latency is grossly misleading as most of that latency is a static constant that we have little or no control over.
Personally, I trim latency wherever possible, unless the downsides of doing so outweigh the upsides, and this is almost entirely subjective. I knocked off ~16ms (I'm usually around 120fps in a fight, making each frame of delay I can trim worth about 8ms) by reducing the render-ahead queue from three to one, with essentially zero negative effects (a percent or two loss in frame rate). However, I decided against spending 600 dollars on a new monitor to save another ~20ms (my monitor has ~28ms of input latency at the center of the display, from the sum of processing and pixel response time) because I'm very content with the image and even the motion quality of my display (even though the latter is both objectively and subjectively worse than many), and even a ~20ms reduction wouldn't improve my performance enough in what I do for it to fit my definition of significant.
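For anyone wondering where the ~16ms comes from, here's the trivial back-of-the-envelope version (assuming each queued frame adds roughly one full frame time of latency, which is a simplification):

```python
# Rough arithmetic behind the render-ahead queue figures above.
fps = 120
frame_time_ms = 1000 / fps  # ~8.3 ms per frame at a steady 120 fps

queue_before, queue_after = 3, 1
savings_ms = (queue_before - queue_after) * frame_time_ms

print(f"frame time: {frame_time_ms:.1f} ms")
print(f"queue reduced from {queue_before} to {queue_after}: ~{savings_ms:.0f} ms less latency")
# ~17 ms saved at a steady 120 fps, i.e. the "~16ms" quoted above.
```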
Anyway, I'm sticking you back on my ignore list, because every time I read your responses to my posts I have to go back and ponder how they are related, only to invariably come to the conclusion that it's an issue with your interpretation or presumptions and not my post.