Networking settings and these wonderful statistics ...

I generally don't have problems playing Elite, but I occasionally see issues. When I've investigated with tracert, they have mostly turned out to be relatively local problems.

Try playing late at night (like 3 am); does it seem any better? If so, there's likely to be a bottleneck somewhere. Part of the trouble might be the amount of info ED needs to transmit and receive; I suspect it's more than what's required for an FPS.

It's possible the problem is FD's netcode, but it's also possible that it's not.

Without detailed statistics, it's hard to be certain. Just because other games run OK doesn't mean this is a problem FD can actually fix; it might be due to something outside their control.
 
Funny how I never have any problems like you are describing, if the netcode is as bad as you are stating.

Please, don't. Peering over large distances has been driving networking people crazy for as long as people have been building things on top of it. Saying "it's never happened to me, so clearly it's not the netcode" doesn't change the fact that this is highly complicated and can be both netcode and environment. In fact, it almost always is both.

P2P is actually better for co-op style play purely because it keeps the traffic over pretty much the shortest route possible (versus ping-ponging through servers). Rubber-banding comes from positional errors that build up when clients can't keep up with the rate of change; that's typically down to packet loss, though there can be other causes.

I have a very fast, low-latency fibre link (NBN in Australia) which hits the local Amazon servers in short order, yet I still see massive lag at times entering instances because of the P2P traffic. That's down to the huge amount of handshaking and connection setup going on: Frontier tend to trigger most of it only as we hit instance 'cutover' (the disengage animation), rather than cycling connections in the background during SC travel to smooth out some of that 'bursty' traffic.
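To put rough numbers on that handshake burst, here's a back-of-the-envelope sketch. It assumes a full mesh (every client in the instance connecting to every other client), which is how I understand the P2P side but isn't something Frontier have spelled out:

```python
# Rough illustration of why joining a busy P2P instance is "bursty".
# Assumption: a full mesh where every client connects to every other
# client; the real topology/handshake details are Frontier's, not shown here.

def mesh_links(n: int) -> int:
    """Total peer links carried by a full mesh of n clients."""
    return n * (n - 1) // 2

def join_burst(n_existing: int) -> int:
    """Connections a newly arriving client must stand up at cutover."""
    return n_existing

for commanders in (2, 4, 8, 16, 32):
    print(f"{commanders:>2} CMDRs: {mesh_links(commanders):>3} total links, "
          f"joiner handshakes with {join_burst(commanders - 1)} peers at cutover")
```

The per-joiner cost grows linearly with instance size, but the instance-wide chatter grows quadratically, which is part of why a near-full instance feels so much worse than a wing of two.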

Classic example is the combination of a terrestrial base (e.g. engineers) and a fairly full complement of commanders occupying pads. Please hold, your peering is important to us.. you are third in the queue.. we are experiencing a higher volume of connections than normal.. you are second in the queue..

Once you are in an instance it's typically fine, but rubber-banding can and will still happen if the rate of change is simply greater than the clients can keep up with. It's increasingly noticeable as connection counts rise, and that's one of the fundamental flaws of P2P: it relies on border devices, such as our home routers, having enough horsepower to cope with (sometimes) very rapid connection cycling.
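A minimal sketch of how packet loss turns into rubber-banding, using simple dead reckoning. This is purely illustrative (made-up tick rate, loss rate and acceleration), not Frontier's netcode: the receiving client keeps extrapolating from the last update it saw, so every consecutive lost update lets the drawn position drift further from the truth, and the correction when an update finally lands is the visible snap.

```python
# Toy dead-reckoning model: a remote ship is accelerating while some of
# its position updates are dropped. All numbers are made up purely to
# illustrate the mechanism; this is not Frontier's actual netcode.

import random

random.seed(1)

tick = 1 / 15            # assumed update interval, seconds
loss = 0.30              # assumed packet loss rate
accel = 40.0             # m/s^2, e.g. a boosting ship
true_pos = shown_pos = 0.0
true_vel = shown_vel = 100.0
dropped = 0

for step in range(1, 46):
    true_vel += accel * tick
    true_pos += true_vel * tick
    shown_pos += shown_vel * tick        # our client keeps extrapolating
    if random.random() < loss:
        dropped += 1                     # update lost; drift keeps growing
    else:
        snap = true_pos - shown_pos      # correction applied when an update lands
        if dropped:
            print(f"t={step * tick:4.2f}s  {dropped} update(s) lost, "
                  f"snap of {snap:5.2f} m")
        shown_pos, shown_vel = true_pos, true_vel
        dropped = 0
```

The snap scales with both the number of consecutive lost updates and how fast the state is changing, which is exactly the "rate of change greater than the clients can keep up with" case above.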

Sometimes it can simply be a very crap CPU/implementation in the consumer router the player is using that is choking on P2P traffic; they are typically designed to be cheap rather than actually good at their job. I used to have major issues with Elite: swapping the router out made a huge difference, as did ignoring the UPnP implementation and punching a manual port-forward, and getting fibre was also a shot in the arm. ;)
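If you do ditch UPnP for a manual port-forward, it's worth verifying the forward actually works before blaming the game. A minimal sketch (the port number below is only an example; use whatever you've configured in Elite's network options and on the router): run a small UDP echo listener on the forwarded port on the gaming PC, then send it a packet from outside your network.

```python
# Minimal UDP echo listener to test a manual port-forward.
# Run this on the gaming PC, then send a UDP packet to your public IP
# from outside the LAN (e.g. a phone off wifi, or a VPS). If the reply
# comes back, the router's forward and NAT are doing their job.
# 5100 is just an example port, not necessarily what Elite uses.

import socket

PORT = 5100

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))
print(f"listening on UDP {PORT}, Ctrl+C to stop")

while True:
    data, addr = sock.recvfrom(2048)
    print(f"probe from {addr[0]}:{addr[1]}: {data!r}")
    sock.sendto(b"echo: " + data, addr)
```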

I've seen issues in GTA, but even it seems to cope with multiplayer a little better, and that's using a P2P + server mix same as Elite, and it's Rockstar, who aren't exactly known for highly stable online experiences. It's complicated to get right. Frontier have been focusing on it of late and that's already had a huge impact; I'm hoping for more improvements. I'd like to see them investigate whether they can start processing more P2P connections during SC to reduce the up-front burst hitting a (busy) instance, for example, but that may not be possible.

As an aside: mmm, fibre, it's good for you; can highly recommend.

tl;dr - many factors affect Elite and P2P traffic; latency has an effect, but packet loss and poor-quality consumer devices are often major triggers for issues. Frontier also has more work to do.
 
[Speedtest screenshot: http://fs5.directupload.net/images/170703/7ykpbzht.png]

That should be enough for Elite. WAY MORE than enough. It's not like I am downloading Battlefield 1 every time I am connecting to a player.

That depends on how close you are to the test server and whether it's representative of connecting to a peer mediated by the ED servers, which, last time I did a traceroute, meant connecting to servers in Ireland for me.

I have a fiber-optic 150/150 connection which gets a 0 ms ping and 150/170 testing to a good local server, but if I speedtest to Ireland I get 150/68 with a 99 ms ping time.

Those tests probably aren't doing too many hops. A few months ago I did a traceroute to the AWS server hosting ED and there were several hops to the server with significant delays. I'm not familiar enough with the P2P architecture mediated by the server to know what the impact of the server communication actually is, but my guess is that the weakest link is what sets the performance baseline.

Interesting stuff, but without examining the server logs and client logs of all parties in an instance, it's a pretty big puzzle that's unlikely to be sorted out definitively.

I found what I could on my end by opening Task Manager/Performance Monitor and looking at the connections associated with the ED processes, then pulled the IP address of the server I was connecting to and ran the trace.
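The same digging can be scripted rather than eyeballed in Task Manager. A sketch using psutil (the process-name fragment is an assumption; check what the ED executable is actually called on your machine, e.g. EliteDangerous64.exe): it lists the remote endpoints of the game's processes, which gives you the server and peer addresses to trace.

```python
# List remote endpoints for Elite Dangerous processes.
# Requires: pip install psutil
# "elitedangerous" as a name fragment is an assumption; adjust it to the
# executable name shown in Task Manager on your system.

import psutil

for proc in psutil.process_iter(["name"]):
    name = proc.info["name"] or ""
    if "elitedangerous" not in name.lower():
        continue
    print(f"{name} (pid {proc.pid})")
    try:
        for conn in proc.connections(kind="inet"):
            if conn.raddr:
                print(f"  {conn.raddr.ip}:{conn.raddr.port} [{conn.status}]")
    except psutil.AccessDenied:
        print("  run elevated to see this process's connections")
```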

Fun way to waste time: the AWS server didn't reply to a ping, but I could see the hops involved in routing the request and the delays at each hop.
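The trace itself can be kicked off the same way once you have an address. A sketch that calls the OS tracert/traceroute (the IP below is a documentation placeholder, not a real ED server): the per-hop times in the output are where the significant delays show up, even when the final host ignores pings.

```python
# Run the platform's traceroute against a server address pulled from the
# ED process connections. The address below is only a placeholder.

import platform
import subprocess

TARGET = "203.0.113.10"  # placeholder (TEST-NET-3); substitute the real server IP

cmd = ["tracert", "-d", TARGET] if platform.system() == "Windows" \
      else ["traceroute", "-n", TARGET]

# Each line of output is one hop with its round-trip times; a hop that
# times out (like an AWS host dropping ICMP) shows up as '*'.
subprocess.run(cmd, check=False)
```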
 

You can also use the same trick (checking the connections on the ED processes) to see the IP addresses of other ED players in your instance.
 
In fairness, FD's servers/networking are awful. If I play ED for a day I will drop to the menu at least 8 times. To put that into perspective, I also play The Division and Killing Floor, and I cannot remember the last time I ever got disconnected from a server to the menu (meaning it isn't a poor connection on my end).

Really needs bringing into this century.

Can't count the number of times I've nearly run out of fuel because I've disconnected several times in a row while jumping to the next system, the ship swallowing the fuel only to end up back in the system I was already in.

None of these things have ever happened to me. If the netcode or servers were THAT bad this sort of thing would be a huge problem.
 
None of these things have ever happened to me. If the netcode or servers were THAT bad this sort of thing would be a huge problem.

It is for me, unfortunately. It's easy to blame someone's connection, but when ED is literally the only game that disconnects regularly and nothing else disconnects EVER, it's fairly obvious where the issue is.
 
I get horrible rubberbanding in some systems, even with this connection:

[Speedtest screenshot: 6428258288.png]


If there is a single CMDR in the instance with routing problems, a slow connection, packet loss etc., then everyone in that instance suffers.
 