The 1080 Ti might be overkill for playing ED sans VR, but presumably anybody willing to pony up the bucks for one has other, more graphically intense games in their library that would benefit from the extra horsepower. I know I sure do.
ED is as graphically demanding as you care to make it, and the Ultra preset isn't anywhere near maxed out.
I have a 60Hz 1440p display and a 1080 Ti that typically runs at ~5% higher core and ~10% higher memory clocks than peak reference boost clocks. I can still see sub-60 fps at times with increased texture, environment, and shadow resolutions, plus additional shadow frustums and some supersampling. This is with DOF disabled (spend 25% of your frame rate to see less clearly!) and some manual (and largely visually imperceptible) tuning down of the Ultra particle settings.
An i7 3700 isn't PCI-E 3.0, I don't think; that will be your main issue with running a 1080 or higher, as it can saturate the bandwidth of a PCI-E 2.0 slot.
Not even a 1080 Ti is dramatically bottlenecked by PCI-E bandwidth in a single-card scenario. Putting mine in my eight-year-old LGA-1366 system results in maybe a ~10% performance loss vs. my more recent systems, and that's mostly down to CPU and memory performance, not PCI-E 2.0.
By and large, unless you are VRAM constrained, PCI-E utilization is very low when not using SLI/CFX. Multi-GPU setups need more because they need to maintain coherency of VRAM contents, and all the additional cards need to send output frames to the first card for compositing. Even in these cases, it takes a pretty high-end setup at fairly extreme settings to make more PCI-E lanes or a new PCI-E revision a worthwhile upgrade.
This gets widely tested every few years, and while there are some exceptions, it's generally very hard to tell the difference between PCI-E 2.0 16x, PCI-E 3.0 8x, and PCI-E 3.0 16x, with any single GPU, other things being similar.
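The near-equivalence of PCI-E 2.0 x16 and 3.0 x8 falls straight out of the link-rate math. A quick sketch (my own numbers from the PCI-E specs, not from the linked tests): gens 1.1/2.0 use 8b/10b encoding, gen 3.0 uses the more efficient 128b/130b.

```python
# Usable one-way PCI-E bandwidth per configuration.
# Each lane carries 1 bit per transfer, so GB/s = GT/s * encoding_efficiency * lanes / 8.
GENS = {
    "1.1": (2.5, 8 / 10),      # (GT/s per lane, encoding efficiency)
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def bandwidth_gb_s(gen: str, lanes: int) -> float:
    """Theoretical one-way bandwidth in GB/s for a generation and lane count."""
    gts, eff = GENS[gen]
    return gts * eff * lanes / 8

for gen, lanes in [("2.0", 16), ("3.0", 8), ("3.0", 16)]:
    print(f"PCI-E {gen} x{lanes}: ~{bandwidth_gb_s(gen, lanes):.1f} GB/s")
```

2.0 x16 and 3.0 x8 both land at roughly 8 GB/s, which is why benchmarks struggle to separate them; 3.0 x16 doubles that, but a single GPU rarely uses it.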
Here's TechPowerUp's look at PCI-E scaling on a 1080 (vanilla) from last year:
https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080_PCI_Express_Scaling/
Even going back to PCI-E Gen 1.1 gave ~95% of the performance, on average. Tom's Hardware, GamersNexus, and quite a few independent testers have similar articles with broadly similar conclusions.
Here are Puget Systems' dual Titan X (Pascal, the original GP102 part, not the newer, slightly faster Titan Xp) tests:
https://www.pugetsystems.com/labs/articles/Titan-X-Performance-PCI-E-3-0-x8-vs-x16-851/
They had to run triple 4K surround (11520×2160) to find a major difference between x16/x16, x16/x8, and x8/x8 in SLI.