was there an optimisation pass at all?

I'm not familiar with that overlay. What are the two "mem" readings under the GPU utilization and temp?
The first value is the memory allocated on the GPU (4GB available, so it's almost full; if I enable anisotropic, it fills up immediately); the second value is shared memory.
 
Yes. This doesn't strictly mean it isn't a CPU or GPU problem, just that not all available cycles are being used... something is forcing execution to wait. The video analysis of the alpha that was posted earlier suggests some sort of depth buffer overhead problem, which, from my limited understanding of the potential issues, could be responsible for what I've observed.

One thing it's not is an I/O problem; that's negligible.



I tried all sorts of affinity combinations (not allowing the second logical processor on a physical core to be used, using one CCX, using one core per CCX, and so forth):
Source: https://www.youtube.com/watch?v=TI-QmVKeGiE


It's simply not using all available cycles unless I squeeze all the threads into a very small number of logical processors.
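For anyone who wants to repeat these affinity experiments without clicking through Task Manager every run, here's a minimal Python sketch using the psutil package; the process name and the assumption that SMT siblings are numbered 0/1, 2/3, ... are mine, so adjust for your own topology:

```python
# Rough sketch of the affinity experiments described above, using psutil
# (pip install psutil). Process name and core layout are assumptions.
import psutil

GAME_EXE = "EliteDangerous64.exe"  # hypothetical process name -- check yours

# Example mask: first logical processor of each physical core only,
# i.e. disallow the second SMT thread on every core.
physical_only = list(range(0, psutil.cpu_count(logical=True), 2))

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        proc.cpu_affinity(physical_only)   # pin the game to those logical processors
        print(proc.pid, "->", proc.cpu_affinity())
```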

I think the game is waiting around while it spends most of each frame testing and updating the depth buffer. Whatever hardware this is dependent on is evidently becoming bottlenecked before all cycles are used.

Edit:
Source: https://www.youtube.com/watch?v=sZczI65QrbI


Another one, simulating 1440p (with 1080p 1.5x SS) ultra. Approaching the settlement never maxed out any core or the GPU. I thought system memory latency could have been an issue, but the game doesn't seem very demanding in this regard... it never goes above about 7-8% of the bandwidth HWiNFO can reach in bandwidth-limited tests. So, I'm inclined to suspect it's the GPU's caches/memory that are being hammered. GPU memory controller usage only gets to ~50%, but that's still very high for a game, and I don't think it includes cache usage.

Videos are still processing, so they are hard to see right now, but they'll eventually be 4k60 when YouTube gets around to it.
Thank you for such in depth testing
 
I also don't understand why there's "shared memory" with the GPU, but there's a lot of things I don't understand about Odyssey.

The driver/WDDM can use system memory for assets that don't fit in local VRAM. Having to wait for that stuff to be accessed over PCI-E is much slower and could explain some of the low frame rate and low GPU utilization you saw.

It's hard to tell how much VRAM something actually needs (vs. what it would like to allocate), but 4GiB is evidently not enough for EDO at the settings you were using.
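If anyone wants to log actual numbers rather than eyeballing an overlay, here's a minimal sketch assuming an NVIDIA card and the pynvml bindings (pip install nvidia-ml-py); note it only reports dedicated (local) VRAM, not the WDDM shared pool that spills into system RAM:

```python
# Minimal dedicated-VRAM logger using pynvml. The WDDM "shared" pool
# is what Task Manager reports separately and isn't visible here.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 2**30:.2f} / {mem.total / 2**30:.2f} GiB")
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
```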
 
Well, I don't know. 144fps in space (my screen limit), but in ports/settlements I get around 24 fps, which is quite unplayable for combat.

Yes, I have everything on ULTRA at 1.0 supersampling; 1.25 and 1.50 don't make too much of a difference performance-wise.

AMD 5900X, 32GB of RAM + 2080 SUPER.

I was expecting something way more playable.

A shame.
 
The driver/WDDM can use system memory for assets that don't fit in local VRAM. Having to wait for that stuff to be accessed over PCI-E is much slower and could explain some of the low frame rate and low GPU utilization you saw.
As I alluded to earlier, an NVMe 2.0 Gen-4 drive with peak reads of 7,100MB/s and writes of 6,600MB/s seems to help with this bottleneck.

Maybe it's enough that I don't see a drop in fps. I'm happy with EDO. It could be a lot better in performance, no doubt.
 
i7-8700, GTX-1080TI.
was getting low FPS, then on another post I read that deleting your graphics config in %LocalAppData%\Frontier Developments\Elite Dangerous\Options (back it up just in case) gets rid of old shaders; after that you run EDO and reset your graphics options. I tried it and my FPS doubled. Also, changing supersampling from SSAA to MLAAx4 increased it more.
Now running on Ultra @ 2560x1440 with FPS in concourse @ 55-75.
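For anyone else trying this, a rough back-up-then-delete sketch in Python (the path is the one mentioned above; verify it on your own machine before running anything):

```python
# Back up and remove the Elite Dangerous Options folder so the game
# regenerates it (and its shaders) on next launch.
import os
import shutil

options = os.path.join(os.environ["LOCALAPPDATA"],
                       "Frontier Developments", "Elite Dangerous", "Options")
backup = options + ".bak"

if os.path.isdir(options):
    shutil.copytree(options, backup, dirs_exist_ok=True)  # keep a copy first
    shutil.rmtree(options)                                 # then delete the original
    print("Backed up to", backup, "and removed", options)
else:
    print("Options folder not found:", options)
```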
 
As I alluded to earlier, an NVMe 2.0 Gen-4 drive with peak reads of 7,100MB/s and writes of 6,600MB/s seems to help with this bottleneck.

Maybe it's enough that I don't see a drop in fps. I'm happy with EDO. It could be a lot better in performance, no doubt.
The thing is, I wonder about the age of some of these installs where people are having massive frame issues. I wonder what processes they're running in the background, or whether they've set the GPU to max performance in the Nvidia settings. Unless there's some hyperbole from some users, I'm getting decent frame rates from my system (Ryzen 5 2600, 2070 Super at 1440p): 50-70 on planet surfaces, near 200 in space, and 45-ish in stations. I don't think it's just FDev's fault. Sure, it could be more optimised, but it's launch time, etc.
 
i7-8700, GTX-1080TI.
was getting low FPS, then on another post I read that deleting your graphics config in %LocalAppData%\Frontier Developments\Elite Dangerous\Options (back it up just in case) gets rid of old shaders; after that you run EDO and reset your graphics options. I tried it and my FPS doubled. Also, changing supersampling from SSAA to MLAAx4 increased it more.
Now running on Ultra @ 2560x1440 with FPS in concourse @ 55-75.
One other thing I did was to copy the GraphicsConfiguration.xml from Horizons to Odyssey to see if the gamma and shaders worked better. I'm pleased, but be sure to back up the one in Odyssey before you overwrite it in case it does something bad you don't like.
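Roughly what that copy-with-backup looks like scripted, if it helps; the two install paths below are placeholders, so point them at wherever your Horizons and Odyssey products actually live:

```python
# Copy GraphicsConfiguration.xml from the Horizons install into the
# Odyssey install, keeping a backup of Odyssey's file first.
import shutil

horizons_cfg = r"C:\Games\Elite Dangerous\Products\elite-dangerous-64\GraphicsConfiguration.xml"          # placeholder path
odyssey_cfg  = r"C:\Games\Elite Dangerous\Products\elite-dangerous-odyssey-64\GraphicsConfiguration.xml"  # placeholder path

shutil.copy2(odyssey_cfg, odyssey_cfg + ".bak")   # back up Odyssey's file first
shutil.copy2(horizons_cfg, odyssey_cfg)           # then overwrite it with the Horizons one
print("Done; backup saved as", odyssey_cfg + ".bak")
```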
 
The thing is, I wonder about the age of some of these installs where people are having massive frame issues. I wonder what processes they're running in the background, or whether they've set the GPU to max performance in the Nvidia settings. Unless there's some hyperbole from some users, I'm getting decent frame rates from my system (Ryzen 5 2600, 2070 Super at 1440p): 50-70 on planet surfaces, near 200 in space, and 45-ish in stations. I don't think it's just FDev's fault. Sure, it could be more optimised, but it's launch time, etc.

That's some terrible performance for your rig though.
 
I was getting 60+fps everywhere with a not-quite-as-good rig although it dipped to 45 in the station after I faffed around with my appearance in the holo-me. All that lipstick is costing frames.
 
As I alluded to earlier, an NVMe 2.0 Gen-4 drive with peak reads of 7,100MB/s and writes of 6,600MB/s seems to help with this bottleneck.

Disk I/O is negligible except for initial loading and major transitions. 'Shared GPU memory' is system memory and doesn't touch any drives. If it did, performance would still be horrible, because both the GPU's PCI-E link and system memory bandwidth are already far in excess of the ~7GiB/s you can get out of the fastest M.2 SSDs in any remotely recent system.
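To put rough numbers on that (nominal peak rates, not measurements):

```python
# Back-of-envelope peak bandwidth comparison, in GB/s. Even the fastest
# consumer NVMe drive sits well below what the GPU link and system RAM
# can already move.
bandwidths_gbps = {
    "Gen4 NVMe SSD (peak sequential)": 7.0,
    "PCIe 3.0 x16 link (per direction)": 15.8,   # ~0.985 GB/s per lane x 16
    "PCIe 4.0 x16 link (per direction)": 31.5,
    "DDR4-3200 dual channel": 51.2,              # 3200 MT/s x 8 B x 2 channels
}

for name, gbps in bandwidths_gbps.items():
    print(f"{name:38s} ~{gbps:5.1f} GB/s")
```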
 
I've logged out with a comfortable 144 vsynced FPS in Horizons and logged back into 70 FPS in Odyssey in the same place. Very. Very. Sad.
I'll add to the ol' pile. Same here.

I used to play Elite at ultra at 144FPS; in Odyssey it hovers between 25-60 FPS on medium at 1080p in a lot of spots. Deep open space, or open areas devoid of anything but me, an SRV, and barren wastes, will yield about 60-90 on average (it fluctuates a lot, not steady at all like Horizons, which sat at 144 everywhere). I've considered setting it to low just to maintain 60FPS when I do anything outside of staring into the black... I used to play on ULTRA... at 144. Everywhere.

God forbid I look at a fire and watch my game crash from slowdown, lol. Worst offender is stations. Goes down to the 20's in some places there.

Remember, a 1060 is "recommended" not even minimum. They recommend running this on a 1060.

i7-8700K overclocked to 5GHz per core
RTX 2070 Super
32GB of DDR4 RAM at 3200MHz
M.2 NVMe
 
Disk I/O is negligible except for initial loading and major transitions. 'Shared GPU memory' is system memory and doesn't touch any drives. If it did, performance would still be horrible, because both the GPU's PCI-E link and system memory bandwidth are already far in excess of the ~7GiB/s you can get out of the fastest M.2 SSDs in any remotely recent system.
This. Disk read speeds are pretty negligible to overall performance once the initial loading screens are done. Once you're in-game and playing, it's far more dependent on RAM. So no, an old or slow disk drive is not going to cause massive fps fluctuations.

Remember, a 1060 is "recommended" not even minimum. They recommend running this on a 1060.

It's even funnier considering they increased the recommended spec from Horizons thinking that would cover their butt. And yet here we are with 20 AND 30-series Nvidia cards getting absolutely blasted with low fps.

Never mind recommended spec; a 1060 should be classified as "the minimum to at least get video output" for Odyssey.
 
I can independently confirm, after proceeding methodically through every graphics setting, that changing anti-aliasing to MLAAX4 made a marked improvement to my framerate, at least. It was interesting to read that others have had the same experience. Does MLAAX4 push anti-aliasing onto the CPU?

I can now run at 1440p on High w/ Ultra terrain and get 60 FPS (VSync) in space and ~40-50 on the surface, occasionally dipping a little lower in the station concourse (but staying above 30). Not brilliant, but MUCH better. I haven't tried a Frontline combat mission yet, but I have done a few scavenger clearances with 15+ NPCs and they were perfectly playable (staying above 30 FPS).

I've switched CPU/motherboard/RAM between the alpha and release (now on an i9-10850K w/ 32GB RAM), but I'm stuck on a 1660 Ti for the time being. Before making the MLAAX4 change this evening, I was regularly hitting the same 22 FPS floor with the i9 that I was getting on my i5-2500K (with a huge overclock) - and in 1080p!

What's with the flickering starfield though - it's driving me nuts!
 