EDO Update 9: how is your performance?

Updated my settings to the ones from Horizons, well, the bits I could change, as some of the settings aren't in Horizons. But it now runs like Horizons does, with ultra-to-high settings.

Upscaling on seems to help, with at least 40-59 FPS depending on where I am.

Low on-foot CZs seem to run flawlessly.
High on-foot CZs run like before: a lot of lag in the beginning before it smooths out.
Station concourses are OK at around 40 FPS,
along with planet walking.

So I conclude that it might be the number of NPCs in the high on-foot CZs that's causing the lag.
 
Yeah, I now (since update 9) have very similar performance between Horizons and Odyssey in space. The main difference is on the CPU side, as there are still some weird random CPU bottlenecks occurring that artificially limit the FPS. I use EDprofiler to keep the same graphics settings between the two, and I have found that for my system, if I push the GPU fairly hard (by setting SS=1.25) I get fewer CPU bottlenecks in ODY, and the general difference between ODY and Horizons appears smaller.

EDIT: i7-4790K / GTX 1080
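To see why nudging SS up "pushes the GPU fairly hard": the supersampling factor multiplies each screen axis, so the pixel workload grows with its square. A quick back-of-envelope sketch (function name is mine, resolution just an example):

```python
# Back-of-envelope: how a supersampling (SS) factor scales the GPU's
# per-frame pixel workload. SS multiplies each axis, so pixel count
# grows with the square of the factor.

def supersampled_pixels(width: int, height: int, ss: float) -> int:
    """Pixels the GPU must shade per frame at a given SS factor."""
    return round(width * ss) * round(height * ss)

base = supersampled_pixels(1920, 1080, 1.0)    # native 1080p
ss125 = supersampled_pixels(1920, 1080, 1.25)  # SS = 1.25

print(ss125 / base)  # 1.25^2 = 1.5625x more pixels per frame
```

So SS=1.25 is already a ~56% heavier GPU load, which is why it shifts the balance away from the CPU bottleneck.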
 
An update here,

Made it back home yesterday after 4 months in the UK, where I was able to play on my laptop (2080 Max-Q / i7-8750H / 32 GB RAM / NVMe / 144 Hz 1080p), and played it on my home rig (as per sig here). Night-and-day difference from when I left home; the game runs smooth and looks great!
 

In space (same position and all, just immediately flipping between Horizons and Odyssey), I saw half the framerate in Odyssey for the same quality settings (low vs. low, then ultra vs. ultra). I'd get 600 or so FPS in Horizons and 300 in Odyssey. Then on ultra, Horizons would give 300 and Odyssey about 150. VR on low was similar: 10 ms (90+ FPS) in Horizons, and about 14-15 ms in Odyssey.

Loading your graphics hardware with more and more raw data to process via supersampling (especially in VR) starts benchmarking your hardware rather than reflecting the software in question. So it's entirely expected that you'll see closer and closer numbers (depending on how bad the software actually is) when the two pieces of software are doing the same thing.
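For reference, the VR frame-time figures above convert to FPS via `fps = 1000 / frame_time_ms`. A small sketch (function names are mine):

```python
# Frame time (ms) <-> frames per second, as used in the VR numbers
# above: 10 ms per frame is 100 FPS (comfortably above a 90 Hz HMD
# refresh), while 14-15 ms is roughly 67-71 FPS.

def fps_from_frametime(ms: float) -> float:
    """FPS implied by a per-frame render time in milliseconds."""
    return 1000.0 / ms

def frametime_from_fps(fps: float) -> float:
    """Per-frame time budget (ms) needed to sustain a given FPS."""
    return 1000.0 / fps

print(fps_from_frametime(10.0))  # 100.0 FPS
print(fps_from_frametime(14.5))  # ~69 FPS
print(frametime_from_fps(90.0))  # ~11.1 ms budget per frame at 90 Hz
```

So the Odyssey frame times quoted above miss the ~11.1 ms budget a 90 Hz headset demands, while the Horizons ones stay inside it.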
 
Not necessarily because of the GPU, though.
In my case I found there was a CPU limit of about 90 FPS in ODY (over 80% average load on the CPU) that I don't have in Horizons (less than 40% load on the CPU), while the GPU load was similar between the two (around 80%).

It appears to me that in the cases where people see no improvement in ODY update 9, they may still be CPU limited.
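The reasoning above can be sketched as a simple classifier over average CPU/GPU utilisation. This is a hypothetical heuristic, not a tool from the game or any profiler, and the 90% threshold is an illustrative assumption:

```python
# Hypothetical sketch: guess which side is the bottleneck from
# average utilisation samples. Threshold is an assumption.

def likely_bottleneck(cpu_load: float, gpu_load: float,
                      threshold: float = 0.90) -> str:
    """Classify a capture as cpu-, gpu-, both-, or neither-bound."""
    if cpu_load >= threshold and gpu_load < threshold:
        return "cpu"
    if gpu_load >= threshold and cpu_load < threshold:
        return "gpu"
    if cpu_load >= threshold and gpu_load >= threshold:
        return "both"
    return "neither"  # e.g. frame cap, vsync, or a single hot thread

# The ODY case above: ~80% CPU, ~80% GPU, FPS capped at ~90.
print(likely_bottleneck(0.80, 0.80))  # "neither"
```

The "neither" result is exactly the interesting case: average load on both sides looks moderate, yet the framerate is capped, which points at something averages hide, such as one saturated thread.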
 

That would really depend on the nature of the process and what changed between updates 8 and 9. If the CPU task limiting framerate had nothing to do with what changed between the updates (i.e. it existed before and after in the same way), then people not seeing a difference in framerate between updates 8 and 9 could be not seeing one for any number of reasons. Perhaps they aren't playing in the places where the update impacted performance. Perhaps they're still hampered by other Odyssey performance issues. Perhaps they're GPU limited by their settings anyway.

Without knowing what such a CPU task is doing, and whether the changes between updates 8 and 9 affect it in any way, it's not really easy to know. And more CPU power may not be the answer for them either, as the task could easily just consume the extra cycles without improving framerate. It would really depend on what it's actually doing.

It would be rather informative if FDev would come out and say why Odyssey needs so much more raw performance to do exactly the same things Horizons did. It's not like they don't know after more than half a year. It's got to be an architectural change in the engine/graphics pipeline for it to remain this way for so long.
 
There is still an occasional CPU limitation (in this case one which is bound by a single thread with extreme transient loads) that's been around since the first public alpha. As other performance hangups start to fall away I'm convinced this is more frequently the cause of performance anomalies, especially on higher-end systems showing reduced GPU utilization.

Of course, there are other situations that may well be CPU limited, with or without the presence of the above. The game seems to be better threaded than it previously was, but also seems to have more overhead that is causing issues with many CPUs that have six or fewer logical cores.
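The "single thread with extreme transient loads" point is worth making concrete: one saturated logical core barely moves the average utilisation number most monitoring tools show. A quick illustration (per-core figures are made up):

```python
# Why a hot single thread hides in average CPU utilisation: one
# logical core pegged at 100% on an 8-thread CPU contributes only
# 12.5 percentage points to the overall average.

def average_utilisation(per_core_loads: list[float]) -> float:
    """Mean load across all logical cores, 0.0-1.0."""
    return sum(per_core_loads) / len(per_core_loads)

# One core saturated, seven nearly idle (illustrative numbers).
loads = [1.00, 0.10, 0.10, 0.10, 0.05, 0.05, 0.05, 0.05]
print(average_utilisation(loads))  # 0.1875 -> looks "barely loaded"
```

A capture averaging under 20% can therefore still be hard CPU-limited, which is why per-core graphs matter more than the headline number here.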
 
I'm currently mulling over the idea of upgrading from the i7-4790K to Alder Lake. Not that I'm an Intel fanboy, but an Alder Lake motherboard does seem relatively future-proof, and the new AMD boards are some way off. But regarding CPU and number of cores: I could go with the good-value six-core i5, or else the i7-12700 (I'd prefer non-K). Games seem to be using more cores recently, so would you say a six-core (12-thread) CPU is 'enough' for, say, a couple of years? (I'm interested mostly in flight sims.)
 

Six Golden Cove (Alder Lake P) cores will probably be sufficient for gaming for a while, especially if supported by E-cores as OSes get better at using them. The issue with trying to 'future-proof' an LGA-1700 system is having to choose between DDR4 and DDR5. The former is cheap, available now, and currently not all that much slower in most tasks. DDR5 will, however, quickly outpace DDR4 in capacity and performance over the next few years, but is currently hard to find and extremely overpriced.

Also, there are currently no non-K Alder Lake desktop parts.
 
Sure, I can wait until January-February for the non-K parts, and I'm more or less resigned to getting DDR4 RAM, probably with a motherboard that is also DDR5-compatible to allow an upgrade path. You say it "will probably be sufficient for gaming for a while, especially if supported by E-cores as OSes get better at using them", and this is where I am undecided, as the current choice is (from memory):
- 6 P-cores and no E-cores (i5)
- 8 P-cores and 4 E-cores (i7), for about 110-130 euros more where I live

With the 6 P-cores I can also probably forget about Windows 11 for a while too.
 
The current i5s have four E-cores, but the i5-12400 should be P-cores only and will likely be a great gaming chip for the money.

I'm skeptical about the practical utility of DDR4/5 combo boards. They will narrow your choices considerably while eating into the capacity, and likely the peak performance, potential of either standard. If you are considering one of these, just getting a DDR4 board is probably the wiser choice.
 
I'm skeptical about the entire concept of P and E cores,
especially in the gaming field.

Why would anyone want an 8P+4E setup when they can get 8C/16T or even 10C/20T of high-performance cores?
I mean, someone slapping a 250 W GPU in a system would not care about any power-saving opportunities offered by the E-cores.

I can't stop thinking about the flops in Intel's history: Itanium, P4 Prescott, or, why not, AMD's Bulldozer.
 
A lot will depend on the software and how it uses the cores, I expect.
Currently I also doubt that efficiency cores are of any use for gaming, but they may be useful for those of us who like to stream or record while we play, or else have browsers active, etc.
 
I will update my comment: yes, it's better; yes, I get 60+ FPS. HOWEVER, sometimes I still get a slideshow,
mainly on the ramp of my ship at stations, in elevators, and moving in and out of buildings, and that is very annoying.
 

Heterogeneous parts are the future, and within the next two or three generations only very specialized CPUs will lack low-power cores. There is only one die flavor of Alder Lake that lacks E-cores, and that's the budget six-core desktop part. On all the other parts the E-cores are physically present (even if some parts have them fused off) and, if used correctly, can offer more aggregate performance than the P-cores alone. In properly threaded tasks that can't leverage AVX-512, they also offer more performance for a given die area than the P-cores.

Right now the E-cores can be a liability with schedulers that aren't aware of them or have to use tricks like NUMA to use them. It's frequently better to just disable them, but that will change.

Anyway, someone has to be first, and their presence isn't a disadvantage. Sure, Intel could have omitted them from the larger die and fit another pair of performance cores in there, but that wouldn't help lightly threaded performance and would still result in a part that fell short of the competition's top parts in multi-threaded performance. The E-cores give the potential for the best of both worlds and minimize the number of die flavors that have to be built. In the future, multi-chip parts will alleviate that issue, but until then, the number of configurations it's economical to build is limited.
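The "more performance for a given die area" claim can be put into rough numbers. The figures below are illustrative assumptions (commonly cited ballparks, not measurements): a cluster of four E-cores occupying about the area of one P-core, with each E-core delivering around half a P-core's throughput on well-threaded work.

```python
# Back-of-envelope throughput-per-area comparison.
# Both constants below are assumptions for illustration only.

P_CORE_THROUGHPUT = 1.0   # normalised per-core throughput
E_CORE_THROUGHPUT = 0.5   # assumed ~half a P-core per E-core
E_CORES_PER_P_AREA = 4    # assumed 4 E-cores fit in one P-core's area

throughput_per_area_p = 1 * P_CORE_THROUGHPUT
throughput_per_area_e = E_CORES_PER_P_AREA * E_CORE_THROUGHPUT

print(throughput_per_area_e / throughput_per_area_p)  # 2.0x per area
```

Under those assumptions, trading one P-core's area for an E-core cluster roughly doubles aggregate throughput in embarrassingly parallel workloads, which is the economics behind the hybrid layout.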
 

Still, why should I choose an Intel with 8P+4E (20T) instead of an AMD with 12C/24T, all high-performance cores (and let's not forget that AMD has 16C/32T CPUs too)?
Talking strictly about gaming and/or other HPC duties.

It seems to me that Intel simply cannot compete with AMD's superiority in terms of core density and came up with a "radical" solution.
I'm well aware that ARM SoCs have been using this fast+slow core setup for quite some time, and it totally makes sense in the mobile (low-TDP) market.
But it doesn't seem to make much sense in the gaming/HPC market.
 
I wouldn't choose either at present.

If I had to do a build now I would go AMD.

But as long as I have a reasonably decent box at present I would wait for PCIe 5 to mature before selecting components for a new build. Let that play out before sinking money into a new build if possible.

My 5900X build with a 3090 is giving me great performance and is well integrated with known stable high performance PCIe 4 components. It's been the perfect upgrade from my old Core i7-950 box.

I never drop below 60 FPS anywhere in EDO. I run it on a 38" 3840×1600 screen @ 120 Hz and get a consistent 120 FPS, except at the usual EDO problem sites where it can drop to 60 occasionally.

But the sad truth is that it's too expensive for the typical gamer to get acceptable performance in EDO.
 
Still, why should I choose an Intel with 8P+4E (20T) instead of an AMD with 12C/24T, all high-performance cores (and let's not forget that AMD has 16C/32T CPUs too)?
Talking strictly about gaming and/or other HPC duties.

By and large, the i7-12700K(F) is mildly faster than the Ryzen 9 5900X. The Golden Cove P-cores match or best Zen 3 in IPC and clock slightly higher, which lets the part win in gaming. The extra Gracemont E-cores are enough to make the i7 a match in well-threaded non-gaming tasks, where it would otherwise lose with just P-cores.

It seems to me that Intel simply cannot compete with AMD's superiority in terms of core density and came up with a "radical" solution.

They can't, and they won't until they reach process parity with TSMC (a ways off, but not as far as 'nanometer' marketing would imply), or have TSMC build their parts, and adopt MCM (which they are well on their way to doing), as AMD has. But the big/little heterogeneity is not really radical, and it is something AMD will be using by Zen 5.

I'm well aware that ARM SoCs have been using this fast+slow core setup for quite some time, and it totally makes sense in the mobile (low-TDP) market.
But it doesn't seem to make much sense in the gaming/HPC market.

There is no real gaming CPU market (it's a subset of the consumer market in general), and minimizing the number of die flavors is a major push for all CPU designers, so that as few parts as possible can be used in as many different products as possible. If a design is going to go into low-power systems or laptops, it's going to have E-cores before long, and unless the same parts can't scale, they will also be found in the highest-end consumer hardware. MCM alleviates this somewhat, but we are still a ways away from being able to subdivide a CPU into granular enough tiles to make it economical to tailor parts to niche markets.
 