Odyssey - CPU limited or GPU limited?

Even running off an M.2 NVMe SSD with a 2.4GB/s read speed, alongside 32GB RAM?
Maybe! I'm just speculating here, but even an NVMe is appreciably slower than reading direct from memory. That said, you say you see

just a second or two of 8 or 9 fps
On my system, during the time I had the Odyssey install moved over to magnetic disk, those stalls were 10+ seconds of ~5 FPS. Now that it's back on NVMe (although not as speedy as yours) the stalls are still worse than what you're seeing, but not nearly as bad. Incidentally, more likely than not the memory that matters here is the GPU onboard memory, not the size of main system memory.
 
Ahhh makes a bit more sense! Thanks for explaining :)
 
I managed to get my GPU to use all of its power by changing the settings in the AppConfig.xml file.

More threads, larger render and kernel sizes.

Before the change, my GPU would only hit about 50% utilization; now I'm using all of its power, especially when using FSR.
Normal (non-FSR) rendering was OK as well.
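
In case it helps anyone reproduce that, here's a rough sketch of how such an edit could be scripted. To be clear, the element and attribute names below ("General", "WorkerThreads", "RenderTargetScale") are placeholders I made up for illustration, not confirmed AppConfig.xml keys; check your own file for the real entries and back it up first.

```python
# Hypothetical sketch: the element/attribute names below are placeholders,
# NOT confirmed AppConfig.xml keys. Back up the file before editing.
import xml.etree.ElementTree as ET

CONFIG = "AppConfig.xml"  # lives in the game's install folder

tree = ET.parse(CONFIG)
root = tree.getroot()

# Locate the section holding the tuning values (placeholder name).
general = root.find("General")
if general is not None:
    general.set("WorkerThreads", "8")        # placeholder key: thread count
    general.set("RenderTargetScale", "1.0")  # placeholder key: render size
    tree.write(CONFIG)
```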
 
The last patch improved things a bit, but only marginally. It's still terrible, and people can't just upgrade to a 3060 Ti or 3070 this time.
 
I want to know about the benefits from the number of cores. Old threads talked about Horizons using as many cores as you could throw at it. Wondering if there will be any noticeable upgrade in VR from swapping out my i5 8600K for an i9 9900K to accompany my new 3070 Ti. Pretty much the same chip, same clock speeds, a bit bigger cache, but twice the cores.
I have just changed exactly that. Fired it up 10 minutes ago. Massive difference in FPS in stations on foot. At high settings on 1440p ultrawide it's above 100 all the time.
 
As a comparison, I now have Odyssey and completed the intro mission (10-30 FPS in most places, only dipping to ~10 when the hostiles turned up). It was playable for me, to be honest. Now that I'm back in my actual ship where I left off, I'm getting 20-60 FPS in La Rochelle, and this is a Vega 10 Ryzen 7 mobile laptop APU :)
 
Duration of the low performance is not at all relevant to my points.

You aren't increasing polling rate to reveal a frame rate drop, but a CPU limitation caused by patterns of CPU load that get averaged away over longer polling intervals. If the game engine loads a couple of render/worker threads to 100% for half of the time it takes to issue its draw calls, then falls to near idle for the rest of the time the frame is being presented, that's still a CPU limitation and is still long enough to significantly impact frame rates, even if the average logical CPU load over the span of one full second is far below 100%.

If I play in settlements at 1440p or lower, I am completely and persistently CPU limited, even with a ~5GHz 5800X, and this CPU bottleneck is essentially invisible at typical polling rates.
If micro spikes in thread usage are so fast they almost don't register at all in hardware monitoring, and they are tied to low performance, then it's clearly an optimization issue and not any of our hardware.

Not really sure what kinds of conclusions you are seeking by determining how fast the CPU spikes are.
 
If micro spikes in thread usage are so fast they almost don't register at all in hardware monitoring, and they are tied to low performance, then it's clearly an optimization issue and not any of our hardware.

These are not mutually exclusive things. Bad optimization does not in any way imply that faster hardware cannot improve performance. These are optimization issues that result in greater demands on hardware.

If your GPU utilization is not maxed out, you are almost certainly CPU limited, even if monitoring software with longer polling intervals fails to report the true extent of CPU utilization. This can be demonstrated with faster polling rates, or by altering CPU performance. If there wasn't a CPU bottleneck, then swapping out my 3900X for my 5800X wouldn't have resulted in a performance increase, and neither of those CPUs would have outperformed my 5820K setup in non-GPU limited areas...but it did and they do, in all non-GPU limited scenarios, even where no single logical core was hitting maximum utilization as reported by more typical polling rates.
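
If anyone wants to check this on their own system, a quick way (NVIDIA cards only, and the 90% cutoff is just an arbitrary illustrative threshold) is to poll nvidia-smi and watch for the dips:

```python
# Poll GPU utilization once a second via nvidia-smi (NVIDIA only) and
# flag samples where the GPU clearly isn't the bottleneck.
import subprocess
import time

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    util = int(out.stdout.strip().splitlines()[0])  # first GPU only
    note = "" if util >= 90 else "  <- not GPU bound; likely waiting on CPU"
    print(f"GPU utilization: {util:3d}%{note}")
    time.sleep(1)
```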

Not really sure what kinds of conclusions you are seeking by determining how fast the CPU spikes are.

I wasn't seeking any particular conclusion. The conclusion I came to, as a result of the preponderance of evidence, is that the game is essentially always CPU limited when GPU utilization dips, even if that doesn't seem to be the case. The high amount of jitter in CPU utilization explains how things like Task Manager can report low per-core CPU utilization even in situations where the game is clearly waiting on the CPU.

The assertion that the game isn't CPU limited because whatever software one is using to report CPU utilization doesn't show any logical cores reaching full utilization is fallacious. The CPU limitation is still there, and the game is still being forced to wait for the CPU to finish work that could be finished faster; those spikes in load are just being averaged away over long polling intervals. This should not be any harder to understand than low total CPU utilization being a bad indicator of the absence of a CPU bottleneck on a multi-core part--just as lightly threaded loads aren't going to be well distributed across many-core CPUs, highly bursty/spiky/jittery/transient loads aren't going to be well distributed through time. A faster CPU will still shorten the duration of those spikes and still improve overall performance, to some degree.
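
To put the averaging argument in concrete terms, here's a toy model (the 4 ms burst per 16 ms frame is a made-up load pattern, not a profile of the actual game) of how the same bursty thread reads at different polling intervals:

```python
# Toy simulation of the averaging argument: a render thread that is
# pegged at 100% for 4 ms of every 16 ms frame, then idle. Sampled at
# a 1 s interval the load "disappears"; at 1 ms it's plainly visible.
busy_ms_per_frame = 4   # thread fully busy for 4 ms...
frame_ms = 16           # ...of every 16 ms frame (~60 FPS)

def sampled_peak_load(window_ms: float) -> float:
    """Max per-window average utilization seen at this polling interval."""
    if window_ms >= frame_ms:
        # Long windows span whole frames: bursts average away.
        return busy_ms_per_frame / frame_ms
    # Short windows can land entirely inside a burst.
    return min(1.0, busy_ms_per_frame / window_ms)

for interval in (1000, 100, 16, 1):
    print(f"{interval:5d} ms polling -> peak reported load "
          f"{sampled_peak_load(interval):4.0%}")
```

A 1 s poll reports 25% and looks fine; a 1 ms poll catches the thread pegged at 100%, which is exactly the hidden bottleneck described above.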
 
Edit: I may have misread the OP's CPU, my bad. Still, in general the reminder about the requirements may help someone.

I may have a unicorn experience, with all the fancy PC setups reporting issues… but my rig closely meets (it is a tad better than) the system requirements recommended by Frontier, and since Update 6 it runs reasonably well. There are still some FPS drops on foot, measurable but not necessarily disturbing while playing.

So I believe the OP may have a good reason to update the CPU, as it seems to be far below spec while the GPU is overkill already. I mean, there was a design goal for FDEV and it materialized into the recommended requirements. Anything below that can be expected to run not so well.

Reminder:

Elite Dangerous: Odyssey Recommended Requirements

  • CPU: Intel Core i5-8600 / AMD Ryzen 5 2600
  • RAM: 12 GB
  • OS: Windows 10 64bit
  • VIDEO CARD: NVIDIA Geforce GTX 1060 (6 GB VRAM) / AMD RX 580 (8 GB VRAM)
  • PIXEL SHADER: 5.1
  • VERTEX SHADER: 5.1
  • FREE DISK SPACE: 75 GB
  • DEDICATED VIDEO RAM: 6 GB

I don't think my CPU is all that far below the recommended system. From the page below, there appears to be only a few percent difference at normal clocks, and since the 6700K is evidently very overclockable, it could actually beat out the i5-8600 with some overclocking. (I haven't overclocked mine yet, but I guess I would be willing to consider it if it was going to make a significant difference for EDO.)

https://cpu.userbenchmark.com/Compare/Intel-Core-i7-6700K-vs-Intel-Core-i5-8600/3502vsm477251
 
An i5-8600K (let alone the non-K version) is not a dramatic upgrade over an i7-6700K. 6th through 10th gen Intel Core parts are all Skylake derivatives with trivial generational IPC increases; the biggest improvements came from more cores, larger caches, and higher clock speeds.

A 6700K, though getting a little lean on core count, is still a very competitive gaming part. Overclocking will help--it probably won't be a linear increase, and any uplift obviously will be limited to non-GPU limited areas, but it will be an increase nonetheless. Whether you can OC enough for it to be worthwhile is another matter. An experienced OCer with a solid CPU sample and good cooling could reasonably expect to get a 20% OC out of a 6700K...which might translate into a 10-15% increase in non-GPU limited frame rate in EDO. A first time OCer is probably going to get less, unless they want to risk compromising stability.
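
To put rough numbers on why that 20% OC doesn't become 20% more FPS: assume, purely illustratively, that only ~60% of each frame's CPU time actually scales with core clock (the rest being memory latency, GPU work, and other overhead):

```python
# Back-of-envelope for why a 20% overclock rarely means +20% FPS.
# The 60/40 split below is a made-up assumption, not a measurement.
base_frame_ms = 20.0        # 50 FPS to start with (illustrative)
clock_scaling_share = 0.6   # fraction of frame time that scales with clock
oc_factor = 1.20            # 20% core overclock

scaled = base_frame_ms * clock_scaling_share / oc_factor
fixed = base_frame_ms * (1 - clock_scaling_share)
new_frame_ms = scaled + fixed

print(f"{1000/base_frame_ms:.1f} FPS -> {1000/new_frame_ms:.1f} FPS "
      f"({1000/new_frame_ms/(1000/base_frame_ms)-1:+.0%})")
```

With those made-up numbers, a 20% overclock comes out to roughly +11% frame rate, squarely in the 10-15% ballpark above.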
 
Thanks, that's pretty much what I thought. I am definitely NOT an experienced overclocker (have never done it... just read about it), so I wouldn't want to push it very hard, especially since I would not want to appreciably shorten the life of the processor either. I was hoping I might be able to at least get it up to around 4.6 GHz if this turns out to be something that would make a noticeable difference for EDO. I guess I won't be able to know that until I bite the bullet and buy the game (maybe after I see what the next patch brings to the performance table).
 
I don't think my CPU is all that far below the recommended system. From the page below, there appears to be only a few percent difference at normal clocks, and since the 6700K is evidently very overclockable, it could actually beat out the i5-8600 with some overclocking. (I haven't overclocked mine yet, but I guess I would be willing to consider it if it was going to make a significant difference for EDO.)

I got a noticeable difference in the concourse (like 5 FPS, lol) by deactivating and reactivating Turbo Boost on my i7-9750H (running in a laptop with a GTX 1660 Ti).
That was (IIRC) 40-45 FPS with Turbo Boost (4+ GHz) vs 35-40 FPS without it (locked at 2.6 GHz).
But my laptop was running a bit cooler (without Turbo Boost, that is).

In your case, I'm not sure you will see a noticeable difference by overclocking.

Anyway, the i7-6700K is still a solid performer.


Edit: that was some updates ago; Update 4, IIRC...
 
I don't think my CPU is all that far below the recommended system. From the page below, there appears to be only a few percent difference at normal clocks, and since the 6700K is evidently very overclockable, it could actually beat out the i5-8600 with some overclocking. (I haven't overclocked mine yet, but I guess I would be willing to consider it if it was going to make a significant difference for EDO.)

https://cpu.userbenchmark.com/Compare/Intel-Core-i7-6700K-vs-Intel-Core-i5-8600/3502vsm477251
Sorry, but I disagree. Having just put in a 9900K, coming from an 8600K at 4.8 GHz, the difference is enormous. For starters, the CPU you have is 2 cores short of the recommended spec. The general trend now is more cores, so just update to an 8-core-plus CPU and you will see differences across the board.
 
I upgraded my i7-6700K to an i9-10900F recently and saw a modest improvement in Odyssey; mind you, I'm running a 3440x1440 monitor and this has an impact.

In other games I saw more improvement and much better 1% lows. Running multiple applications isn't a bother anymore, and it was an issue on the 6700K. Don't expect big gains from upgrading, but games will run smoother.

I would love to upgrade my GTX1080 but that is not an option now.

I went with the F model instead of the K because, if you unlock its power limits, it runs very close to the speed of the K model.

If your CPU utilization is in the high 80s and above, then I would suggest that you upgrade.
 
Sorry, but I disagree. Having just put in a 9900K, coming from an 8600K at 4.8 GHz, the difference is enormous. For starters, the CPU you have is 2 cores short of the recommended spec. The general trend now is more cores, so just update to an 8-core-plus CPU and you will see differences across the board.
That's misleading hyperbole.

You upgraded from 6 cores to 8 cores and 16 threads (hyperthreaded, but still)... and it's a difference you call enormous?
Based on your posts... you were running an 8600K paired with a Radeon 6800 XT and you were getting 80 FPS.
Then you upgraded from the 8600K to a 9900K and you get over 100 FPS.

I would not call going from 80 to 100 FPS enormous. Sure, it's an improvement, but anything above 60 FPS is good enough (assuming it's stable at 60).

I would call an improvement from 30 FPS to 60+ enormous.
From 60 to 90 is not that much, and from 80 to 110 it's even less.
I'm not talking about the raw numbers but the perceived improvement in gameplay.
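
Frame times make this clearer than raw FPS does (plain arithmetic, nothing assumed):

```python
# The "perceived improvement" point in frame-time terms: equal-looking FPS
# jumps shave off very different amounts of per-frame latency.
for old_fps, new_fps in [(30, 60), (60, 90), (80, 110)]:
    saved_ms = 1000 / old_fps - 1000 / new_fps
    print(f"{old_fps:3d} -> {new_fps:3d} FPS: "
          f"{1000/old_fps:5.1f} ms -> {1000/new_fps:5.1f} ms "
          f"(saves {saved_ms:4.1f} ms per frame)")
```

The 30-to-60 jump removes nearly 17 ms of latency per frame; the 80-to-110 jump removes barely 3 ms, which is why it feels so much smaller.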

The poster you quote is already running 8 logical cores (4 physical, again hyperthreaded) and is wondering if overclocking would make a significant difference... which it would not.

I upgraded my i7-6700K to an i9-10900F recently and saw a modest improvement in Odyssey; mind you, I'm running a 3440x1440 monitor and this has an impact.

That's more like it...
 