Odyssey - CPU limited or GPU limited?

I would not call going from 80 to 100 fps "enormous". Sure, it's an improvement, but anything above 60 fps is good enough (assuming it's stable at 60).

As someone who has run 144 Hz monitors for years, I can tell you there is a game-impacting visual improvement that is noticeable any time you can run at 100 fps, even in Odyssey. Lines are thinner and don't expand when you move.

There is a blurriness that gradually drops away the further you get from 60 fps. Years ago, when playing Battlefield 4, there was a map that had an area sectioned off with bars like a jail. At 60 Hz, when you sat still everything looked normal, but when you moved, the bars thickened up so much that they obscured what was behind them.

Increasing the frame rate gradually reduced the tendency to thicken, and by 100 Hz it was gone. Everything was crystal clear and the gameplay was very smooth.

EDH won't really show a difference, but EDO, because it has first-person gameplay, will.
 
As someone who has run 144 Hz monitors for years, I can tell you there is a game-impacting visual improvement that is noticeable any time you can run at 100 fps, even in Odyssey. Lines are thinner and don't expand when you move.

I've been running a laptop with a 144 Hz display for almost 3 years.
30 fps is playable, but noticeably bad.

60 is waaay better.
80-100 is still better than 60, but way less noticeable.
100 vs 144? Even less.

At least that's how it is for me.
 
I have played Elite on a few machines and found that Odyssey runs the GPU at 100% on all of them.
I would hang fire until they get it optimised, as currently it's pretty dire, especially on the ground.

Anyone else getting a bit tired of this beta test?
 
Sorry, but I disagree. Having just put in a 9900K in place of an 8600K at 4.8 GHz, the difference is enormous. For starters, the CPU you have is two cores short of the recommended spec. The general trend now is more cores, so just upgrade to an 8-core-plus CPU and you will see differences across the board.

The difference between an 8600K and a 9900K is much larger than the difference between a 6700K and an 8600K.

ED will spawn and load a lot of threads, but it's only going to be maxing out two or three. 4c/8t vs. 6c/6t is mostly a wash. 8c/16t is a big improvement over either, and there are other relevant factors to performance beyond core count. Cache is a big one; the 6700K has 8MiB of L3, the 8600K has 9MiB, while the 9900K has 16MiB.

All other things being equal, EDO will like a few more logical cores than a 6c/6t or 4c/8t CPU can provide, but taking only core count into account, or assuming it was the key difference in the uplift you saw, can be misleading.

I would not call going from 80 to 100 fps "enormous".

That's a 25% increase. It's quite large for a CPU upgrade and more than enough to be immediately noticeable. It's also the difference between 48 and 60 fps in a CPU limited scenario.
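
For reference, the arithmetic (a trivial sketch, just to show where the 25% and the 48-to-60 figures come from):
[CODE]
# 80 -> 100 fps is a 25% uplift; apply the same relative gain to a
# hypothetical CPU-limited 48 fps scene and you get 60 fps.
uplift = (100 - 80) / 80
print(f"{uplift:.0%}")    # 25%
print(48 * (1 + uplift))  # 60.0
[/CODE]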

OP wouldn't see that kind of increase from a modest OC, but it would be something and might even be enough to be the difference between acceptable and not in edge cases...it was when I swapped out my 3900X for my 5800X.

The poster you quote is already running 8 cores (again hyperthreaded)

Logical cores derived from SMT are not equivalent to physical cores. A 4c/8t part is going to be very roughly equivalent to 5.5 cores without SMT in total aggregate performance, depending on workload. That said, even EDO doesn't really start to chug from purely low core count until a little below that (I only really started noticing serious performance drop-offs as I was disabling cores once I got down to 4c/4t or 3c/6t...but this was in an area without much AI activity).
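
As a back-of-the-envelope check (the ~35% SMT yield is my assumption, not a measured figure, and it varies heavily by workload):
[CODE]
# Rule-of-thumb sketch: each SMT sibling adds roughly 30-40% of a physical
# core's throughput, which puts a 4c/8t part at about 5.2-5.6 "core equivalents".
physical_cores = 4
smt_yield = 0.35  # assumed average uplift per SMT thread
print(physical_cores * (1 + smt_yield))  # ~5.4
[/CODE]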
 
That's a 25% increase. It's quite large for a CPU upgrade and more than enough to be immediately noticeable.

Well, yes. But that gain came not only because of the CPU upgrade.
I'd say that a good deal, if not most, of the gain came from unlocking the potential from that 6800XT it was paired with.

Logical cores derived from SMT are not equivalent to physical cores. A 4c/8t part is going to be very roughly equivalent to 5.5 cores without SMT in total aggregate performance, depending on workload.

I know very well that hyperthreading does not double the processing power.
And I'm tempted to say that the i7-6700K (4c/8t) is pretty well matched by the i5-8600 (6c/6t) - of course, with no overclock on the K part.

Anyway, my trigger was the bombastic claim that upgrading the CPU alone yielded an enormous fps increase. It's not that enormous, and it's not due to the CPU alone.


So IMO it's currently hard to say whether EDO is GPU or CPU limited.
I'd say it's "unoptimized limited", so it really needs a top CPU and a top GPU to run well enough at 1440p ultrawide or at 4K.

Let's see what Update 7 brings to the table (and the recent dev update 3 shows that an Update 8 is in the works too; most probably we will have it in November if the current trend holds).
 
Well, yes. But that gain came not only because of the CPU upgrade.
I'd say that a good deal, if not most, of the gain came from unlocking the potential from that 6800XT it was paired with.

Same thing.

There is always a bottleneck somewhere. If the CPU was the only meaningful change, the entirety of the observed performance uplift came from the CPU, which could only happen if the GPU had untapped potential. Likewise, a faster GPU is only going to increase performance where the CPU isn't completely tapped out on the cores handling the render threads that issue draw calls, and where every other dependency is being satisfied on time.
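
A toy model of that reasoning (illustrative only; the frame times are made up):
[CODE]
# Whichever stage takes longer per frame sets the frame rate; speeding up the
# other stage changes nothing until the bottleneck moves.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

print(fps(cpu_ms=12.5, gpu_ms=10.0))  # 80.0  - CPU limited; a faster GPU is wasted
print(fps(cpu_ms=10.0, gpu_ms=10.0))  # 100.0 - CPU upgrade unlocks the GPU's headroom
[/CODE]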

So IMO it's currently hard to say whether EDO is GPU or CPU limited.

It depends on the scene and hardware involved. EDO is essentially always GPU limited for me at 4k ultra...I have great difficulty finding any scene where GPU utilization falls below 99% for more than a split second (loading pauses). At 1440p ultra, it's more often GPU limited, but there are definitely areas/views that clearly illustrate a CPU limitation (where frame rate scales with CPU clock, but not GPU clock) and GPU utilization falls. At 1080p my usual system is more CPU limited than not, with the GPU rarely being fully loaded.

I'd say it's "unoptimized limited", so it really needs a top CPU and a top GPU to run well enough at 1440p ultrawide or at 4K.


EDO certainly becomes bottlenecked at lower frame rates than would be expected, which is where the lack of optimization reveals itself.

It's also highly variable, from scene to scene or activity to activity where the bottleneck is. Horizons was much more commonly GPU limited with setups that would be considered well balanced in other games. I have ten year old CPUs that aren't really ever the limiting factor in Horizons, even when they are paired with an RTX 3080. Few other games are ever as CPU dependent as Odyssey seems to get, excepting real-time strategy titles or some sandbox games.

It's still not terribly difficult to isolate where the limitation predominantly is, within a given scene, in EDO. If the GPU isn't fully loaded, it's not the GPU. If it's not the GPU, it's much more likely to be the CPU than anything else, or at least predominantly the CPU, even if some measurements fail to reveal a CPU bottleneck directly. As you know, all-core utilization figures are misleading, and even per-core utilization can be obfuscated by excessively long polling intervals. The only surefire way to unequivocally demonstrate a CPU limitation is to manipulate CPU clock and see how it affects frame rate. If there is no CPU bottleneck, there will be no change in frame rate.
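
If you want to see past the averaged figures, something like this sketch works (psutil with a short sampling interval; the 0.1 s interval and 30-second window are arbitrary choices, not anything EDO-specific):
[CODE]
# Sample per-logical-core utilization at a fine interval while the game runs,
# then report each core's peak. Coarse 1-2 s polling tends to average away the
# bursts that reveal one or two render threads being pegged.
import psutil

samples = []
for _ in range(300):  # ~30 seconds at 0.1 s per sample
    samples.append(psutil.cpu_percent(interval=0.1, percpu=True))

for core, history in enumerate(zip(*samples)):
    print(f"core {core}: peak {max(history):.0f}%")
[/CODE]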
 
The only surefire way to unequivocally demonstrate a CPU limitation is to manipulate CPU clock and see how it affects frame rate. If there is no CPU bottleneck, there will be no change in frame rate.
On my laptop - i7-9750H (6c/12t), GTX 1660 Ti, 16 GB RAM (2666, CL15), playing only at 1080p - where do you reckon the bottleneck is?
 
I've seen my CPU throttle back its clocks dynamically in game because of a lack of load;
that's when FPS is 250ish.
I'd have to say the GPU is always under more load in this game, even just looking at average temps between the two.

If I needed to choose one thing to upgrade on a budget, for most games it's usually the GPU, and this isn't much different.
 
On my laptop - i7-9750H (6c/12t), GTX 1660 Ti, 16 GB RAM (2666, CL15), playing only at 1080p - where do you reckon the bottleneck is?

Going to depend on scene. Given how variable the clocks of most mobile CPUs are, I don't think I could even hazard a guess, even knowing the specific scene, unless it was at one extreme or the other; multiple layers of transparency (starport concourses) are almost always GPU, but huge numbers of unculled vertices or a lot of simultaneous NPCs are usually CPU.

Assuming your laptop's CPU isn't already severely power or temperature limited, you can cap peak CPU frequency in the power management options to test for a CPU bottleneck.
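
On Windows, you can also do this from a script instead of the power options GUI; a rough sketch using powercfg (the 50% cap is just an example value, and SCHEME_CURRENT/SUB_PROCESSOR/PROCTHROTTLEMAX are the standard aliases for the active plan's maximum processor state):
[CODE]
# Cap the maximum processor state, test frame rate in the same scene, then restore.
import subprocess

def set_max_cpu_state(percent: int) -> None:
    subprocess.run(["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
                    "SUB_PROCESSOR", "PROCTHROTTLEMAX", str(percent)], check=True)
    subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)

set_max_cpu_state(50)    # throttle the CPU and note the frame rate
# ...run the comparison in-game...
set_max_cpu_state(100)   # restore normal clocks
[/CODE]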

I'd have to say the GPU is always under more load in this game, even just looking at average temps between the two.

This isn't representative of much, as any CPU with sufficient cores is only going to be significantly loaded on a handful of them, even in a completely CPU limited scene. It also assumes roughly equivalent thermal behavior relative to load, which may not always be the case.
 
Going to depend on scene. Given how variable the clocks of most mobile CPUs are, I don't think I could even hazard a guess, even knowing the specific scene, unless it was at one extreme or the other; multiple layers of transparency (starport concourses) are almost always GPU, but huge numbers of unculled vertices or a lot of simultaneous NPCs are usually CPU.

Assuming your laptop's CPU isn't already severely power or temperature limited, you can cap peak CPU frequency in the power management options to test for a CPU bottleneck.

This is the original post:

to recap:

EDO, concourse

GPU utilization 70-100%
CPU utilization 30-60%

The GPU was running no higher than 1755 MHz, but most of the time at 1700 MHz or slightly under (that's still over the base frequency of the GTX 1660 Ti).
The CPU was running variably at 4-4.3 GHz.

After I capped the CPU to 2.6 GHz, the GPU was running steadily over 1800 MHz.
Still, I lost about 5 fps (10-15%) from the 40-45 fps I was getting at the time in the concourse.

And for a while I did run like that, since the laptop was cooler and less noisy.
 
This is the original post:

to recap:

EDO, concourse

GPU utilization 70-100%
CPU utilization 30-60%

The GPU was running no higher than 1755 MHz, but most of the time at 1700 MHz or slightly under (that's still over the base frequency of the GTX 1660 Ti).
The CPU was running variably at 4-4.3 GHz.

After I capped the CPU to 2.6 GHz, the GPU was running steadily over 1800 MHz.
Still, I lost about 5 fps (10-15%) from the 40-45 fps I was getting at the time in the concourse.

And for a while I did run like that, since the laptop was cooler and less noisy.

Perforce, if running the CPU faster improves performance, you are at least partially CPU limited. In scenarios where the CPU is not the limiting factor, changing CPU performance does nothing to frame rate, in and of itself, unless you reduce CPU performance so much that you become CPU limited. This is the status quo in Horizons, with almost any high-end GPU.

However, running on laptop components with tight power and cooling budgets adds another element of variability that can be hard to account for. The GPU can be throttled without reducing reported utilization or can report reduced utilization while boosting higher than it needs to if the hysteresis values (in either magnitude or duration) for a given performance cap haven't kicked in to reduce clock speed.

What does GPU-Z, HWiNFO, or MSI AB say about the "perfcap" or "Performance Limit" reason for the GPU?

Ideally, it's 'none' ('0') and the GPU is boosting as far as it's allowed to, because it's actually fully loaded and not limited by temperature, power, or voltage caps. In practice, this is rarely seen without a custom OC on desktop parts with ample cooling and generous power budgets. You will probably never see 'none' on a modern laptop, no matter what you do to it, but as long as it doesn't say 'utilization' then the GPU utilization figure should be more or less accurate for the clocks given.
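
If you'd rather poll it from the command line than watch a monitoring overlay, nvidia-smi exposes the same throttle reasons; a minimal sketch (the query field names are standard nvidia-smi ones):
[CODE]
# One-shot query of the active throttle ("perfcap") reasons, GPU utilization,
# and current graphics clock. The throttle reasons come back as a bitmask.
import subprocess

fields = "clocks_throttle_reasons.active,utilization.gpu,clocks.gr"
out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # e.g. "0x0000000000000004, 99 %, 1800 MHz"
[/CODE]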

In the situation in your prior post, I suspect you were more, or closer to being, CPU limited than you realized. Allowing the GPU to boost higher at the cost of CPU clock hurt performance because you reduced CPU performance by so much you became predominantly CPU limited in the process. GPU load could still have been sufficient or intermittent enough to keep the GPU clock boosted even without the game being able to consistently leverage it.

You should be able to undervolt your laptop's 1660 Ti the same way you'd do so for a desktop card, tightening up its boost behavior while keeping the same or higher average clocks, allowing the CPU to stay near peak boost without limiting GPU clocks as much.
 
Again, given that most other games don't need to get this granular about CPU and GPU usage when determining performance expectations, this is clearly still an optimization issue, so again I fail to see what upping polling rates and identifying bottlenecks is realistically going to do to help anything.

Not to mention that a CPU core doesn't even need to be maxed out to experience CPU-related performance drops; if the code is incredibly inefficient, it will still reduce performance because cycles are essentially being wasted performing calculations it doesn't need to perform, or because certain things take far longer than they should to process. Some of the worst-performing early access titles exhibit this behaviour, such as the much-maligned Yandere Sim; it is so extremely bogged down by inefficient code that framerates slow to an absolute crawl despite the fact that no part of the CPU is anywhere close to being maxed out. It is slowed down purely by inefficient game code.

That's basically what's happening to EDO, and no amount of granularity of polling rates is going to identify the issue for end users. Only FDev can realistically identify it.
 
I fail to see what upping polling rates and identifying bottlenecks is realistically going to do to help anything.

It's the first step in doing what's practical to alleviate said bottlenecks and best optimize the performance of the game we have. It's also going to prevent time and resources being spent on solutions that won't address the problems being observed.

For example, if I hadn't looked into things further when I saw results that were, at first glance, a contraindication to a CPU bottleneck (namely, no logical core over 75-80% load according to reporting software at default settings), I never would have swapped to the most optimal CPU I had for this game and would have left a fair bit of potential performance on the table. I was, in fact, significantly CPU limited in quite a few areas, and moving the CPU with the best per-core performance to the system I normally ran EDO on was a big help.
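
The other half of that is actually measuring the result of each change. A sketch of the sort of thing I do with a frame-time log (the file name and the MsBetweenPresents column are assumptions based on PresentMon's CSV output; adjust for whatever capture tool you use):
[CODE]
# Compare average fps and 1% lows from a PresentMon-style frame-time log,
# captured once per configuration in the same scene.
import csv

with open("frametimes.csv", newline="") as f:
    frame_ms = sorted(float(row["MsBetweenPresents"]) for row in csv.DictReader(f))

avg_fps = 1000.0 * len(frame_ms) / sum(frame_ms)
low_1pct = 1000.0 / frame_ms[int(len(frame_ms) * 0.99)]  # 99th percentile frame time
print(f"avg: {avg_fps:.1f} fps, 1% low: {low_1pct:.1f} fps")
[/CODE]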

Only FDev can realistically identify it.

I can't rewrite the game, but I absolutely can identify where it's getting hung up and what changes can be made to improve performance. I have done so on my systems and the game runs appreciably better than it would if I simply dismissed everything as being out of my hands.

If you don't think the improvements you can eke out are worthwhile, that would be one thing, but you seem to be operating under some bizarre and fallacious assumption that an "optimization issue" on the part of FDev's code somehow implies that nothing else matters, or that there is nothing that can be done to get this poorly optimized game running better, which is utter nonsense.

There is a vast gulf between not being able to do what FDev will have to do and not being able to do anything.
 
Odyssey is GPU limited due to the way the draw calls are made. There's no efficiency logic in it; it tries to draw several elements that are invisible in the final frame.
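
Conceptually, the kind of efficiency logic being described would look something like the sketch below. This is purely illustrative Python; none of these names come from Frontier's engine.
[CODE]
# Illustrative only: a submission loop that skips objects which cannot appear
# in the final frame, versus blindly issuing a draw call for everything.
def submit_draw_calls(objects, camera, gpu):
    for obj in objects:
        if not camera.frustum_contains(obj.bounds):  # outside the view volume
            continue
        if camera.is_occluded(obj.bounds):           # hidden behind other geometry
            continue
        gpu.draw(obj)  # only now spend a draw call on it
[/CODE]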
 
Odyssey is GPU limited due to the way the draw calls are made. There's no efficiency logic in it; it tries to draw several elements that are invisible in the final frame.
There is always a well-known solution to every human problem—neat, plausible, and wrong. - H. L. Mencken
 