Any solution to increase FPSs?

I would like to know more about this because I suspect a CPU bottleneck in my case, but I am not an expert. I have an old i7 4820K (from 2013) + 24GB DDR3 RAM + RTX 2070 Super. So far I have had zero problems running any game at 4K 60 FPS, including Battlefield V or the latest Call of Duty. I played Horizons smoothly and without any performance issues, but with Odyssey I am experiencing the same awful performance as many others here. I can't even think of getting close to a planetary base.

No matter the settings, I don't get even a stable 30 FPS on foot, whether I play full low or full ultra. According to MSI Afterburner, GPU load is around 50% and CPU load is about the same, so perhaps no bottleneck? Do you suspect that I might have a CPU issue? It's disheartening because the expansion looks promising, but right now it just feels like wasted money.

You may very well be CPU limited with a 3.7-3.9GHz Ivy Bridge-E part, especially if any given logical core is near full reported utilization. However, this should not be assumed despite the low GPU utilization.

The most straightforward way to see if there is actually a CPU bottleneck is to manipulate the CPU speed and see how much performance changes. This should be easy to do with an unlocked 4820K. Just go into your motherboard's BIOS setup, reduce the CPU multiplier by five (knock it down to 32x for 3.2GHz or so), then run the game again in a similar scenario and see what the frame rate is. If it's a proportional reduction, you were predominantly CPU limited. If not, you're probably GPU limited, even though the GPU isn't fully utilized...it's stalling on something, and more CPU performance probably won't help.
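To make that comparison concrete, here's a rough sketch (Python, with purely hypothetical clocks and frame rates, not measurements) of how you could read the before/after numbers:

```python
# Rough sketch with hypothetical numbers: if the FPS drop is close to
# proportional to the clock drop, the CPU is the dominant limiter.

stock_clock_ghz = 3.7      # e.g. 4820K at its stock multiplier (37x)
reduced_clock_ghz = 3.2    # e.g. multiplier knocked down to 32x
fps_at_stock = 28.0        # measured in the same scene (hypothetical)
fps_at_reduced = 25.0      # measured again after the clock cut (hypothetical)

clock_ratio = reduced_clock_ghz / stock_clock_ghz   # ~0.86
fps_ratio = fps_at_reduced / fps_at_stock           # ~0.89

# Very rough: near 1.0 means fully CPU limited, near 0 means not CPU limited.
cpu_bound_fraction = (1 - fps_ratio) / (1 - clock_ratio)

print(f"Clock ratio {clock_ratio:.2f}, FPS ratio {fps_ratio:.2f}, "
      f"estimated CPU-bound fraction ~{cpu_bound_fraction:.0%}")
```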

A very repeatable test (the suit tutorial) and monitoring software capable of logging frametimes (MSI Afterburner or NVIDIA FrameView, for example) will help a lot, though if the difference is large enough (unlikely) you can just eyeball the in-game frame rate gauge.
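If you do log frametimes, a quick summary script makes the comparison less subjective. This is a minimal sketch assuming a FrameView/PresentMon-style CSV with an "MsBetweenPresents" column; the column and file names are assumptions, so adjust them to whatever your tool actually exports:

```python
# Minimal frametime-log summary: average FPS and "1% low" FPS.
import csv
import statistics

def summarize(path, column="MsBetweenPresents"):
    with open(path, newline="") as f:
        frametimes = sorted(float(row[column]) for row in csv.DictReader(f))
    avg_fps = 1000.0 / statistics.mean(frametimes)
    # "1% low" = FPS corresponding to the 99th-percentile (slowest) frametime
    p99 = frametimes[min(len(frametimes) - 1, int(len(frametimes) * 0.99))]
    return avg_fps, 1000.0 / p99

avg, low = summarize("frameview_log.csv")  # hypothetical file name
print(f"Average: {avg:.1f} FPS, 1% low: {low:.1f} FPS")
```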

Also, make sure you've done the basic performance diagnostic stuff...a complete reset of the in-game graphics settings and any driver profiles for the game, a cleared shader cache, etc., and a reboot of the system, just to rule out potential configuration issues that could skew results.
 
What frame rate are you targeting?
Honestly, I'd be happy with a reasonably steady 60 FPS in station lobbies. On planetary surfaces I can get an average of 70 FPS, but it swings wildly, and even when it doesn't, the lag is terrible at times.

Horizons is no real issue to run.
 

I find it very unlikely that the 6700K is a major issue at these frame rates. The game isn't well threaded enough to really need more than 4 cores/8 threads, and the lightly threaded performance of the 6700K is still very competitive (6th through 10th gen Intel Core processors are all Skylake or Skylake-derived, and they only gained about 20% in clock speed over that entire run). I'm not CPU limited at 60 FPS with a 3900X driving an overclocked RX 6800 XT, and a 6700K is only about 10% slower in lightly threaded tasks.

That said, if you do notice a significant performance increase from your new board and CPU, I'd be interested in hearing about it.
 
I'll let you know, but Battlefield V is running rough as well. I've always run the low-latency setting for better performance, but I used to be able to run a mix of mostly medium settings with some high, and now that seems a little out of reach. I don't know if a game update along the way or a Windows update made that untenable, but the reality is the game lags now if I take it off of low latency.

Battlefield 1 runs on Ultra as before, but last night when I opened up EVGA Precision to monitor the GPU, along with Windows performance monitor, the game got really laggy. As soon as I closed the monitoring software, the game played as expected.

Again, I'm not expecting huge performance gains, but I expect the games to run smoother and the GPU to run closer to 100% utilization. The most I can get out of the processor now is the mid-80s, no matter what I do.

I had a similar experience when Battlefield 1 came out. I had an i5 3570K; the GTX 970 was running at around 45% and the CPU was locked solid at 100%. I overclocked it enough that the GPU would run close to 100% and the CPU was in the low 90s, but it was evident that I needed something more. The i7 6700K was a solid upgrade.
 

Hard to compare EDO in its current state to any title I can think of. For most games, low GPU utilization is a pretty good indicator of a CPU bottleneck, at least if one isn't VRAM constrained. However, that doesn't appear to be the case with Odyssey. Its bottlenecks seem to be due to issues on the software side of things causing extremely lopsided utilization and stalls on the GPU side (on systems with half-way decent CPUs, anyway).

This is from the initial release, but I still see patterns like this in Hotfix 3, where there are plenty of free logical (and physical) cores, and no core is over 50-60% utilized, but the GPU utilization still falls significantly:
Source: https://www.youtube.com/watch?v=TI-QmVKeGiE


I have to play it at 4K ultra for the game to register as consistently using most GPU cycles. To me this indicates that something (that should likely be culled or not drawn at all) which is not significantly affected by resolution (so not raw fill rate, and probably not most shader work, but possibly geometry or depth buffer updates) is causing the GPU to wait when fill rate demands fall below a certain level. It's nothing on the CPU side of things, else CPU utilization would spike as the frame rate increased. It's not a system memory performance issue, as the reported memory bandwidth utilization is very low and the memory subsystem of this setup is relatively fast (1867MHz FCLK 1:1 with DDR4-3733). I could likely put the fastest Zen 3 or Rocket Lake part money could buy in here, hook it up to a dual-stage cascade, run it at -80C and throw a ton of voltage at it to hold 5.5GHz+, and I still probably wouldn't gain a single frame, even in those situations where GPU utilization is reported as less than 50%.
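As a toy illustration of that argument (all numbers invented, not measurements from my system): if a large, resolution-independent chunk of GPU time dominates each frame, dropping the resolution barely helps, which is consistent with only 4K ultra keeping the GPU consistently busy:

```python
# Toy model: frame time = resolution-independent cost + fill/shading cost
# that scales with pixel count. All values are hypothetical.

fixed_ms = 14.0        # assumed resolution-independent cost (or stall) per frame
fill_ms_at_4k = 8.0    # assumed fill-rate/shading cost at 4K

for name, pixel_fraction in [("4K", 1.00), ("1440p", 0.44), ("1080p", 0.25)]:
    frame_ms = fixed_ms + fill_ms_at_4k * pixel_fraction
    print(f"{name}: {1000.0 / frame_ms:.0f} FPS")

# Prints roughly 45, 57, and 62 FPS -- nowhere near the 4x scaling the
# pixel-count reduction would suggest if fill rate were the limiter.
```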
 
My main reason for doing this is not Odyssey. I'm more looking at other titles, both existing and down the road. When I upgrade my GPU I don't want anything to hold me back. I want to run high to ultra settings at 144Hz on my 1440p widescreen, and my current CPU will be an issue; an i9-10900 won't.
 
Hotfix 3 delivered some improvements (fixed the low-poly models and crippled textures), but FPS dropped even more (even in space, where I did not have a problem before).
Absolutely identical scene and settings:
In Horizons - a solid, locked 60 FPS.
In Odyssey - 28 FPS, so no VR for me :( and that's in space; in stations, in the same places, I get around 18-20 FPS.
I am not sure if this is going in the right direction or not.
[Attached screenshots: Horizons at 60 FPS, Odyssey at 28 FPS]
 
I think you just have too many logical cores :) There aren't many games that will balance load across a 24-logical-core CPU. I see balanced usage across the 12 logical cores of my Ryzen.

Low FPS with neither the CPU nor the GPU maxed is rare. Most of the time when I've encountered it, it was:
  • Fill rate limited due to particle effects (mainly smoke). I'm not sure you can be fill rate limited today with ordinary objects (unless drawing thousands).
  • Draw-call limited when trying to draw too many objects (either not culled or not batched). Draw calls are draw instructions from the CPU to the GPU (draw these triangles, select this texture, etc.).
So far I have measured some heavy draw-call numbers in some rare situations, which would lower my FPS with neither the CPU nor the GPU maxed (rough numbers sketched below).
But most of the time, my GTX 1080 is maxed out when my FPS drops under 60.
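As a back-of-the-envelope sketch of the draw-call case (the per-call CPU cost and the call count are assumptions for illustration only; real DX11 costs vary a lot with driver and state changes):

```python
# Rough arithmetic: CPU-side submission cost alone can cap FPS while
# overall CPU and GPU utilization both look "low".

us_per_draw_call = 4         # assumed CPU cost per draw call (microseconds)
draw_calls_per_frame = 6000  # hypothetical heavy scene

submit_ms = draw_calls_per_frame * us_per_draw_call / 1000.0
fps_cap = 1000.0 / submit_ms

print(f"Submission alone: {submit_ms:.0f} ms/frame -> capped at ~{fps_cap:.0f} FPS,")
print("mostly burning one render thread while the other cores sit idle")
```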
 

Yeah, I gathered that the upgrade isn't mainly for Odyssey. I'm just saying that you shouldn't expect much of an uplift in EDO, not until Frontier sorts this out.


I don't think there is a right direction until memory utilization is addressed directly. It will always be a trade-off between performance and quality until the underlying issues are dealt with. This is a band-aid to make screenshots look better.


There aren't enough threads to go around; the game defaults to six worker threads, plus apparently two render threads, then the video driver loads another two threads, and XAudio has some load to it as well.

That said, it's highly unlikely to be a CPU limitation when no single thread ever cracks 80% and even the two most heavily loaded render threads spend most of their time around 60%.
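As a rough illustration of why the aggregate number looks low on a 24-thread CPU (thread counts from the breakdown above; the per-thread loads are invented for the example):

```python
# Why overall CPU% stays low even if one thread is the limiter: a handful
# of busy threads averaged over 24 logical cores. Loads are hypothetical.

logical_cores = 24
busy_thread_loads = (
    [0.40] * 6 +   # worker threads
    [0.60] * 2 +   # render threads
    [0.30] * 2 +   # video driver threads
    [0.15]         # audio
)
aggregate = sum(busy_thread_loads) / logical_cores
print(f"Aggregate CPU utilization: ~{aggregate:.0%}")  # roughly 18%
```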


From other analyses, I'm sure the game is making many more draw calls than necessary, both because of absent culling and because of other, more general inefficiencies. Still, if a DX11 game is draw-call limited, shouldn't I expect to see a render thread, or a subthread handling command lists, maxing out a logical core?
 
I think we should see one or two cores with high CPU usage. That was the case with older DX11 versions, used in games where I saw overkill draw-call numbers.
But I'm just a software dev, not a DirectX expert :) so I don't know what we should see today.

Here's a nice image from the Intel dev site:

[Image: figure from the Intel article on DirectX 11 multithreaded rendering]

And the article from 2018, if someone is interested:
Performance, Methods, and Practices of DirectX* 11 Multithreaded Rendering
 
So I got the new motherboard and CPU installed last night, set the RAM to its XMP profile, and updated the drivers... I will do a clean install of the OS this weekend.

I went into Battlefield 1 and, without changing any settings, got an increase of 35 FPS. I was very impressed, not only with the increased FPS but also with the smoothness and responsiveness of the system. I know it sounds odd, but it looked visually better as well, clearer overall. I couldn't get over how crisp and pleasing the whole game rendered after the upgrade. On top of that, the CPU just loafs along at 30-40% usage and my GPU is slammed at 100%. I was able to run ultra at 100 FPS, where before I could barely do that with low settings.

Odyssey was another story. It was better inside the stations, often peaking at 100 FPS with a low of 75, where before it was 80 max and 60 at the low end. Once on foot and entering the social area, it was unchanged. The good news is Hotfix 3 got rid of the lag, so it was playable. CPU usage dropped to less than 30% and GPU usage pegged at 100% in the stations.

This proves I was CPU bound to some extent, though not as badly as when Battlefield 1 came out and I was running the i5 3570K. If I had only purchased a newer graphics card instead, it would have made minor improvements at best. While I'm disappointed that Odyssey didn't improve more, I'm pleased with the overall gains in Battlefield 1; the increased GPU usage and really low CPU usage mean that when I do upgrade the GPU, I will get a really nice increase in performance.
 

The Frostbite engine has always been very well threaded and responds well to faster processors.

Your experience with EDO is not far off mine; the only places I could prompt a CPU limitation were where I was already getting a pretty good frame rate...the slowest areas (surface settlements and the concourse) did not respond much to changes in CPU performance. The main difference is that my reported GPU utilization often wasn't near 100% unless I jacked up my render resolution.
 