Low GPU load in Odyssey in complicated scenes

[...]

That's just Odyssey, and the fact that some intelligent people decided to stick with DX11 instead of switching to Vulkan.
Well, it's not that easy. As far as I know (which is very little), even if they moved to DX12 they'd have to re-engineer a lot of the rendering to actually take advantage of it, let alone if they switched to a different API...
Considering FDev's track record on delivery, that'd probably take a decade.
 
Yeah I agree on that. Wherever the issue is, I don't think it has anything to do with the algorithmic planet generation, as framerates are usually plenty stable when in the middle of nowhere on a planet. I can get 75fps even at native resolution there.

It's settlements that are the issue, and I can't quite figure out what exactly the CPU is doing there, considering there's not really all THAT much going on besides some basic AI pathing (assuming the settlement isn't on high alert), and you aren't covering enough ground for terrain generation to be taking up that many cycles. It's just... mystifying.

I think it's the efficiency of the rendering pipeline and of loading resources on demand; that code is either architected poorly or just poorly optimized.

I notice distinct stutters as I get close to a settlement, and when I move around and look in different directions. It's trying to load the assets and render everything on demand, but there's no easy, logical way to subdivide the area to limit how much has to be rendered in a single frame. Most of the settlement buildings have windows, and it's difficult to know whether you can see inside them, even the ones on the far side of a building, and I think you can get into trouble if you start trying to chop things up like that.

Some games like Doom 3 (I use this because that's the last time I did any level design in a game) had something called visportals, which you used to segment a map. As long as you were on one side of, say, two portals (think of windows lined up so that from one perspective some geometry prevents you from seeing through both at the same time), nothing on the far side of the portals would be loaded. You could use a hallway, for example, to isolate areas by putting a visportal in each leg of a 90-degree bend: until you entered portal 1, nothing past portal 2 would load. When you were in front of portal 1 it loaded everything between portals 1 and 2, and when you entered portal 1 it loaded the stuff past portal 2, even though you couldn't see it yet (i.e. you hadn't rounded the corner).
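Roughly the idea behind visportals, as a toy sketch (not id's actual code; the rooms and the "open" flag are invented just to illustrate reachability through portals):

```cpp
#include <cstdio>
#include <vector>

// Toy portal-culling sketch: rooms are nodes, portals are edges.
// A room is loaded/drawn only if it is reachable from the player's room
// through portals that are currently "open" (i.e. potentially visible).
struct Portal { int toRoom; bool open; };

struct Room {
    const char* name;
    std::vector<Portal> portals;
};

static void markVisible(const std::vector<Room>& rooms, int start,
                        std::vector<bool>& visible) {
    visible[start] = true;
    for (const Portal& p : rooms[start].portals)
        if (p.open && !visible[p.toRoom])
            markVisible(rooms, p.toRoom, visible);
}

int main() {
    // Hallway with a 90-degree bend: room A -> bend -> room B.
    std::vector<Room> rooms = {
        {"room A", {{1, true}}},              // portal 1 into the bend is in view
        {"bend",   {{0, true}, {2, false}}},  // portal 2 not entered yet
        {"room B", {{1, false}}},
    };
    std::vector<bool> visible(rooms.size(), false);
    markVisible(rooms, 0, visible);  // player standing in room A
    for (size_t i = 0; i < rooms.size(); ++i)
        std::printf("%s: %s\n", rooms[i].name, visible[i] ? "load/draw" : "skip");
}
```

Run from room A, the bend gets loaded because its portal is in view, but room B is skipped until you actually reach portal 2.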

Windows break this kind of portal trick, so you either have to load basically the whole settlement at once or, like they seem to be doing, take an approach where nothing inside a building gets loaded and rendered until you look in its direction or get close enough. If you have enough VRAM you can probably keep the whole settlement loaded, along with all the textures, and if your card is fast enough you can render everything at a high frame rate. Once I've visited most or all of a settlement, I stop getting framerate drops.

I think they are "lazy loading" resources and geometry such that when you turn in a direction that you've not looked before at a closeish range (LOD having an impact here, if you are far enough away from a window I think they can safely say you can't see inside) they are having to do a bunch of stuff at once, stalling the rendering pipeline until the geometry etc is loaded into VRAM and can then be rendered, even for stuff you might not actually be able to see with the naked eye. The rendering pipeline is blocking and real time though, so each frame has to go through a bunch of different passes and sent to the monitor in a single-threaded way (more or less), and it doesn't take much to stall it unless you are way overpowered or have much lower settings.

The fixes will probably involve a) more intelligent asset loading, where they better predict what you'll need and eager-load, say, the whole settlement's geometry and textures onto hardware that can handle it; b) better multithreading, where resources are loaded in the background before you need them, without stalling the pipeline; and c) more tweaks to how settlements are divided up, so they can better tell when resources need to be loaded (i.e. for lower-end cards that definitely can't handle the whole thing).
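Fix (b) is essentially: kick the load off on another thread and keep drawing a placeholder until it's ready. A minimal sketch with std::async, assuming nothing about how Cobra actually streams assets:

```cpp
#include <chrono>
#include <cstdio>
#include <future>
#include <string>
#include <thread>

// Sketch of background streaming: the render loop never waits on the load;
// it draws a low-detail placeholder until the full asset arrives.
static std::string loadBuildingInterior() {
    std::this_thread::sleep_for(std::chrono::milliseconds(50)); // fake disk read
    return "full interior mesh";
}

int main() {
    auto pending = std::async(std::launch::async, loadBuildingInterior);
    std::string current = "placeholder LOD";

    for (int frame = 0; frame < 8; ++frame) {
        // Poll without blocking; swap in the real asset once it's ready.
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
            current = pending.get();
        std::printf("frame %d: drawing %s\n", frame, current.c_str());
        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // simulate a 60 fps frame
    }
}
```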

I think c) is the hardest because the settlements are already generated and potentially randomized. With scoped weapons it's going to be tough to tell which windows the player can see and whether to render what's inside this building or that one. Hopefully they can get most of the settlement into memory and then use LOD better to keep the rendering speed up, so you keep a good frame rate even if you lose some detail in the distance (fog/dust can help with that by hiding the fact that distant stuff isn't being rendered as cleanly).
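The "too far away to see inside" shortcut can be as simple as a distance gate like the one below (made-up threshold; the zoom factor is just to illustrate why scopes complicate it):

```cpp
#include <cstdio>

// Toy rule: only load/render a building's interior when the camera is close
// enough to plausibly see through its windows. A zoomed scope effectively
// shrinks the distance, which is why scopes break the simple version.
static bool shouldLoadInterior(float distanceMetres, float scopeZoom) {
    const float interiorCutoff = 150.0f;  // made-up threshold
    return (distanceMetres / scopeZoom) < interiorCutoff;
}

int main() {
    std::printf("300 m, no scope : %d\n", shouldLoadInterior(300.0f, 1.0f)); // 0 -> skip
    std::printf("300 m, 4x scope : %d\n", shouldLoadInterior(300.0f, 4.0f)); // 1 -> load
}
```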

The settlements themselves are not bigger or more complex than in most FPS games, but the lack of hallways, and especially all the windows, definitely makes this more of a challenge, so I can sympathize with the developers: it's a harder problem than it might seem. Unfortunately, I think they coded it in such a way that top-end hardware doesn't get utilized much better than lower-end hardware, so they ended up creating a not-great experience for everyone across the board.
 
OK, this is happening to me too. Has there been a fix for this in the latest update? I've been to the Issue Tracker and it says 'Status: expired'. I have the following setup:

Microsoft Windows 10 Professional (x64) Build 19042.1237 (20H2)
ASUS TUF GAMING Z590-PLUS WIFI
Intel Core i9-11900K Rocket Lake-S
GIGABYTE AORUS RTX 3080 Ti MASTER (GV-N308TAORUS M-12GD)
PCIe v4.0 x16 (16.0 GT/s) @ x16 (16.0 GT/s)
32GB (4 x 8GB) DDR4-3597 / PC4-28700 DDR4 SDRAM UDIMM
Samsung SSD 980 Pro 1TB (Primary Drive)
Samsung SSD 860 EVO 2TB

My GPU sits at 20% and my CPU at 25%. Temps are circa 70°C. FPS is in single digits as I wander around a base looking for bad guys.

If it's not been solved, what's the best way to deal with this? Apologies, but I do not normally deal with forums and therefore don't know the necessary etiquette so many people seem to take for granted.

Thanks in advance
 
Have you tried recently? I have a 2080ti and since update 7 (I can't guarantee it was update 7 but it seemed to be) I started hitting 100% GPU on planets. I was already in the black when U7 dropped so I can't speak to settlements and stations, but I don't remember ever getting close to 100% GPU anywhere (highest spike was usually 80%, average more like 70% or under).
 
Good day sir!
On an RX 570 4GB + i5-3470 + 16GB of 1666 MHz RAM, I am playing at 1440p with AMD FSR in Ultra Quality mode.
I set a fixed framerate of 30 FPS and a 29 Hz refresh rate for less stuttering, even when the game can do more than 30 FPS.
My settings are CUSTOM, but overall I would say ULTRA everything, all sliders to maximum except a few options.
BLUR: DISABLED, DIRECTIONAL SHADOWS: HIGH, SPOT SHADOWS: LOW, AMBIENT OCCLUSION: OFF, DEPTH OF FIELD: OFF.
Those options aren't necessary for good looks and, in my opinion, aren't necessary at all.
I set ANTI-ALIASING to MSAA x2 because distant planet surfaces look a lot better with it than with any other anti-aliasing method.

There are times when FPS drops below 30, and this changes in the same area after a game reload.
Optimisation is poor in this game.
 
There are definitely optimization issues. Settings don't seem to change much, and it doesn't scale well across different hardware; it seems to maybe run better on higher-end hardware, but that doesn't help people on low- or mid-level hardware. Planets are performing pretty well for me right now on pretty decent hardware (i7-8700K + 2080 Ti), but it's at 100% GPU the entire time I'm close to the surface (if I go up 1 km or so, it drops to 60% GPU as the rocks/life/geology are no longer visible).
 
Vulkan and DX11/12 cannot be intermixed, as they are competing APIs. Rewriting Elite to use Vulkan instead of DX would not make sense.
Where did you get that from? How do you explain Crysis Remastered using DX11 in conjunction with Vulkan for RT effects?
 
I would just like an answer from the developers. Why does Horizons, in the same situations and the same areas, in stations or in space, run about twice as well?
Just why? The same station interior, from outside or inside, looks the same in both, but one runs at 4K ultra at 30 FPS and the other barely manages 30 FPS at 1080p medium!
 
Sometimes I get 60+ fps, sometimes I get 30 fps, and sometimes I have to reboot my machine after a hard crash, all in the same sorts of situations. So I think there's something else going on beyond polygons, materials and lighting. It feels like a loading-things-in-and-out thing. But I think if there were some sort of strange bug they would have found it by now, and they're still talking about optimisations rather than "we found this huge bug".
 
Where did you get that from? How do you explain Crysis Remastered using DX11 in conjunction with Vulkan for RT effects?
Crysis is not using Vulkan, at least not natively. Like I said, switching from DX11 to OpenGL/Vulkan would not make sense.
What has been used, however, is a bridge between DX11 and Vulkan...

Source
With hardware-accelerated ray tracing coming to Crysis Remastered through the additions of Vulkan ray tracing extensions to DirectX 11, we are somewhat baffled by Crysis Remastered's use of the legacy API.
 
Crysis is not using Vulkan, at least not natively. Like I said, switching from DX11 to OpenGL/Vulkan would not make sense.
What has been used, however, is a bridge between DX11 and Vulkan...

Source
How does this make sense with what you said before? "Vulkan and DX11/12 cannot be intermixed, as they are competing APIs."
 
Glad I found this thread. I feel like this issue was better for me before, but with this newest update I jumped into the first CZ I could find, since that's usually the worst-performing place. In space my GPU utilization hovers around 60-70%, but I'm rock steady at my 119 fps target, so the utilization seems to be working fine there.

But once the CZ starts I'm hovering around 40 fps with at most 66% GPU utilization, and it bounces as low as 40% sometimes. Once the CZ ends, though, like immediately after the last person on the opposing team dies, I get a jump up to 55 fps average and the utilization goes up to 70%, though it still drops as low as 50%.

I'm playing at 4K with everything maxed, on a 3080 Ti with the newest drivers. Also, CPU utilization never goes above 30% on the ground or in space, so I don't think it's that.

I took a bunch of screenshots, but apparently the Nvidia performance overlay doesn't show up in screenshots, so that's lame.

Edit: I dunno if detailed specs are important, but on the off chance they are:

Windows 10
Ryzen 5 3600
Nvidia 3080ti ftw3
32gb Ram
Game installed on a NVME WDSN750
 
...

But once the CZ starts I'm hovering around 40 fps with at most 66% GPU utilization, and it bounces as low as 40% sometimes. Once the CZ ends, though, like immediately after the last person on the opposing team dies, I get a jump up to 55 fps average and the utilization goes up to 70%, though it still drops as low as 50%.

I'm playing at 4K with everything maxed, on a 3080 Ti with the newest drivers. Also, CPU utilization never goes above 30% on the ground or in space, so I don't think it's that.

...

Windows 10
Ryzen 5 3600
Nvidia 3080ti ftw3
32gb Ram
Game installed on a NVME WDSN750
Allow me to provide you with a (big maybe here) possible explanation...

1. The Cobra engine does not use more than 8 threads at once, like many other engines, so a monster CPU would not scale, but CPU clock might help to some extent (see the next point);

2. NPC/AI movement is/was coded using AVX instructions, which are quite heavy on the CPU and which, incidentally, Ryzen and console CPUs do not support. On those, they have to be converted into a different set of instructions, so a bigger performance impact is observed (see the sketch at the end of this post). Also a potential reason for the console version's delay;

3. Frontier is still working on recoding the AVX part of the NPC/AI walkabouts, and they even removed the "navmesh" from the game a few patches back... this navmesh removal also hints at AVX, since "particle movement prediction" is one of the areas where Advanced Vector Extensions (AVX) are widely employed;

Of course there's still a LOT of optimization to be done on Odyssey, and I'm still confident your current PC will be more than capable of running it with all the bells and whistles. But until then, even a top-notch system will perform below expectations...
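For anyone wondering what "movement coded using AVX" would even mean, it's roughly this: updating many NPC positions eight floats at a time instead of one by one. This is purely an illustrative sketch with a scalar fallback for CPUs without AVX (whether Cobra actually does anything like it is the claim above, not something I can verify):

```cpp
#include <cstdio>
#include <vector>
#ifdef __AVX__
#include <immintrin.h>
#endif

// Illustrative only: advance many NPC x-positions by velocity * dt.
// With AVX this processes 8 floats per instruction; without it, one at a time.
static void integrate(std::vector<float>& pos, const std::vector<float>& vel, float dt) {
    size_t i = 0;
#ifdef __AVX__
    __m256 vdt = _mm256_set1_ps(dt);
    for (; i + 8 <= pos.size(); i += 8) {
        __m256 p = _mm256_loadu_ps(&pos[i]);
        __m256 v = _mm256_loadu_ps(&vel[i]);
        _mm256_storeu_ps(&pos[i], _mm256_add_ps(p, _mm256_mul_ps(v, vdt)));
    }
#endif
    for (; i < pos.size(); ++i)  // scalar tail (or the full fallback without AVX)
        pos[i] += vel[i] * dt;
}

int main() {
    std::vector<float> pos(16, 0.0f), vel(16, 2.0f);
    integrate(pos, vel, 0.5f);
    std::printf("first NPC x after one tick: %.1f\n", pos[0]); // 1.0
}
```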
 
Glad I found this thread. I feel like this issue was better for me before, but with this newest update I jumped into the first CZ I could find, since that's usually the worst-performing place. In space my GPU utilization hovers around 60-70%, but I'm rock steady at my 119 fps target, so the utilization seems to be working fine there.

But once the CZ starts I'm hovering around 40 fps with at most 66% GPU utilization, and it bounces as low as 40% sometimes. Once the CZ ends, though, like immediately after the last person on the opposing team dies, I get a jump up to 55 fps average and the utilization goes up to 70%, though it still drops as low as 50%.

My guess is it's the AI during combat, and it's probably using a single CPU thread (or maybe two). Therefore, no matter how good your video card is or how many cores your CPU has, the GPU is going to be sitting around waiting for that work to complete before the next frame can be displayed.

(Even outside of combat, all the AI has to be calculated.)
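Back-of-the-envelope arithmetic for why a single CPU thread shows up as low GPU utilization (numbers entirely made up):

```cpp
#include <cstdio>

int main() {
    // Made-up timings: single-threaded AI takes 20 ms, the GPU only needs 8 ms.
    // Because the GPU can't start the next frame until the CPU hands it work,
    // the frame is ~28 ms (~35 fps) and the GPU sits idle most of the time.
    const double aiMs = 20.0, gpuMs = 8.0;
    const double frameMs = aiMs + gpuMs;
    std::printf("frame: %.0f ms  ->  %.0f fps, GPU busy only %.0f%% of the frame\n",
                frameMs, 1000.0 / frameMs, 100.0 * gpuMs / frameMs);
}
```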
 
Allow me to provide you with a (big maybe here) possible explanation...

1. The Cobra engine does not use more than 8 threads at once, like many other engines, so a monster CPU would not scale, but CPU clock might help to some extent (see the next point);

2. NPC/AI movement is/was coded using AVX instructions, which are quite heavy on the CPU and which, incidentally, Ryzen and console CPUs do not support. On those, they have to be converted into a different set of instructions, so a bigger performance impact is observed. Also a potential reason for the console version's delay;

3. Frontier is still working on recoding the AVX part of the NPC/AI walkabouts, and they even removed the "navmesh" from the game a few patches back... this navmesh removal also hints at AVX, since "particle movement prediction" is one of the areas where Advanced Vector Extensions (AVX) are widely employed;

Of course there's still a LOT of optimization to be done on Odyssey, and I'm still confident your current PC will be more than capable of running it with all the bells and whistles. But until then, even a top-notch system will perform below expectations...


I mean, there's definitely something to this as far as overall FPS goes, because like I said, as soon as the fighting is officially over and all the NPCs go back to their presumably less performance-hogging AI patrolling routine, there's an instantaneous boost in fps and GPU utilization. But it's still nowhere near where it should be, or how this card performs in, say, Horizons 5 or any other optimized game.

But I guess what I'm wondering is why a CPU hang-up or anything of the sort would affect GPU utilization to such a degree. I mean, not that it matters; in space everything seems to be working fine, but once near a CZ the GPU utilization behaves poorly. Also, womp womp on Ryzen CPUs being worse, but meh, too much upgrading to switch to an Intel board when I already have a nice X570 one.

And you're right, I too am confident they'll get the optimization in order; I mean, Horizons still has one of the best performance-to-visuals ratios of any game in years. But yeah, I just wanted to find this thread because there definitely seems to have been a degradation with this newest update. I still enjoy the CZs; it's just, you know, I got this silly new GPU less than a month ago, and it's a bit of a bummer that I can't get 60 fps in CZs when, GPU-utilization-wise, I should easily be able to hit that if I'm hovering around 40 fps at 50-60% utilization.
 
That's just Odyssey, and the fact that some intelligent people decided to stick with DX11 instead of switching to Vulkan.
Unfortunately, libraries are not magic software that makes a game run well or not. It's more subtle than that. A game that doesn't perform well with DX11 will not perform well with Vulkan either.

Vulkan and DX11/12 cannot be intermixed, as they are competing APIs. Rewriting Elite to use Vulkan instead of DX would not make sense.
An engine can support as many libraries as you want. In the past, games usually supported four (DirectX, OpenGL, Glide and an internal software renderer), and even today there are engines that support DX11, DX12 and Vulkan (like the Serious Engine).
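The usual way engines pull this off is an abstraction layer: the game talks to one renderer interface, and each API gets its own backend picked at startup. A made-up sketch of the pattern (not Cobra's or the Serious Engine's actual code):

```cpp
#include <cstdio>
#include <memory>
#include <string>

// Sketch of a multi-backend renderer: game code only sees the interface,
// each graphics API gets its own implementation behind it.
struct IRenderer {
    virtual ~IRenderer() = default;
    virtual void drawFrame() = 0;
};

struct D3D11Renderer : IRenderer {
    void drawFrame() override { std::puts("submitting frame via Direct3D 11"); }
};

struct VulkanRenderer : IRenderer {
    void drawFrame() override { std::puts("submitting frame via Vulkan"); }
};

static std::unique_ptr<IRenderer> makeRenderer(const std::string& api) {
    if (api == "vulkan") return std::make_unique<VulkanRenderer>();
    return std::make_unique<D3D11Renderer>();  // default backend
}

int main() {
    auto renderer = makeRenderer("vulkan");    // chosen from config or a launch option
    renderer->drawFrame();
}
```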

several light sources (light sources only sometimes).
I can confirm. I lose 15-25% of my framerate when I turn on the SRV lights. However, this doesn't seem to happen with ship lights.
 
Greetings. I am experiencing low GPU load at settlements (somewhere around 40% according to Windows Task Manager). This comes with extremely low FPS values. The CPU load is heavy (60-80%), but not 100%. Changing settings and lowering the resolution does not affect anything (I use 1080p). Changing drivers does nothing. Using any version of FSR does nothing. My PC specs are obsolete, I understand that - i5 3350/RX 570/16GB. However, as I see it, the game just does not use all the available resources. Does anyone have similar issues? Any ideas why this would happen? If so, I created an issue on the issue tracker: https://issues.frontierstore.net/issue-detail/42302
Check your VRAM, you might find the bottleneck is there.
 
A good parallel is No Man's Sky, as it generates planets the exact same way Elite does: algorithmically via a seed. This micro-spiking phenomenon is not present in NMS (I own it and have tested it). I'm certain there are other algorithmically generated games out there that could back this up as well.

Besides, if Horizons was able to generate its planets algorithmically and not experience this micro-spiking of the CPU, there isn't really a justifiable reason Odyssey can't do that as well. Additionally, it isn't dynamically generating the planet from the seed actively while you are on it; that is generally done far before you ever get there (especially since you can view planets from the system map without even needing to be in the same star system). Once you land, the terrain has been generated, and all that it really needs to do after that is rendering and swapping out LOD models/textures.

I've also seen some claim that the poor performance is "normal" because it is "rendering an entire planet at once," which is wrong, since any self-respecting developer knows that frustum and occlusion culling were specifically designed so that you don't have to render everything at once. If EDO is rendering the entire planet constantly, even when you aren't looking at it or can't see it, then the engine is broken.
That is interesting. Do you think they can solve this, then? That the issue really is simply an issue?
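For anyone unfamiliar with the terms in the quote above, frustum culling boils down to a cheap plane test done before anything is submitted to the GPU. A toy sketch (one plane and bounding spheres; real engines test all six frustum planes, and this is not Cobra's implementation):

```cpp
#include <cstdio>
#include <vector>

// Toy frustum culling: a plane is ax + by + cz + d = 0 with the normal
// pointing into the visible volume; an object's bounding sphere is culled
// if it lies entirely behind any plane.
struct Plane  { float a, b, c, d; };
struct Sphere { float x, y, z, r; };

static bool inFrustum(const Sphere& s, const std::vector<Plane>& planes) {
    for (const Plane& p : planes) {
        float dist = p.a * s.x + p.b * s.y + p.c * s.z + p.d;
        if (dist < -s.r) return false;  // completely behind this plane -> skip drawing
    }
    return true;
}

int main() {
    // A single "near" plane facing +z at z = 0, i.e. only z > 0 is visible.
    std::vector<Plane> frustum = {{0, 0, 1, 0}};
    Sphere inFront = {0, 0, 5, 1}, behind = {0, 0, -5, 1};
    std::printf("object in front: %s\n", inFrustum(inFront, frustum) ? "draw" : "cull");
    std::printf("object behind  : %s\n", inFrustum(behind,  frustum) ? "draw" : "cull");
}
```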
 
Yesterday my friend and I were running some tests in a surface conflict zone. The interesting thing was that when the round was completed, the FPS rose from 30 (while in combat) to 40-45 (combat complete, all NPCs stationary). So I think the part of the game controlling the NPCs does take a lot of resources, since after the combat is over there is the same number of NPCs, but they are stationary.
 