Testing frame rates on 4 different machines. Will update.

I'm running Odyssey at 1080p on a laptop with a modest 6 GB GTX 1660 Ti and an i5-9300H (four cores with HT) and 16 GB RAM. Most settings are on medium, some on high, and terrain on ultra. Oddly enough, I'm getting mostly passable frame rates (vsynced at 60 Hz) when flying in space, docking at stations, and on foot on the majority of planet surfaces. I can usually hit the 60 fps vsync limit in those circumstances, with the notable exception of concourses and any settlements or dense POIs, where I get anywhere from 35 to the mid-50s.

However I'm having a weird issue.

Whenever I look closely at any large surface rock, my frame rates tank from 60 fps (vsync limited) down to around 50 fps. It doesn't matter whether the rock in question is surrounded by heavy rock scatter or is a lone boulder sitting all by itself on a planet surface. One would expect that filling the viewport with a single, huge, relatively simple rock mesh shouldn't affect frame rates to that extent. Something is fishy there...
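For perspective, a drop like that is easier to reason about as frame time than as fps. A quick back-of-the-envelope (plain Python, nothing game-specific):

```python
def extra_frame_time_ms(fps_before: float, fps_after: float) -> float:
    """Extra per-frame cost, in milliseconds, implied by an fps drop."""
    return 1000 / fps_after - 1000 / fps_before

# 60 fps -> 50 fps means each frame costs ~3.3 ms more:
print(f"{extra_frame_time_ms(60, 50):.1f} ms")
```

So that lone boulder is somehow adding roughly 3 ms of work per frame. For a single mesh filling the viewport, that points more at per-pixel shading or fill-rate cost than at geometry, though that's just my guess.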

I've submitted a ticket in addition to emailing my full DxDiag and details to the collection email for Odyssey performance data as I thought the problem was specific enough to warrant a ticket as well. Is anyone else experiencing the same?
 
Not directly related to your issue, but does the Nvidia driver have an anti-tearing mode that doesn't require vsync? Over in AMD land, Enhanced Sync nearly eliminates tearing without the downsides of vsync. I've been using it for over a year in both this game and Path of Exile, and it works wonders; the latter is a top-down action RPG, so tearing is extra hideous there when it happens.
 
My config:
Windows 10 build 19041, 150 GB SSD dedicated to Windows only
Intel i5-6600K, 3.5 GHz, 4 cores, Hyper-V enabled
MSI 970 Ti OC, 4 GB VRAM
16 GB RAM
1 TB SSD dedicated to game installs only
ASUSTeK Z170 Pro Gaming motherboard


<BlurEnabled>true</BlurEnabled>
<AOQuality>0</AOQuality>
<DOFEnabled>1</DOFEnabled>
<BloomQuality>2</BloomQuality>
<EnvmapQuality>1</EnvmapQuality>
<MaterialQuality>1</MaterialQuality>
<EnvironmentQuality>0</EnvironmentQuality>
<FXQuality>2</FXQuality>
<GalaxyMapQuality>1</GalaxyMapQuality>
<GUIColourQuality>0</GUIColourQuality>
<TerrainQuality>1</TerrainQuality>
<TerrainLodBlendingQuality>0</TerrainLodBlendingQuality>
<SurfaceMaterialQuality>2</SurfaceMaterialQuality>
<JetConeQuality>2</JetConeQuality>
<VolumetricsQuality>0</VolumetricsQuality>
<ShadowQuality>1</ShadowQuality>
<TextureQualityEx>2</TextureQualityEx>
<AAMode>0</AAMode>
<TextureFilterQuality>3</TextureFilterQuality>
<SurfaceSamplerQuality>0</SurfaceSamplerQuality>
<PerformanceQualitySetting>0</PerformanceQualitySetting>
<ResolutionSetting>0</ResolutionSetting>
<LODDistanceScale>0.900000</LODDistanceScale>
<HMDRenderTargetMultiplier>1.000000</HMDRenderTargetMultiplier>
<SSAAMultiplier>0.750000</SSAAMultiplier>
<GpuSchedulerMultiplier>0.000000</GpuSchedulerMultiplier>
</Root>
<DisplayConfig>
<ScreenWidth>1920</ScreenWidth>
<ScreenHeight>1080</ScreenHeight>
<VSync>false</VSync>
<FullScreen>1</FullScreen>
<PresentInterval>1</PresentInterval>
<Adapter>0</Adapter>
<Monitor>0</Monitor>
<DX11_RefreshRateNumerator>60000</DX11_RefreshRateNumerator>
<DX11_RefreshRateDenominator>1000</DX11_RefreshRateDenominator>
<LimitFrameRate>false</LimitFrameRate>
<MaxFramesPerSecond>30</MaxFramesPerSecond>
<GraphicsOptions>
<Version>1</Version>
<PresetName>Custom</PresetName>
<StereoscopicMode>0</StereoscopicMode>
<IPDAmount>0.001000</IPDAmount>
<AMDCrashFix>false</AMDCrashFix>
<FOV>56.249001</FOV>
<HumanoidFOV>56.249001</HumanoidFOV>
<HighResScreenCapAntiAlias>3</HighResScreenCapAntiAlias>
<HighResScreenCapScale>4</HighResScreenCapScale>
<GammaOffset>-0.272040</GammaOffset>
<DisableGuiEffects>false</DisableGuiEffects>
<StereoFocalDistance>25.000000</StereoFocalDistance>
<StencilDump>false</StencilDump>
<ShaderWarming>true</ShaderWarming>
<VehicleMotionBlackout>false</VehicleMotionBlackout>
<VehicleMaintainHorizonCamera>false</VehicleMaintainHorizonCamera>
<DisableCameraShake>false</DisableCameraShake>
</GraphicsOptions>
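For anyone reading the snippet above: DX11_RefreshRateNumerator/DX11_RefreshRateDenominator encode the refresh rate as a fraction, and (assuming SSAAMultiplier scales the render resolution per axis, which is how it appears to behave) the effective internal resolution can be sanity-checked like this:

```python
# Values copied from the config above.
refresh_num, refresh_den = 60000, 1000
width, height, ssaa = 1920, 1080, 0.75

refresh_hz = refresh_num / refresh_den                    # effective refresh rate
internal = (round(width * ssaa), round(height * ssaa))    # internal render resolution
print(refresh_hz, internal)  # 60.0 (1440, 810)
```

So this rig is rendering at roughly 1440x810 internally and upscaling to 1080p, which is worth keeping in mind when comparing the fps numbers below.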


Almost 60 fps at "small" stations
30-60 fps at "big" stations
150-350 fps in space
55-90 fps in the SRV or on foot
20-60 fps at settlements, depending on how many NPCs are inside

Conflict zones: 20-30 fps at night, 25-40 during the day.

I had the worst fps at mining settlements, around 20-35 when near the extractor beams.
 
I started a thread yesterday about the massive frame rate drops when I stand near an internal window. I get 60 fps around most station interiors, but it drops to 30 fps when I look through the window at Vista/Astra/Pioneer. At the same time, my CPU and GPU usage drop (I can even hear the fans spinning down while my FPS is plunging).
 

Both AMD's Enhanced Sync and NVIDIA's Fast Sync eliminate tearing the same way vsync does, by only flipping buffers on a refresh, but they let the renderer run nearly unconstrained and simply drop the frames that don't make it to the screen.
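A rough sketch of that presentation policy (my own simplified model, not the drivers' actual code): the renderer finishes frames as fast as it can, and at each display refresh only the newest completed frame is shown, so surplus frames are dropped rather than torn.

```python
def fast_sync_presented(frame_done, vblanks):
    """Fast Sync-style presentation: at each vblank, show the newest
    completed frame; extra frames are dropped with no back-pressure
    on the renderer. Both inputs are sorted timestamps in seconds."""
    shown, last, j = [], None, -1
    for vb in vblanks:
        # Advance to the newest frame finished by this vblank.
        while j + 1 < len(frame_done) and frame_done[j + 1] <= vb:
            j += 1
        if j >= 0 and frame_done[j] != last:
            last = frame_done[j]
            shown.append(last)
    return shown

# GPU finishing a frame every 1/180 s on a 60 Hz display, over 1 second:
frames = [(i + 1) / 180 for i in range(180)]
vblanks = [(k + 1) / 60 for k in range(60)]
print(len(fast_sync_presented(frames, vblanks)))  # 60 of 180 rendered frames reach the screen
```

The point is the renderer's 150-350 fps in space costs latency nothing and tears nothing; only one frame per refresh is ever displayed.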
 
Don't know how useful this info may be but here goes:

CPU: Intel i9 9900K
GPU: RTX 2080 Super
RAM: 32 GB
Installed on NVMe drive

Running at 1080p with no supersampling, I'm averaging anywhere between 70 and 90 FPS inside stations and around 70 in (mostly) empty settlements. That's a significant improvement of roughly 10-20 FPS on average compared to before the update.

Surface Conflict Zones, however, still run at the same ~40 FPS they did before the update. Even when I turn all settings down from High to the lowest possible, the FPS does not change whatsoever.
 
I can confirm that main RAM timings are a dominant factor in my case (i7-4790K, 32 GiB DDR3-2400, Radeon RX 6900 XT, 2560x1440 @ 120 Hz with high settings, blur and depth of field off). Reloading XMP profile #2 in the BIOS took me from rarely going above 50 fps in the tutorial to sitting around 70 and rarely dipping below 60. This doesn't cure the slow resource leak I've been hunting, but it raises the baseline considerably. I haven't yet confirmed whether the game is mainly latency- or bandwidth-constrained.

During gameplay I did encounter a rather large settlement that dropped the frame rate all the way to 30 (I had also switched to Ultra as my base settings). However, the GPU was not busy; it was still CPU/RAM limited.
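If anyone wants a crude way to see whether an XMP change actually moved the needle, here's a rough memory-bandwidth probe (plain Python, so treat the number as a lower bound useful only for before/after comparison, not as an absolute figure):

```python
import time

def copy_bandwidth_gb_s(size_mb: int = 256, repeats: int = 5) -> float:
    """Time large buffer copies; returns the best of several runs in GB/s.
    Counts 2x the buffer size per copy (one read pass + one write pass)."""
    src = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst = bytes(src)  # full read of src + full write of dst
        best = min(best, time.perf_counter() - t0)
        del dst
    return (2 * size_mb / 1024) / best

print(f"{copy_bandwidth_gb_s():.1f} GB/s (rough)")
```

Run it before and after toggling the XMP profile; the absolute value will be well below what a proper tool like AIDA64 reports, but the relative change should still show up.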
I happen to have the same CPU and RAM as you (but a GTX 1080). Thanks for that feedback, but what do you mean by "reloading the XMP profile" exactly?
 
No change after patch with a GTX 1060 6GB, still 45 FPS on default high settings at 1080p at the planetary surface.
 
Not all DIMMs have XMP information on them, and you may be able to tune further, but memory overclocking is a rather complicated topic. XMP may or may not count as an overclock at all, but it is the manufacturer's recommended settings. Often the only difference between, e.g., different-frequency RAM sticks is which profile is loaded.
 
OK, I was using the OC profile on my ASRock motherboard, so I guess I'm already running the RAM at its high-performance settings, but I'll check again.
 