Odyssey space has "too much contrast" - breaking down the rendering of a frame

I, like so many other people, was appalled by the hideous luminance
distribution and color saturation issues exhibited by the Odyssey
renderer. Since I'm a computer graphics developer myself and do consulting
work for critical graphics systems (think medical imaging: CT, MRI, and
such), I took a dive into the sequence of steps that go into rendering
a single frame of Odyssey.

For those curious about this: it's not really data mining in the
strictest sense. What you do is take a graphics debugger like
RenderDoc or Nvidia Nsight Graphics, which places itself between the
program under test and the graphics API and/or driver and meticulously
records every\* command that goes into drawing a frame. Then you can step
through the list of drawing commands one by one and watch how the picture
is formed.

The TL;DR is: it's not as simple an issue as a merely wrongly set gamma
value or a wrong tonemapping curve. What's actually going on is that
rendering a single frame of Odyssey consists of several steps, between
which intermediary images are transferred; and unfortunately some of these
steps are not consistent about the color space of the intermediaries
that are passed between them.
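To make concrete what an inconsistent hand-off between passes does to a pixel value, here's a minimal sketch. This is my own illustration using a simplified pure gamma-2.2 power curve, not Odyssey's actual shader code:

```python
# Illustrative sketch only: approximate sRGB with a pure power curve
# (gamma 2.2). The real sRGB transfer function is piecewise, but the
# effect shown here is the same.

def srgb_encode(linear):
    """Approximate linear -> display-ready (gamma-encoded) transform."""
    return linear ** (1.0 / 2.2)

def srgb_decode(encoded):
    """Approximate gamma-encoded -> linear transform."""
    return encoded ** 2.2

mid_gray = 0.5  # a linear mid-tone produced by one rendering pass

# Correct hand-off: encode exactly once, at the end of the pipeline.
correct = srgb_encode(mid_gray)        # ~0.73

# Buggy hand-off: a later pass assumes the value is still linear
# and encodes it a second time -- mid-tones get washed out.
double_encoded = srgb_encode(correct)  # ~0.87

# The opposite mix-up: an already-encoded value gets decoded as if it
# were double-encoded, crushing it toward black.
crushed = srgb_decode(mid_gray)        # ~0.22

print(correct, double_encoded, crushed)
```

Either way the mix-up goes, the mid-tones land far from where they should be, which is exactly the kind of contrast distortion we see.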

I'm now going to break this down in detail, so that you, my fellow
players, are able to understand what's going on.

Step 1: An environment map in the form of a cubemap is transferred from
a cubemap texture to a cubemap renderbuffer target. This is essentially
just a copy, but it would be possible to make color adjustments here
(which doesn't happen). This copy step is done mostly to convert the color
format from a block-compressed format (BC6) to an HDR packed floating
point format (R11G11B10 float), which is used for
the rest of the rendering process until the final tonemapped downsampling
into the swapchain output (i.e. the image sent to the monitor). There's
something peculiar about this cubemap, though, and I encourage everyone
reading this to take a close look at these images and keep them in mind
throughout the rest of the explanation.

01_cubemap_xn.jpg

01_cubemap_xp.jpg

01_cubemap_zp_target.jpg


Step 2: Skybox rotation. In this step a low resolution version of the
skybox is drawn rotated into a cubemap renderbuffer, so that the result is
aligned with the ship in space.

02_skybox_turn_in.jpg


Step 3: IMHO this is the most questionable pass captured in this frame.
What's happening here is that a couple of the target reticle variants are
drawn, several times, and in a very inefficient manner at that. This pass
generates over 800 events, and I've seen it in all of the frame dumps of
Odyssey I've looked at so far. It should be noted, though, that the
event numbers in the timeline are not strictly proportional to the actual
time spent on this pass. Still, this is what I've come to call the
"what?" pass.

03_reticle_pass.jpg


Steps 4 & 5: Next come two "mystery" passes. They are compute passes, and
without reverse engineering the compute shaders and tracing the data flow
forward and backward I can't tell what they do. I mostly ignored them.

Step 6: Here the shadow map cascade is generated, i.e. the ship is
rendered from the direction of the primary light source, with each pixel
encoding the distance of that particular visible surface of the ship to
the light source. This information is used in a later step to calculate
the shadows.

06_shadow_map.jpg
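The shadow-map test described here can be sketched as follows. This is a deliberately simplified, single-map toy version; the real pass renders a cascade of such maps on the GPU:

```python
# Simplified shadow-map test: a cascade works the same way, just with
# several of these maps covering different distance ranges.

def build_shadow_map(occluder_depths):
    """For each texel (as seen from the light), keep the depth of the
    nearest surface -- that is exactly what the shadow pass renders."""
    return {texel: min(depths) for texel, depths in occluder_depths.items()}

def in_shadow(shadow_map, texel, fragment_depth, bias=1e-3):
    """A fragment is shadowed if something sits closer to the light
    than the fragment itself (the bias avoids self-shadowing acne)."""
    nearest = shadow_map.get(texel, float("inf"))
    return fragment_depth > nearest + bias

# Toy scene: at texel (3, 4) the light first hits a surface at depth 2.0,
# with another surface behind it at depth 5.0.
shadow_map = build_shadow_map({(3, 4): [2.0, 5.0]})

print(in_shadow(shadow_map, (3, 4), 5.0))  # the far surface is occluded
print(in_shadow(shadow_map, (3, 4), 2.0))  # the nearest surface is lit
```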


Step 7: This is a deferred rendering G-buffer preparation pass.
Essentially, all the information about the
geometry of the model, together with several "optical" properties (color,
glossiness, surface structure), is encoded into the pixels of several render
targets (i.e. images). These intermediaries do not contribute to the final
image directly, but are used as ingredients for the actual image
generation. Of particular note here is the image view format with
which the diffuse component render buffer has been configured: it has been
set to sRGB, which means that rendering of this intermediary ought to
happen in linear RGB, with the values stored in this buffer translated
into sRGB color space (to a close approximation, sRGB is RGB with a gamma
of 2.4 applied; strictly speaking there's a linear part for small values
and a differentiable continuation into a power curve for larger values).
This is something that has the potential to cause trouble in deferred
renderers, but I am under the impression that it's fine here.

07_diffuse_buffer.jpg
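For reference, here are the exact piecewise sRGB transfer functions just mentioned. These are the standard formulas, nothing Odyssey-specific:

```python
def linear_to_srgb(c):
    """Exact sRGB encoding: a linear segment near black, then an
    offset power curve (exponent 1/2.4) above the cutover point."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055

def srgb_to_linear(c):
    """Exact inverse of the above."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# The two functions round-trip. An sRGB render-target view performs this
# conversion in hardware on every write and read, which is why getting
# the view type right (or wrong) matters so much.
for v in (0.0, 0.001, 0.18, 0.5, 1.0):
    assert abs(srgb_to_linear(linear_to_srgb(v)) - v) < 1e-9
```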


Steps 8 & 9: These steps essentially copy the depth buffer
and the G-buffer into new render targets (presumably to resolve their
framebuffers).

Step 10: Calculate Screen Space Ambient Occlusion (SSAO).

10_ssao.jpg


Step 11: SSAO is combined with gloss.

Steps 12 & 13: Shadow mapping and shadow blurring.

12_1_shadow_projection.jpg


Steps 14 & 15: Deferred lighting and ambient. At the end of these steps the
image of the cockpit geometry is finished and ready for composition with
the rest of the scene. Take note that I had gamma toning enabled for this
display, and as you can see, it looks correct.

14_deferred_lighting.jpg


This is a crucial moment to take note of the following: the intermediary
image of the cockpit geometry is rendered to an HDR render target, and the
contents of this target are linear RGB. Or in other words: everything is
fine up to this point (except for that weird "what?" pass).

Step 16: This prepares the view of the ship in the sensor display.
 
Step 17: Finally, this is where the trouble begins, as the skybox and the
stars are rendered into the same image that also holds the cockpit
geometry view. From this moment on, the values in those pixels are doomed:
no amount of clever tone mapping can bring them into comparable contrast
ranges after they have been baked together.

17_1_skybox_pass_stars.jpg


Also, it seems as if the skybox might already have its tones gamma mapped,
but I'm not sure about that. If the skybox is indeed pre-gamma-mapped, this
would be a mistake.
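To make the "baked together" problem concrete, here's a toy calculation. It assumes, as speculated above, a pre-gamma-mapped skybox composited into a linear target; the numbers are made up:

```python
# Once values from two different encodings have been added into one
# buffer, no single global curve can undo the mix-up.

def encode(x):
    """Approximate gamma encode (pure power curve)."""
    return x ** (1.0 / 2.2)

star_linear = 0.002     # faint star, correct linear value
cockpit_linear = 0.5    # cockpit surface, correct linear value

# Suppose the skybox arrives pre-encoded while the cockpit is linear,
# and both contribute to the same pixel (say, a star seen through the
# canopy glass). The composite bakes the mismatch in:
baked = encode(star_linear) + cockpit_linear   # ~0.559

# The correct sum, had both been linear:
true_sum = star_linear + cockpit_linear        # 0.502

# To fix `baked` we would need to decode ONLY the star's contribution,
# but after the add that contribution can no longer be separated out.
print(baked, true_sum)
```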

Still, at this point the image looks okay-ish. Barnard's Loop is
clearly discernible and the Milky Way is faintly visible. We can blow
up the tone ranges, and also bypass the gamma, to see how this looks with
and without gamma and at different contrast.

17_y_skybox_pass_gamma_bypass.jpg

17_z0_skybox_pass_range.jpg


Step 18: In this step the HUD is drawn in, element by element. Here
I only show the final result. *Of particular note is that the HUD
elements look as they do in "normal" screenshots if gamma bypass is
enabled for the debug display.*
This means that all the HUD elements are
rendered in with the basic gamma ramp already pre-applied. So if we
presume that the render target so far is supposed to be in linear RGB,
then the HUD elements will inevitably get gamma transformed to a higher
power, which is not correct; the result will be far larger contrast in
the HUD elements, with things being either really bright or really dark.

18_HUD_gamma_bypass.jpg

and here with an additional subsequent gamma transform
18_HUD_gamma.jpg


After this the final tonemapping pass is applied. And this is where the
digestive end product hits the rotary air impeller. In the pixel shader
for this tonemapping step, what could be understood as a gamma transform
is applied. The resulting image, still in linear RGB, already looks like
the presumably desired end result. It looks okay: the HUD is readable,
albeit a little over-contrasted and oversaturated, and we can still
clearly see Barnard's Loop and a hint of the Milky Way. I'd not deem this
"perfect", but it's perfectly acceptable within the constraints of
a product release.
20_tonemapping.jpg

However, because the image is untyped, the graphics system will
apply a gamma transform of its own, with the familiar, unpleasant result.
20_tonemapping_gamma.jpg

One thing that stands out about this tonemapping step is a constant found
within the pixel shader: 64500. That value is awfully close to 65535,
i.e. 2\*\*16-1. My guess is that a (junior?) developer had trouble with
the value ranges, dumped an image, took its maximum value x, punched 1/x
into their calculator and rounded to the next multiple of 100 (why 100 and
not 128?).
20_tonemapping_pixelshader.jpg
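A quick numeric check of that suspicion (pure speculation on my part, based only on the constant's value):

```python
# If 16-bit values (0..65535) are normalized with 1/64500 instead of
# 1/65535, everything comes out slightly too hot at the top of the range.
true_max = 2 ** 16 - 1    # 65535
shader_constant = 64500   # the constant found in the tonemapping shader

overshoot = true_max / shader_constant
print(overshoot)  # ~1.016, i.e. a ~1.6% overshoot for full-range values
```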

Now think back to the first step, the cubemap: that thing looks good.
And it's aligned with the ship, so clearly this cubemap is not
prerendered, but rendered somewhere else. So these frame captures don't
tell the whole story; there are even more renderers involved,
operating independently.


Among "experienced" greybeard developers it's common knowledge that the
code structure of a program reflects the organizational structure of the
entity that developed it (Conway's law). These frame command dumps are
a reflection of the code structure that produced them, so by second-order
indirection they allow me to take a few guesses. The whole thing reeks of
a communication problem. Within the generation of a frame there seem to be
(at least) three different "teams" working on the same thing: the people
who develop the astronomical renderer, the people who develop the ship's
cockpit renderer, and the UI renderer team (which produces the HUD); and
I got the impression that there's a severe lack of communication between
them. It's at the interfaces between them where the mishaps occur.
 
This is a fascinating and educative read and a wonderful piece of investigation.

Too bad FDev doesn't read the forum. You should have posted this as a YouTube video... that seems to get their attention.

What's sad is that their operation probably would have taken a month+ to identify this problem as directly as done in the OP. What's worse is that it wasn't identified in the months leading up to release.
 
This makes perfect sense and confirms what I've been thinking in my head...

As a CG generalist, the first thing I said when I looked at the dark rendering was that the color space is off.

I was almost certain it either didn't have the final linear → sRGB gamma applied, or it was double-gamma'd.


I don't do the tweeter, but someone should tweet this at FDev.
 


Thank you for such a deep analysis of the rendering system. I like the way you think, it’s very logical and makes it (relatively) easier to understand such a complex topic.

It sounds like the process really falls apart towards the end, but I didn’t quite understand the final sentence:

> However, because the image is untyped, the graphic system will apply a gamma transform of its own, with the familiar, unpleasant result.

What’s an ‘untyped image’? And are you saying a gamma (contrast?) layer is effectively being applied twice because it’s untyped?

Also, FDev would have seen the final result prior to alpha. I can only assume they are ok with the dark output, or are things like this difficult to change?
 
> What's an 'untyped image'? And are you saying a gamma (contrast?) layer is effectively being applied twice because it's untyped?
>
> Also, FDev would have seen the final result prior to alpha. I can only assume they are ok with the dark output, or are things like this difficult to change?

It's like metadata that goes with the image data and tells the underlying graphics system how to interpret it. It sounds like the renderer is taking the default approach and not declaring a specific type, so the graphics layer (DirectX) makes an assumption and does the standard things it does with that default type. Which, in this case, results in redundant applications of gamma correction.
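To illustrate, here is what the same stored byte means under the two possible interpretations (standard sRGB math; the choice of byte value is arbitrary):

```python
# The same byte in a render target means very different things depending
# on whether the view type says "UNORM" (linear) or "UNORM_SRGB".

def srgb_to_linear(c):
    """Standard sRGB decoding to linear."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

byte = 128                          # one stored 8-bit channel value
as_unorm = byte / 255.0             # read as linear:   ~0.502
as_srgb = srgb_to_linear(as_unorm)  # read as sRGB:     ~0.216 linear

# If the image is left untyped, the consumer picks one of these
# interpretations by default; if it picks the wrong one, an extra
# (or missing) gamma step is the result.
print(as_unorm, as_srgb)
```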

As for FDev not caring about releasing a "complete" or "good" product: this isn't about aesthetics; things being darker may even be what they want. But the images are wrong at both ends of the spectrum: things that are bright are too bright, things that are dark are too dark, and there are even negatives (like when using the FSS and seeing the Milky Way as a negative). What we got in this release is wrong. It can't in any way be mistaken for a creative choice.
 
> What we got in this release is wrong. It can't in any way be mistaken as a creative choice.

Very well said!
 
What is black stays black, but some parts of the picture can still be "rescued/recovered" thanks to postprocessing/shaders:

A7yvlzE.png


EZjy0e4.png
 
Yeah, you can polish the turd a tad... I found a decent combo of ReShade filters that made it look better in the ship, but as soon as you got onto a planet, especially inside a building, it was way too bright and blown out, since the gamma for on-planet stuff seems to be somewhat better to start with.
 
> Yeah, you can polish the turd a tad...
Nvidia Ansel has profiles... with hotkeys :ROFLMAO::ROFLMAO::ROFLMAO:
 