...good:
1.) Planet texture LOD is better to my eyes, and I like the terrain I'm seeing now. Ice worlds seem improved a good bit.
...
Bad:
...
3.) Pretty bad 'noise' or 'pixelation' effect now when the terminator on the planet is just entering or leaving view.
4.) Low angle texture on planets seems much worse to me now. Looks grainy or noisy even with built-in upscale.
...
The Very Bad:
1.) AMD FSR - I hate this. Worse performance and worse appearance - I would love to have an option for the version from update 6.
...
I'd speculate the source of the problems in the "bad" sections fundamentally springs out of that first "good", intimately coupled with the game's lack of decent antialiasing.
Terrain LODs now have so much detail, both in frequency (how often elevation gradients change over distance) and amplitude (...by how much), that they just do not match up with output resolutions.
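To put rough numbers on that mismatch, here is a back-of-the-envelope sketch (entirely my own toy, with made-up function and parameter names - nothing from the game): the highest elevation-detail frequency a display can actually resolve without aliasing, per the Nyquist limit.

```python
# Rough Nyquist view of why high-frequency terrain detail turns to noise
# at a given output resolution. All names here are my own invention.
import math

def max_representable_frequency(distance_m, vertical_res_px, vfov_deg):
    """Highest terrain detail frequency (cycles per metre) the screen can
    show without aliasing, for ground at `distance_m` filling the view."""
    # metres of ground covered by one pixel at that distance
    metres_per_pixel = (2 * distance_m
                        * math.tan(math.radians(vfov_deg) / 2)
                        / vertical_res_px)
    # Nyquist: at least two pixels per cycle of elevation change
    return 1.0 / (2.0 * metres_per_pixel)
```

At roughly 1 km distance, a 1600-pixel-tall view with a 60° vertical FOV covers about 0.7 m of ground per pixel, so any elevation detail repeating more often than about every 1.5 m can only alias - no matter how good the LOD itself is.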
Take the crater outlook view from the plateau where the mission objective is located, in the Odyssey version of the SRV tutorial:
- With Terrain detail: "Ultra+" in update 6, the distant slopes of the crater, and the land beneath them, look like nothing to me -- just a random jumble of gray and yellow pixels, without any discernible underlying undulating ground surface they would occur across - exacerbated by them inherently aliasing differently for the left and right eye, in VR. (It does of course not help that I am looking at them with the local star right behind me, which makes the lighting rather flat.)
- Lower terrain quality settings mitigate this, but reduce terrain detail to unacceptable levels - everything becomes blurry.
- With update 5, the aliasing had been greatly ameliorated, to the point I could get the shimmering down to a manageable degree with "Ultra for capture", by applying x1.25 supersampling. With update 6, not even x2.0 helps. This has nothing to do with CAS or FSR - it should be all down to the LOD balancing.
Ever since its "alphas", Odyssey's planets have looked to me much like when you go at it with the spray can tool in a paint program on a monochrome screen - all areas are two-tone fuzzballs, whose aliasing causes them to "fizz", as your vantage point moves. This is exacerbated by the new materials, which often feel excessively shiny, to the point a low albedo surface seems able to go from white to black over only a degree or two of changing angle to the light source...
With Horizons, everything was blurry, endless pseudo-random noise (incidentally: due to its softness, it actually seems to FSR pretty well, if a quick test I made is anything to go by); with Odyssey, a lot looks like endlessly dithering granite patterning, which looks a ton more interesting and varied, but quickly becomes so overwhelmingly dense that it comes across as noise anyway. :7
The terrain generation team has the unenviable task of creating a system that just-in-time composites multiple sources of elevation data (some of which are likely themselves generated at the time they are needed, as opposed to being loaded from a (much maligned) library of assets) to produce life-like patches of terrain geometry, along with layers (and types) of texture to go with them, containing additional detail beyond what is in the meshes, before those details transition over to actual geometry in the next, more detailed LOD (these textures are apart from the "wallpaper" ground textures). And they need to do this for stretches of land that are unthinkable in most games.
In most games, the map makers will have been able to hand-model, or overnight pre-render, a high fidelity but low play-time-rendering-impact imposter for, say, a distant mountain - maybe with baked lighting and shadows - or, if there is a day/night cycle and weather, a prebaked mesh and/or normal map that an adapted version of their dynamic lighting model can work with. They can also precalculate and hand-tailor occlusion culling, so that no matter where you are, no computing is wasted on evaluating stuff you can't see from there anyway.
With the terrain generating as you play, however, there is no affording such pre-assembled background scenery, lighting, and optimisations - it all has to be produced on the fly, and lit dynamically, because the environment is unknown until generated, huge, and, although static in itself, moving within its star system.
So if a pixel on-screen shows, let's say, a forest area: for an accurate reproduction its value should represent the average of everything contained within the part of that forest the pixel projects over, including that tiny snow-covered clearing in its middle. But we do of course not have the infinite resources to sample at such resolution for something that takes up just a single pixel on the rendered frame, so we only look at what is right in the centre of that pixel, which can be either the tree coverage or the tiny clearing, making the pixel flick between green and white as we look around. This is aliasing - in the case of textures it is usually dealt with through mipmaps; polygon edges require different handling.
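The clearing-in-a-forest example can be sketched in a few lines (purely illustrative, my own toy): one point sample per pixel flickers with tiny viewpoint shifts, while averaging over the pixel's footprint - which is roughly what a mipmap precomputes - stays stable.

```python
# Toy illustration (mine, not from the game): point-sampling a two-tone
# "forest with a snow clearing" at one sample per pixel vs averaging.
def ground(x):
    # 1.0 = snow clearing in the middle of the pixel's footprint [0, 1],
    # 0.0 = tree cover everywhere else (clearing covers 10% of the area)
    return 1.0 if 0.45 <= x <= 0.55 else 0.0

def point_sample(offset):
    # one sample at the (slightly shifting) centre of the pixel
    return ground(0.5 + offset)

def area_average(n=1000):
    # average many samples across the whole footprint (mipmap-style)
    return sum(ground(i / n) for i in range(n)) / n

# point_sample(0.0) hits the clearing (white); point_sample(0.1) misses it
# (green) - the pixel "fizzes" as the vantage point moves. area_average()
# stays near the true 10% snow coverage regardless.
```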
Conversely, if that's not a clearing down there but something like a lake in the middle of a forest, maybe it should cause overpowering glinting when observed from the right angle, beyond the percentage of land it takes up - much like our sparkly snow. It quickly becomes complex. :7
...and all this is just the bit where you render frames from the already generated LODs. The generation of the LODs itself confers a stage of aliasing, as you sample from your various heightmap sources, bash them together, and try to match the resolution of the produced LOD to how many pixels it will be drawn across on-screen (...so dependent on distance from the observer, resolution, and field of view), in order to avoid both opposing extremes of aliasing and blurring. Here too, you cannot afford to average millimetre-scale detail into a one-texel-per-acre LOD, and you can't have predone mipmaps of a lower-scale heightmap, since it will itself become the product of several source ones when you get closer. So you end up taking data only from the larger-scale heightmaps, and thus produce soft, low resolution normal maps, and consequently mile-wide ice craters that look like little plastic toys... but then, when going on to render frames, maybe you draw a ground texture onto that low resolution bottom layer that has magnitudes higher resolution, and which aliases so strongly that it looks like somebody has poured baking soda on the ice (...although one would assume it should have mipmaps)...
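For what it's worth, the LOD-balancing act described above - matching a patch's texel density to how many pixels it will span, as a function of distance, resolution, and field of view - might look something like this. A speculative sketch of my own; all names are invented and this is not Frontier's code.

```python
# Hypothetical LOD-resolution picker: choose texels-per-patch-edge so that
# texel density roughly matches on-screen pixel density. My own toy.
import math

def patch_lod_texels(patch_size_m, distance_m, screen_width_px, hfov_deg,
                     texels_per_pixel=1.0):
    """Texels along one patch edge. `texels_per_pixel` > 1 biases toward
    detail (and aliasing), < 1 toward blur - the trade-off in question."""
    # how many screen pixels one metre of ground covers at that distance
    pixels_per_metre = screen_width_px / (
        2 * distance_m * math.tan(math.radians(hfov_deg) / 2))
    pixels_across_patch = patch_size_m * pixels_per_metre
    # round to a power of two, as mip chains usually expect
    return 2 ** max(0, round(math.log2(
        max(1.0, pixels_across_patch * texels_per_pixel))))
```

E.g. a 1 km patch seen from 5 km away on a 1920-pixel-wide, 90° FOV view spans about 192 pixels, so a 256-texel edge is in the right ballpark; the same patch up close wants a far denser LOD.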
If LOD biasing is an insurmountable problem, with the range of scales from astronomical down to human-on-ground, maybe we need to swallow the cost of (render time) filtering balanced more toward quality than performance at some stage. Or maybe the terrain procgen needs to implement an extra factor (assuming it doesn't already), to produce LODs that are built with anisotropy in mind, so that the distant LODs in a vista seen from the ground get lower mesh density than the same seen from above (...but the same texture density - we do have anisotropic filtering (...and hopefully mipmaps of generated textures), supposedly)... I am just blathering anyway...
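The anisotropy idea could be sketched like so (again entirely my own speculation, not an existing engine feature): scale a patch's mesh density by the viewing elevation, so terrain seen edge-on at a grazing angle from the ground gets coarser geometry than the same terrain seen face-on from above.

```python
# Hypothetical anisotropy-aware LOD factor. A flat plain viewed at a
# grazing angle projects onto few screen rows, so coarser geometry can
# suffice; seen from directly above, it needs full density. My own toy.
import math

def aniso_mesh_density(base_density, view_elevation_deg, floor=0.25):
    """Fraction of base vertex density to use for a distant patch.
    `view_elevation_deg`: 90 = looking straight down, 0 = grazing."""
    scale = math.sin(math.radians(view_elevation_deg))
    # never drop below a floor, so silhouettes don't collapse entirely
    return base_density * max(floor, scale)
```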
FSR is not the cause of a problem here; only the magnifying glass that makes it stand out that much more clearly. As it blows up its source frames, it inherently also brings every imperfection within them to the front, "crawling ant" jaggies and all (I had one plain I was landed on appear as if it was "swimming" across the view when I took off). Hence AMD's stressing that the source imagery must be rendered with good antialiasing. This might be slightly less of a problem with DLSS (at the cost of other artifacts), since it (to oversimplify) has temporal antialiasing (TAA) "built in".
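A trivial illustration of the "magnifying glass" point (my own toy using nearest-neighbour scaling - FSR's actual algorithm is edge-adaptive and much smarter, but the principle stands): a spatial upscaler can only work with what is in the source frame, so a hard stair-step in an aliased low-res render just becomes a bigger stair-step, while an already-filtered edge survives the blow-up.

```python
# One scanline across an edge: upscaling magnifies whatever the source
# contains, artifacts included. Purely illustrative.
def upscale_nearest(row, factor):
    # nearest-neighbour magnification of one scanline of pixel values
    return [v for v in row for _ in range(factor)]

aliased  = [0, 0, 1, 1]        # hard stair-step from an unfiltered render
filtered = [0, 0.25, 0.75, 1]  # same edge with some antialiasing applied

big_aliased  = upscale_nearest(aliased, 2)   # the step is now twice as wide
big_filtered = upscale_nearest(filtered, 2)  # the gradient is preserved
```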
(Somebody might feel inclined to bring up UE5's "Nanite"... If so, they'll have to themselves bring technical insight into its inner workings and shortfalls, because I have none. :7)