TEMPORAL ANTI-ALIASING like this would be awesome in Elite Dangerous.

I'd be fine with Frontier implementing the superior D3D9 render path in the game as an optional parameter so I can freely force SGSSAA via the driver.
I'd guess this would be easier than implementing temporal anti-aliasing, which is a blurry joke.

Someone above mentioned that the devs of Warframe recently (a year ago though...) implemented temporal anti-aliasing and that it looks awesome in his opinion. Please force Warframe to run in D3D9 and force SGSSAA to redefine your definition of "awesome anti-aliasing".

Yet I'd also be interested in CTAA being implemented in ED, so count me in! (Though we all know by now how horribly bad Frontier is at implementing proprietary features. Still waiting for Nvidia 3D Vision support. Wink wink.)

Elite is not DX9; I think it's DX11 now. Could be DX10 in some places.
 
This new [noob] Tempting Auntie Aisling :p thing would be awesome[yesnod]

Seriously though, I really was blown away by that demo. That said, I'm playing on an Oculus Rift CV1 with a 1080 Ti driving it, so really high/ultra quality settings and HMD SS @ 1.5, and the icy 5th planet of LTT377, where I was working the haz res for the Canonn bounty hunting CG, was literally taking my breath away.

I did also notice that the license for the temporal anti-aliasing is only $495 :eek: Why don't we start a Kickstarter to buy FD a license for use in Cobra / ED?

http://www.livenda.com/buy-now
 
I hate this kind of before-and-after video; it will take me weeks to return to my state of blissful ignorance where my brain automatically filters out the artifacts. In fact, for the first 30 seconds of the video my thought was: meh, on and off look about the same. Then the filtering shut down, and off became: oh god make it stop, and on became: ooh, that's better.

So thanks (even if it is 2 years old) for that ...
 
+1. The aliasing in Elite is one area that lets all the visuals down.

I can imagine it's not the easiest thing to implement with compute-shader-generated terrain though :)
 
This hadn't crossed my mind... :(
Hopefully, Frontier will find a way to deal with those vertices!

Some discussion on this here

For tiled compute shader deferred renderers you can instead "bucket" per-sample pixels into a list that you build in thread group shared memory, and handle them separately after shading the first sample of all pixels.

I don't really follow but I'm sure the FD team understand the issues better than anyone.
 
Some discussion on this here.

It's an interesting discussion, albeit a little old. There's really no talk about temporal antialiasing here though, but I found this bit to be very important:
"In those example pictures, there's a lot of "information" missing -- e.g. lines that have pixels missing from them, so they've become dashed lines rather than solid lines.
Post-processing AA techniques (like FXAA) will not be able to repair these lines.
MSAA / super-sampling techniques will mitigate this issue, making the threshold for a solid line turning into a dashed line smaller."
This means that the aliasing and shimmering in Elite Dangerous won't be fixed by MSAA; it'll only reduce the effect, not eliminate it.

We really NEED temporal antialiasing. ;)
 
I don't think bad anti-aliasing code is to blame, more the design choice of having so much fiddly high-contrast detail (which is partly what gives it scale). I'd be surprised if it looked smooth even rendering brute force at 8k or 16k resolution and scaled down.

Do we know how much of a frame-rate hit temporal anti-aliasing would be?

I've run at 2x super-sampling and the aliasing is somewhat improved, but still quite noticeable. It would be better if the subsample points were on a rotated grid, but that's not currently an option. The temporal AA frame-rate hit is claimed to be small.
 
It's an interesting discussion, albeit a little old. There's really no talk about temporal antialiasing here though, but I found this bit to be very important:
"In those example pictures, there's a lot of "information" missing -- e.g. lines that have pixels missing from them, so they've become dashed lines rather than solid lines.
Post-processing AA techniques (like FXAA) will not be able to repair these lines.
MSAA / super-sampling techniques will mitigate this issue, making the threshold for a solid line turning into a dashed line smaller."
This means that the aliasing and shimmering in Elite Dangerous won't be fixed by MSAA; it'll only reduce the effect, not eliminate it.
MSAA helps on the edges of polygons; with alpha-to-coverage it also applies to transparent textures like fences or foliage.
 
MSAA helps on the edges of polygons; with alpha-to-coverage it also applies to transparent textures like fences or foliage.

So do SMAA, MLAA, CMAA, etc., though not as efficiently as MSAA, of course. :) But MSAA will not be able to counter single-pixel lines. For that, the anti-aliasing method needs more information, and that's where the temporal part comes into play.
That's what this whole discussion is about. :)
 
Temporal AA works by keeping geometric samples from previous frames. By doing this, it greatly reduces the need to sample geometry, so more samples can be combined on a given frame to produce MUCH better averaging.

Here is a very simple example.

You put a grid over a window and look through it at a chess board. A pixel is one of those grid holes. You need to write down a value for each hole, and the resulting set of values produces a bitmap that can be shown to someone else, reproducing what you see through that window.

Now, a human sees ALL of the view through each grid hole. But a computer has to fire an infinitely thin ray that hits part of that view, and the ray sends back the colour / luma value at that point. THAT is a sample.

If you use one sample, your average for that grid hole is going to be VERY wrong. This is called aliasing.

Let's imagine you can see the division between a white and a black chess board tile through that hole. One sample may be white, or black. Which is correct? Neither; the view is not white or black. It's part of a white tile, and part of a black tile.

So, the important thing is to produce a grey value that represents the proportion of white to black tile visible through that grid hole. If we shoot 8 samples, we get a much better average. If we fire 16 samples, it's even better. As we shoot more samples, you get closer to a theoretical perfect average (convergence). But you also use more and more GPU time working out what a single pixel will be.
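The chess-board averaging above can be sketched in a few lines of Python. This is purely a toy illustration: the 30% white split, the sample counts, and the function name are arbitrary values I've chosen, not anything from a real renderer.

```python
import random

def pixel_value(n_samples, white_fraction=0.3, seed=1):
    """Estimate one grid hole's grey value by firing n infinitely thin rays.

    The hole straddles a tile boundary: 30% of the view is white tile (1.0),
    the rest black tile (0.0), so the true average is 0.3.
    """
    rng = random.Random(seed)
    hits = sum(1.0 if rng.random() < white_fraction else 0.0
               for _ in range(n_samples))
    return hits / n_samples

for n in (1, 8, 16, 4096):
    print(n, pixel_value(n))
```

With a single sample the result can only ever be 0.0 or 1.0 (pure aliasing); as the sample count rises the estimate converges on the correct grey, at the cost of more work per pixel.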

There has to be a balance.

Traditionally, you choose your sample quantity based on your desired frame rate OR quality. It's a trade-off between the two. For a video game, the FPS is king, so we tend to use very low sample counts to get a high FPS.

TXAA keeps, say, the 8 samples from your first frame, so if we need to sample that area of the chess board in a following frame (even through a different grid hole), you now have 16 to work with. Yes, that's right, near double quality for near free.

Of course it's not free, as you have to store those samples and manage them, and re-write your render pipeline to know they are there and make use of them.

This is the basic gist of TXAA. Re-use of VERY valuable samples, to arrive at much higher averages (accuracy) per pixel.
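That sample re-use idea is commonly implemented as an exponential blend of each new frame into a per-pixel history buffer. The sketch below shows that general technique only; the blend factor and the one-pixel "frames" are made-up illustration values, and this is not Frontier's or any specific vendor's code.

```python
import random

def taa_accumulate(frames, alpha=0.1):
    """Blend each new frame into a running per-pixel history buffer:

        history = alpha * current + (1 - alpha) * history

    With a stable view this converges toward the true average, while
    paying for only one frame's worth of samples each frame.
    """
    history = list(frames[0])
    for frame in frames[1:]:
        history = [alpha * c + (1 - alpha) * h
                   for c, h in zip(frame, history)]
    return history

# One noisy pixel whose true value is 0.5: each frame contributes a
# single sample that is randomly pure white or pure black.
rng = random.Random(0)
frames = [[1.0 if rng.random() < 0.5 else 0.0] for _ in range(500)]
print(taa_accumulate(frames))  # hovers near 0.5 instead of flickering 0/1
```

The single per-frame sample flickers wildly between black and white, but the accumulated history settles close to the correct grey, which is exactly the "more samples for free" effect described above.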

It's amazing at handling aliasing when you have very small movements of the camera, something that normally is AWFUL. If your view remains pretty consistent over a number of frames, TXAA will really buff the quality of what you can see.

TXAA does not work well when you open your eyes, say, or do a 180-degree spin, as the first frame in the new view has no previous frames sampled. But there are strategies to get around this.
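One widely used strategy for this in temporal AA implementations generally (not necessarily what any particular engine ships) is neighbourhood clamping: before blending, the reprojected history sample is clamped to the colour range of the current frame's local pixels, so stale history from a freshly revealed area can't smear in. A minimal sketch, with made-up example values:

```python
def clamp_history(history_value, neighborhood):
    """Clamp a reprojected history sample to the colour range of the
    current frame's local neighbourhood.

    If the history describes geometry that is no longer visible (the
    camera just spun around), the clamp bounds how wrong the stale
    value can be before it is blended into the new frame.
    """
    lo, hi = min(neighborhood), max(neighborhood)
    return max(lo, min(hi, history_value))

# Stale history (bright sky, 0.9) reprojected onto a newly revealed
# dark surface whose 3x3 neighbourhood is uniformly dark:
print(clamp_history(0.9, [0.10, 0.12, 0.08, 0.11, 0.10,
                          0.09, 0.10, 0.13, 0.10]))  # 0.13
```

The clamped value is still blended as usual, so history that agrees with the current neighbourhood passes through untouched, while disoccluded history is forced into a plausible range.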

But the real pay-off with TXAA is that it allows you to use shading methods that are just not practical (too noisy, too much aliasing) with, say, 8 samples. For instance, the ambient occlusion we see in ED is view-based. It's very hacky and low quality (though better than nothing, for sure). But with TXAA you can think about using Monte Carlo-based ambient occlusion (as used in movies). You can also think about using reflection occlusion, again using Monte Carlo (though this is more sensitive to view changes). And you can also do secondary illumination.

It opens up the engine to many more quality improvements in shading.

But the first thing it does, is fix the HORRIBLE aliasing we see in ED.

I bought a £900 GTX Ti to get away from this in VR, and with SS at 2.0, it's still an issue, a BIG BIG issue for me. And, it seems, for many playing ED.

If we had TXAA, I could turn my SS back to 1.0 in VR, the aliasing would be gone, and I'd be back in the 90fps range. Even better, they could offer Monte Carlo ambient occlusion and secondary lighting effects, and my 1080ti might still handle this and keep above 90fps.

TXAA is a visual game changer. It's a sea change in quality.

So

PLEASE PLEASE PLEASE Frontier, spend some tech time implementing TXAA in the Cobra Engine.
 
TXAA does not work well when you open your eyes, say, or do a 180-degree spin, as the first frame in the new view has no previous frames sampled.

PLEASE PLEASE PLEASE Frontier, spend some tech time implementing TXAA in the Cobra Engine.

Imo, quickly changing scenes aren't really a problem for TXAA. The human eye (generally) doesn't detect aliasing artifacts in that case. Aliasing is mostly a problem for slowly changing scenes, which is when TXAA works best.

Adding this to the Cobra engine would give a huge improvement to the visual quality, at a relatively small framerate reduction.

Yes, FD, please implement this. Maybe have some discussions with the company in the OP, as to how to incorporate their technology into your engine.

Personally, I'd buy an entire new season, just for this feature. [money]
 
Imo, quickly changing scenes aren't really a problem for TXAA. The human eye (generally) doesn't detect aliasing artifacts in that case. Aliasing is mostly a problem for slowly changing scenes, which is when TXAA works best.

Adding this to the Cobra engine would give a huge improvement to the visual quality, at a relatively small framerate reduction.

Yes, FD, please implement this. Maybe have some discussions with the company in the OP, as to how to incorporate their technology into your engine.

Personally, I'd buy an entire new season, just for this feature. [money]

Agreed. The monitor's refresh rate makes it all but impossible to notice aliasing when doing a quick 180.

Frontier will most likely be using the Cobra engine in all of their games, so utilising TAA is really a no-brainer IMO. Their rollercoaster games could also greatly benefit from it. :)
 
Yeah, fast camera moves don't need much in the way of AA. But I thought I'd point it out.

It's an inherent weakness of the concept: geometry that has not been seen recently cannot have samples stored for it.

But overall, it's a very clever concept, and it works in practice, as seen in several engines' implementations of it.
 
Yeah, fast camera moves don't need much in the way of AA. But I thought I'd point it out.

It's an inherent weakness of the concept: geometry that has not been seen recently cannot have samples stored for it.

But overall, it's a very clever concept, and it works in practice, as seen in several engines' implementations of it.

Especially since the only other approach that would provide similar AA would be 4x SS (16 samples per pixel), which would cut your framerate by roughly a factor of 16. I actually tried this, but the game crashed (probably due to a lack of memory, since I have a 4K monitor). How? Both the game and the NVidia drivers allow 2x SS. NVidia calls it 4x (but it's 2x in X and Y, which is the same as 2x in the game's options). Assuming my math is right, that comes to ~400 MB per frame buffer.
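The buffer-size arithmetic is easy to sanity-check. The exact figure depends on the pixel format and on how many buffers the renderer keeps (colour, depth, HDR intermediates), so the numbers below, assuming a single plain 32-bit RGBA colour buffer, are only a lower bound:

```python
def framebuffer_mb(width, height, ss_factor, bytes_per_pixel=4):
    """Size in MiB of one colour buffer rendered at ss_factor times the
    output resolution in each axis, assuming 32-bit RGBA pixels."""
    pixels = (width * ss_factor) * (height * ss_factor)
    return pixels * bytes_per_pixel / (1024 ** 2)

print(framebuffer_mb(3840, 2160, 2))  # 2x per axis at 4K: 126.5625 MiB
print(framebuffer_mb(3840, 2160, 4))  # 4x per axis at 4K: 506.25 MiB
```

Stacking the game's 2x SS on top of the driver's 2x gives the 4x-per-axis case, and at half a gigabyte per colour buffer (more with deeper formats) an out-of-memory crash on a 6 GB card is plausible.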

You can actually see what this looks like, if you take a high-res screen shot in game (Alt+F10) and shrink it with smooth resampling in your favorite image editor. It looks great, but it takes seconds to draw 1 frame. Here's an example (1920 x 1080, 3 MB)...


And here's the full high res (3840 x 2160, 9 MB (compressed)) image...

 
Rendering at a higher resolution and sampling down is not really the same as TXAA.

One is using more samples averaged from a SINGLE frame, so, say, taking 4 pixels in a 4K image and using their average as a single pixel in an HD image.

The other is using samples from geometry that is visible in THIS frame and was (ideally) visible in the last few frames. So, say you use 8 samples per frame; then you end up with the equivalent of 32 samples if you use stored samples from four consecutive frames.

As you said, getting that kind of boost in samples just by rendering a larger image and averaging down is very brute force.
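For contrast with the temporal approach, the brute-force route really is just a box average over each block of high-res pixels. A minimal sketch of the 2x-per-axis case, using a tiny made-up image:

```python
def downsample_2x(image):
    """Average each 2x2 block of a high-res image into one output
    pixel: the brute-force render-big-and-scale-down approach."""
    h, w = len(image), len(image[0])
    return [
        [(image[y][x] + image[y][x + 1] +
          image[y + 1][x] + image[y + 1][x + 1]) / 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

# A hard black/white edge softens into intermediate grey:
print(downsample_2x([[0.0, 0.0, 1.0, 1.0],
                     [0.0, 1.0, 1.0, 1.0]]))  # [[0.25, 1.0]]
```

Every one of those extra samples had to be shaded in the same frame, which is why the cost scales with the square of the per-axis factor, whereas temporal AA spreads the same sample budget across frames.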

There ARE other advantages to TXAA, as I understand it. And I think nVidia already uses some of this in its own render pipeline for VR on 1080-and-above cards.

When you render the two images, one for each eye, each eye sees near enough the same geometry. Now, if you render 8 samples for one eye, those samples can be used to render pixels for the other eye. You store those samples in a buffer. I think nVidia then does the transform between the two eye spaces, to make a sample taken for the left eye, say, usable for the right eye.

It's a bit like TXAA.

So, if you are sharing samples between the eyes within one frame, you can also do the same across time as well. So now you have a double bubble: one of the eyes renders much faster than the other, yielding a much higher frame rate, and with TXAA they both make more efficient use of their shared samples over time. It's a win-win.

Again, you can then take that higher FPS (anything over 90 being a waste in VR) and consider using the headroom for extra quality in your shadows, ambient occlusion, or whatever.

The reason this is called cinematic by some vendors is, I believe, because the technique originated in Pixar's RenderMan (a non-realtime photorealistic renderer for cinema), where the same issues apply. You're sampling many times per frame and then throwing away that data, whereas it could be re-used on following frames.

Goes to show recycling is a universally good concept.

Let's all bow to the Wombles.
 
Rendering at a higher resolution and sampling down is not really the same as TXAA.

I agree completely. My point was simply that this level of overkill is the only alternative to TXAA for a roughly comparable result. To do this at 4K res with a reasonable framerate would require a "GTX 1280Ti" video card (at least 2 generations ahead of the currently available cards).

That's based on the 4x SS test I mentioned earlier. Before the game crashed (on my 6 GB GTX 980Ti), I got a few partially rendered frames (of part of the ship's dashboard) at 2 FPS (according to Fraps).
 