Temporal AA works by keeping samples of the geometry from previous frames. By doing this, it greatly reduces the need to take new samples, and so more samples can be combined on a given frame to produce MUCH better averaging.
Here is a very simple example.
You put a grid over a window and look through it at a chess board. A pixel is one of those grid holes. You need to write down a value for each grid hole, and the resulting set of values produces a bitmap that can be shown to someone else, reproducing what you see through that window.
Now, a human sees ALL of the view through each grid hole. But a computer has to fire an infinitely thin ray that hits a single point in that view, and the ray sends back the colour / luma value at that point. THAT is a sample.
If you use one sample, your value for that grid hole is going to be VERY wrong. That under-sampling is what causes aliasing.
Let's imagine you can see the division between a white and a black chess board tile through that hole. One sample may be white, or black. Which is correct? Neither: the view is not white or black. It's part of a white tile, and part of a black tile.
So, the important thing is to produce a grey value that represents the proportion of white to black tile visible through that grid hole. If we shoot 8 samples, we get a much better average. If we fire 16 samples, it's even better. As we shoot more samples we get closer to a theoretical perfect average (convergence). But we also use more and more GPU time working out what a single pixel will be.
There has to be a balance.
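To make that concrete, here is a tiny standalone sketch of the idea (nothing to do with the Cobra Engine; the 35% white / 65% black split through the grid hole is completely made up for illustration). It fires N jittered samples inside one pixel that straddles a tile edge and averages them:

```cpp
// One pixel sitting over a black/white tile edge. The "scene" and the 0.35
// split are invented purely to show how the average converges.
#include <cstdio>
#include <random>

// Hypothetical view through the grid hole: everything left of x = 0.35 is
// white tile (luma 1.0), everything right of it is black tile (luma 0.0).
double sceneLuma(double x, double /*y*/) { return x < 0.35 ? 1.0 : 0.0; }

double averagePixel(int sampleCount, std::mt19937& rng)
{
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < sampleCount; ++i)
        sum += sceneLuma(jitter(rng), jitter(rng)); // one infinitely thin ray
    return sum / sampleCount;                       // the pixel's grey value
}

int main()
{
    std::mt19937 rng(42);
    // The true answer is 35% white coverage, i.e. a grey of 0.35.
    for (int n : {1, 8, 16, 64})
        std::printf("%2d samples -> %.3f\n", n, averagePixel(n, rng));
}
```

With 1 sample you get pure white or pure black; by 64 samples the result should sit much closer to the true 0.35 grey. That is convergence in action.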
Traditionally, you choose your sample count based on your desired frame rate OR quality. It's a trade-off between the two. For a video game, the FPS is king, so we tend to use very low sample counts to get a high FPS.
TXAA keeps, say, the 8 samples from your first frame, so if we need to sample that area of the chess board in a following frame (even through a different grid hole), you now have 16 to work with. Yes, that's right, near double the quality for near free.
Of course it's not entirely free, as you have to store those samples and manage them, and rewrite your render pipeline to know they are there and make use of them.
This is the basic gist of TXAA: re-use of VERY valuable samples to arrive at much more accurate averages per pixel.
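If you want to picture how that re-use looks in code, here is a minimal sketch (all names and the 0.1 blend factor are my own assumptions, nothing from Frontier). Each frame's cheap 1-sample result gets folded into a per-pixel history buffer, which is assumed to already be reprojected so it lines up with the same point on the chess board even if it was seen through a different grid hole last frame:

```cpp
// Minimal temporal accumulation sketch. Everything here is illustrative.
#include <cstdio>
#include <random>
#include <vector>

struct Colour { float r, g, b; };

Colour lerp(const Colour& a, const Colour& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// history: accumulated result from previous frames (already reprojected).
// current: this frame's cheap low-sample render.
// alpha:   how much of the new frame to trust; small alpha = long memory.
void temporalAccumulate(std::vector<Colour>& history,
                        const std::vector<Colour>& current,
                        float alpha = 0.1f)
{
    for (std::size_t i = 0; i < history.size(); ++i)
        history[i] = lerp(history[i], current[i], alpha);
}

int main()
{
    // One pixel over the chess-board edge again: each frame contributes a
    // noisy 1-sample guess that is either pure white or pure black.
    std::vector<Colour> history{ { 0.0f, 0.0f, 0.0f } };
    std::mt19937 rng(1);
    std::bernoulli_distribution white(0.35);
    for (int frame = 0; frame < 60; ++frame) {
        float s = white(rng) ? 1.0f : 0.0f;
        temporalAccumulate(history, { { s, s, s } });
    }
    std::printf("after 60 frames: %.3f\n", history[0].r); // noisy estimate of the true 0.35 grey
}
```

With a blend factor of 0.1 the history behaves roughly like an average over the last ten or so frames' worth of samples, which is where the "near free" extra quality comes from.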
It's amazing at handling aliasing when you have very small movements of the camera, something that normally looks AWFUL. If your view remains pretty consistent over a number of frames, TXAA will really buff the quality of what you can see.
TXAA does not work as well when you open your eyes, say, or do a 180 degree spin, as the first frame of the new view has no previous frames to draw samples from. But there are strategies to get around this.
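One common strategy, for anyone curious, is neighbourhood clamping (this is a generic sketch, not necessarily what ED would do): squeeze the reprojected history colour into the min/max of the current frame's local 3x3 neighbourhood, so stale colours from before the 180 spin can't ghost all over the new view:

```cpp
// Neighbourhood clamp sketch; all values below are made up for illustration.
#include <algorithm>
#include <cstdio>

struct Colour { float r, g, b; };

// Clamp the reprojected history colour into the current frame's local
// min/max so colours from a completely different view can't linger.
Colour clampToNeighbourhood(Colour history, Colour nMin, Colour nMax)
{
    history.r = std::clamp(history.r, nMin.r, nMax.r);
    history.g = std::clamp(history.g, nMin.g, nMax.g);
    history.b = std::clamp(history.b, nMin.b, nMax.b);
    return history;
}

int main()
{
    // Stale bright history from before the spin, but the new view's 3x3
    // neighbourhood around this pixel is dark.
    Colour stale{ 1.0f, 1.0f, 1.0f };
    Colour nMin{ 0.05f, 0.05f, 0.10f };
    Colour nMax{ 0.20f, 0.20f, 0.30f };
    Colour clamped = clampToNeighbourhood(stale, nMin, nMax);
    std::printf("clamped history: %.2f %.2f %.2f\n",
                clamped.r, clamped.g, clamped.b);
}
```

The trade-off is that freshly clamped pixels are effectively back to low sample counts for a few frames, but that beats smearing the old view across the new one.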
But the real pay-off with TXAA is that it allows you to use shading methods that are just not practical (too noisy, too much aliasing) with, say, 8 samples. For instance, the Ambient Occlusion we see in ED is screen-space (view based). It's very hacky and low quality (though better than nothing, for sure). But with TXAA you can think about using Monte Carlo based Ambient Occlusion (as used in movies). You can also think about Reflection Occlusion, again using Monte Carlo (though this is more sensitive to view changes). And you can also do secondary illumination.
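For anyone wondering what Monte Carlo AO actually involves, here is a very rough sketch (the toy sphere scene and every name in it are placeholders I made up, not the Cobra Engine's ray API): fire random rays over the hemisphere above a surface point and count how many escape. At a handful of rays per pixel this is far too noisy on its own, which is exactly why TXAA's frame-to-frame accumulation is what makes it practical:

```cpp
// Rough Monte Carlo ambient occlusion sketch (film-style, not ED's
// screen-space version). The toy sphere scene is invented for illustration.
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Uniform random direction in the hemisphere around the surface normal 'n'.
Vec3 randomHemisphereDir(const Vec3& n, std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
    Vec3 d;
    float lenSq;
    do {
        d = { u(rng), u(rng), u(rng) };
        lenSq = dot(d, d);
    } while (lenSq > 1.0f || lenSq < 1e-6f);          // rejection-sample the unit ball
    float inv = 1.0f / std::sqrt(lenSq);
    d = { d.x * inv, d.y * inv, d.z * inv };          // normalise
    if (dot(d, n) < 0.0f) d = { -d.x, -d.y, -d.z };   // flip into the upper hemisphere
    return d;
}

// Toy occluder: a single sphere at (0, 1, 0) with radius 0.5.
bool occluded(const Vec3& p, const Vec3& dir)
{
    const Vec3 centre{ 0.0f, 1.0f, 0.0f };
    const float radius = 0.5f;
    Vec3 oc{ p.x - centre.x, p.y - centre.y, p.z - centre.z };
    float b = dot(oc, dir);
    float disc = b * b - (dot(oc, oc) - radius * radius);
    return disc > 0.0f && (-b - std::sqrt(disc)) > 0.0f;
}

// Returns 0 (fully occluded) .. 1 (fully open). With TXAA accumulating the
// result across frames, the per-frame sampleCount can stay tiny.
float ambientOcclusion(const Vec3& p, const Vec3& n, int sampleCount, std::mt19937& rng)
{
    int open = 0;
    for (int i = 0; i < sampleCount; ++i)
        if (!occluded(p, randomHemisphereDir(n, rng)))
            ++open;
    return float(open) / float(sampleCount);
}

int main()
{
    std::mt19937 rng(7);
    Vec3 point{ 0.0f, 0.0f, 0.0f };   // shaded point on the ground
    Vec3 normal{ 0.0f, 1.0f, 0.0f };  // facing straight up at the sphere
    std::printf("AO: %.3f\n", ambientOcclusion(point, normal, 256, rng));
}
```

Films throw hundreds of rays per shading point at this; a game with TXAA can get away with a tiny per-frame ray budget because the history buffer does the averaging over time.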
It opens up the engine to many more quality improvements in shading.
But the first thing it does, is fix the HORRIBLE aliasing we see in ED.
I bought a £900 GTX 1080 Ti to get away from this in VR, and with SS at 2.0 it's still an issue, a BIG BIG issue for me, and it seems for many playing ED.
If we had TXAA, I could turn my SS back to 1.0 in VR, the aliasing would be gone, and I'd be back in the 90fps range. Even better, they could offer Monte Carlo Ambient Occlusion and secondary lighting effects, and my 1080 Ti might still handle this and keep above 90fps.
TXAA is a visual game changer. It's a sea change in quality.
So
PLEASE PLEASE PLEASE Frontier, spend some tech time implementing TXAA in the Cobra Engine.