NVidia seems to be touting MSGAA (G for Geometry buffer) as the new hotness for deferred rendering at the moment, showcasing the latest update to EVE: Valkyrie as an example.
Let's see what the boffins make of its pros and cons. :7
Thanks for the tip! Rep++
It took a fair bit of Googling to get some actual details on the new AA method. Here's a link:
http://research.nvidia.com/sites/de...ffer-Anti-Aliasing/AGAA_I3D2015_Final_Web.pdf
TL;DR: It looks really good, but there are some serious limitations on when it can be used. The biggest is that MSGAA requires a GTX 1070, 1080, or 1080 Ti graphics card, which excludes many NVidia users and all AMD users. It also requires that materials be "unified", that is, no custom shaders for different materials like skin, water, or hair, and that all shading inputs be "filterable" (whatever that means in this context).
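Here's my rough reading of what "unified" and "filterable" could mean in practice, sketched as C++. This is my own illustration, not code from the paper: the idea seems to be that every surface uses the same set of shading inputs, and each input has to stay meaningful when several sub-pixel samples are averaged into one aggregate.

[code]
#include <cstdio>

struct GBufferSample {
    float albedo[3];    // filterable: averaging colours is fine
    float normal[3];    // filterable-ish: needs re-normalising after the average
    float roughness;    // filterable (approximately)
};

// What the restriction appears to rule out: a per-sample material ID that
// picks a different shader (skin vs. water vs. hair). There is no sensible
// "average" of material 3 and material 7.
// struct NotAllowed { unsigned materialId; };

// Merge n samples into one aggregate by averaging every input.
GBufferSample aggregate(const GBufferSample* s, int n)
{
    GBufferSample out = {};
    for (int i = 0; i < n; ++i) {
        for (int c = 0; c < 3; ++c) {
            out.albedo[c] += s[i].albedo[c] / n;
            out.normal[c] += s[i].normal[c] / n;
        }
        out.roughness += s[i].roughness / n;
    }
    return out;
}

int main()
{
    GBufferSample s[2] = {
        { {1.0f, 0.0f, 0.0f}, {0.0f, 0.0f, 1.0f}, 0.2f },
        { {0.0f, 0.0f, 1.0f}, {0.0f, 1.0f, 0.0f}, 0.8f },
    };
    GBufferSample a = aggregate(s, 2);
    std::printf("merged albedo: %.2f %.2f %.2f, roughness: %.2f\n",
                a.albedo[0], a.albedo[1], a.albedo[2], a.roughness);
    return 0;
}
[/code]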
Primarily, this is a way to reduce memory usage (compared to supersampling), and it's also a speed-up, achieved by determining which samples contribute the most towards the "fragments" (sub-pixels). In the examples shown, NVidia only uses 2 samples per pixel along geometric boundaries (edges), which visually compares favorably to 8-samples-per-pixel supersampling, and it's 54% faster.
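And here's a minimal sketch of where the speed-up would come from, assuming the deferred lighting pass is the expensive part. Again, this is my own toy code: the names are made up, the lighting is a stand-in Lambert term, and the grouping is a naive first-half/second-half split rather than whatever geometric-similarity test NVidia actually uses. The point is just that the expensive shade() runs twice per pixel instead of eight times.

[code]
#include <cstdio>

struct Sample { float albedo[3]; float normal[3]; };   // one set of shading inputs

// Stand-in for the real deferred lighting pass: a simple Lambert term
// against a fixed directional light (red channel only, for brevity).
static float shade(const Sample& s)
{
    const float L[3] = {0.0f, 0.7071f, 0.7071f};
    float ndotl = s.normal[0]*L[0] + s.normal[1]*L[1] + s.normal[2]*L[2];
    if (ndotl < 0.0f) ndotl = 0.0f;
    return s.albedo[0] * ndotl;
}

// 8x supersampling reference: shade every sample, then average.
static float shadePixelSupersampled(const Sample s[8])
{
    float sum = 0.0f;
    for (int i = 0; i < 8; ++i) sum += shade(s[i]);          // 8 expensive shades
    return sum / 8.0f;
}

// Aggregate-style: merge the 8 samples into 2 aggregates (here naively by
// halves), then shade only the aggregates, weighted by sample count.
static float shadePixelAggregated(const Sample s[8])
{
    Sample agg[2] = {};
    for (int i = 0; i < 8; ++i) {
        Sample& a = agg[i / 4];
        for (int c = 0; c < 3; ++c) {
            a.albedo[c] += s[i].albedo[c] / 4.0f;
            a.normal[c] += s[i].normal[c] / 4.0f;
        }
    }
    float sum = 0.0f;
    for (int i = 0; i < 2; ++i) sum += 0.5f * shade(agg[i]); // 2 expensive shades
    return sum;
}

int main()
{
    Sample s[8];
    for (int i = 0; i < 8; ++i)
        s[i] = { {0.8f, 0.2f, 0.2f}, {0.0f, 0.0f, 1.0f} };
    std::printf("supersampled: %f  aggregated: %f\n",
                shadePixelSupersampled(s), shadePixelAggregated(s));
    return 0;
}
[/code]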
It looks like this is simpler to implement than Temporal AA, but both techniques require changes to the rendering pipeline. The video card limitations mean it's unlikely that FD will implement this technique any time soon.