As far as I can surmise, the HMD Quality setting should simply be a multiplier on the dimensions of the rendered bitmaps -- essentially the exact same thing as pixel density in the Oculus debug tool, something at least a few other titles offer as well.
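If that is right, it maps straight onto the pixel density parameter the PC SDK exposes. A minimal sketch, assuming the game asks LibOVR for its eye texture size via ovr_GetFovTextureSize, with the HMD Quality value standing in for pixelsPerDisplayPixel (that mapping is my guess, not anything Frontier has stated):

    // session is an ovrSession from ovr_Create(); hmdQuality is the slider value
    ovrHmdDesc hmd = ovr_GetHmdDesc(session);
    float hmdQuality = 1.25f;

    // The last parameter linearly scales the recommended texture dimensions,
    // exactly like pixel density in the Oculus debug tool.
    ovrSizei leftSize = ovr_GetFovTextureSize(session, ovrEye_Left,
                                              hmd.DefaultEyeFov[ovrEye_Left],
                                              hmdQuality);
    // leftSize.w by leftSize.h is now ~1.25 times the default request per axis.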
The difference between the "Supersampling" and "HMD Quality" options offered by Elite Dangerous would be that the latter determines the size of the final output relative to what the Oculus runtime requested -- the bitmaps are still this size when the game hands them over to the runtime, which then has those larger, more detailed images to work with when mapping them to the headset displays (accounting for physical resolution, lens distortion, chromatic aberration, (maybe, but probably not) subpixel layout, and any head motion that has occurred since the frame started rendering). "Supersampling", on the other hand, does the downsampling on its own before handing the pictures on, sort of "baking" the extra detail into its x1.0 output (much like mipmaps), instead of preserving it as long as possible, right up to the final presentation stage.
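To put that distinction in pseudocode (renderScene, downsample, submitToRuntime and Texture are made-up stand-ins here, not Elite's or the SDK's actual names):

    // "Supersampling": the extra detail gets resolved away before the runtime sees it.
    Texture big = renderScene(requestedSize * ss);
    Texture out = downsample(big, requestedSize);  // detail "baked in" at x1.0
    submitToRuntime(out);                          // runtime only ever gets a x1.0 texture

    // "HMD Quality": the larger texture is handed over as-is, and the compositor
    // does the one and only resample during distortion/timewarp.
    Texture big2 = renderScene(requestedSize * hmdq);
    submitToRuntime(big2);                         // runtime gets all the extra detail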
You can think of the perceived improvement in clarity when supersampling in this manner in terms of the screen door effect: You look at the virtual world through its tiny obscuring mesh, and can see visual samples through each opening. Now you turn your head minutely, and those samples become hidden behind the strands of the screen door mesh, while others that were previously occluded come into view. You might call it a natural temporal supersampling that your nervous system does all the time. Doc-Ok published a nice writeup just the other week, which included the concept:
http://doc-ok.org/?p=1631
However: 0.65 times 1.5 equals 0.975. Are you sure you see a marked improvement in clarity over vanilla? Because with those values you are rendering a smaller image than you would with just x1.0 for both parameters, and it is softened twice over -- once by FXAA, and once by the upsampling from 0.65.
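Spelled out per axis, under this model (taking your settings to be HMDQ = 0.65 and SS = 1.5):

    render scale:    0.65 × 1.5 = 0.975 of the x1.0 request
    output scale:    0.65 of the x1.0 request (what the runtime is handed after the downsample)
    pixels rendered: 0.975² ≈ 0.95 of vanilla, then resampled down to 0.65² ≈ 0.42 of vanilla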
I don't know -- maybe Elite adjusts its LOD and mipmap biases in unknown ways depending on the two values (if not just the requested bitmap size), and maybe separately for text layers, but I have not experienced anything getting sharper, myself, without actually oversampling.
So, I imagine the process would be something like:
ED: Hi, my name is Cobra engine, I would like to render you some stuff. How large do you want the eye textures?
OVR: Hello, Cobby! Make them nx by ny pixels, please.
ED: Okay, Ovie. Actually, I'll make that nx × HMDQ by ny × HMDQ, if it's all the same to you...
OVR: Ok, sure.
(ED renders at n × HMDQ × SS, then resamples down to n × HMDQ)
ED: Here you are: n × HMDQ sized eye textures -- enjoy.
OVR: Thank you, I will.
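In rough LibOVR terms (ovr_GetHmdDesc, ovr_GetFovTextureSize and ovr_SubmitFrame are real SDK calls; renderEyes, resample, and how exactly Elite strings this together are my guesses):

    ovrHmdDesc hmd = ovr_GetHmdDesc(session);

    // "How large do you want the eye textures?" -- asked at density 1.0
    ovrSizei ideal = ovr_GetFovTextureSize(session, ovrEye_Left,
                                           hmd.DefaultEyeFov[ovrEye_Left], 1.0f);

    // "Actually, I'll make that nx × HMDQ by ny × HMDQ..."
    ovrSizei submitSize = { (int)(ideal.w * hmdq), (int)(ideal.h * hmdq) };

    // (ED renders at n × HMDQ × SS, then resamples down to n × HMDQ)
    ovrSizei renderSize = { (int)(submitSize.w * ss), (int)(submitSize.h * ss) };
    Texture big  = renderEyes(renderSize);
    Texture eyes = resample(big, submitSize);      // the SS step, baked in here

    // "Here you are: n × HMDQ sized eye textures -- enjoy."
    // 'eyes' goes into the layer that layerList points at; the compositor then
    // distorts/timewarps it down to the panels.
    ovr_SubmitFrame(session, frameIndex, nullptr, layerList, 1);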