While all that makes great sense, it doesn't actually explain the fact that when you crank up supersampling on the Vive you get to Rift clarity levels. The part about the debug tool from the other poster was interesting; I also stumbled upon some interesting pixel density examples in E:D: https://forums.oculus.com/community/discussion/51256/pixel-density-examples#latest
Raising the supersampling on the Vive helps to counteract the slightly lower sweet-spot pixel resolution. Rendering more pixels and then down-sampling back to the panel resolution still improves the overall image. Obviously this comes at a cost - each rendered frame takes more GPU power to create. The factor applies to both axes, so a supersampling/HMD Quality setting of 1.25x means 1.25 x 1.25 = 1.5625, roughly 1.56x as many pixels being rendered. Supersampling is a brute-force method of improving image quality.
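To make that math concrete, here's a quick Python sketch. The per-eye panel numbers assume the original Vive's 1080x1200 displays, and the function name is just for illustration:

```python
# The supersampling factor applies per axis, so the total pixel
# cost scales with its square.

def supersample_cost(panel_w, panel_h, factor):
    """Return the render-target size per eye and the total-pixel multiplier."""
    render_w = int(panel_w * factor)
    render_h = int(panel_h * factor)
    multiplier = factor ** 2  # e.g. 1.25 ** 2 == 1.5625
    return render_w, render_h, multiplier

# Assumes the Vive's 1080x1200-per-eye panels at a 1.25x setting.
w, h, mult = supersample_cost(1080, 1200, 1.25)
print(f"Render target: {w}x{h} per eye, {mult:.2f}x the pixels")
# -> Render target: 1350x1500 per eye, 1.56x the pixels
```

That squared growth is why even modest-looking settings like 1.5x or 2.0x (2.25x and 4x the pixels, respectively) hit the GPU so hard.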
However, given the same hardware, a Rift user can bump the supersampling up by the same amount and see an improved image too, which would still end up looking better than the Vive at the same increased settings.
There's no doubt that subtle differences in the optics and Fresnel lens design have small but noticeable impacts on the VR user experience.
But it's up to each of us individually, because we all see differently and perceive our vision differently, to identify (or ignore) what annoys us about the VR experience. What one VR user finds limiting about the slightly narrower field of view in the Rift is another user's boon: being able to see slightly finer detail, especially text. Both major headset designs (and all the others) have trade-offs.
I still find it amazing, and we're only just starting out.