It looks like you've found your comfortable tradeoff spot, but anyway...
"HMD Quality" multiplies the render target size requested by the VR runtime, and the game delivers rendered frames at the resolution produced by this multiplication. There is limited sense in messing with combinations of the Pimax software multiplier, the SteamVR multiplier, and HMD Quality -- they all just contribute to what the final frame size will be, and save for how fine-grained each can be adjusted, a change in any single one of them can produce the exact same result as giving them different values that simply cancel one another out.
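If it helps to see the stacking spelled out, here is a toy model (all figures made up; note that SteamVR's slider percentage is area-based, so x1.5 per axis would show as 225% there -- this is an illustration, not any tool's actual maths):

```python
# Toy model of how the multipliers stack (all numbers made up):
base = (2160, 2160)  # pretend per-eye render target requested by the runtime

def final_size(base, *per_axis_multipliers):
    # Each setting just scales the target; only the product matters.
    scale = 1.0
    for m in per_axis_multipliers:
        scale *= m
    return tuple(round(d * scale) for d in base)

# HMDQ x1.5 with the SteamVR multiplier left at x1.0...
a = final_size(base, 1.5, 1.0)
# ...is indistinguishable from HMDQ x1.0 with SteamVR at x1.5 per axis:
b = final_size(base, 1.0, 1.5)
assert a == b == (3240, 3240)
```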
In-game "Supersampling" also multiplies the render target size (after HMDQ has made its contribution, so both apply), but takes it upon itself to resample the rendered frame back down to the unmultiplied size before handing it over.
This is why you want to use HMDQ (or the Pimax/SteamVR multipliers) as your first choice of the two: it gives the VR runtime compositor as much detail as possible to pick from when it compensates for lens distortion and maps the result to the display panels. In-game SS can be added on top of HMDQ-etc in the hypothetical situation where you want to go higher than a combined x2.0 (same as 400%), because realtime filtering often begins to skip pixels from the source image at that point, instead of including every one that comprises the sample area for an output pixel -- so if you want the benefit of any such extra-extra render resolution, you may be best off getting it "baked in", so to speak, beforehand.
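For the curious, the resample-back step amounts to something like this box-filter sketch (my own illustration of the principle, not any engine's actual filter) -- the point being that every source pixel in the sample area contributes to the output, which is exactly what realtime filtering starts failing to do past a certain ratio:

```python
def downsample_box(img, s):
    # img: 2D list of pixel values rendered at s-times the target size
    # (s a whole number). Averages each s x s block, so no pixel is skipped.
    h, w = len(img) // s, len(img[0]) // s
    return [
        [
            sum(img[y * s + j][x * s + i] for j in range(s) for i in range(s)) / (s * s)
            for x in range(w)
        ]
        for y in range(h)
    ]

# A 4x4 "frame" rendered at x2.0 per axis (400% of area) collapses to 2x2:
hi_res = [[1, 1, 2, 2],
          [1, 1, 2, 2],
          [3, 3, 4, 4],
          [3, 3, 4, 4]]
print(downsample_box(hi_res, 2))  # [[1.0, 2.0], [3.0, 4.0]]
```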
In-game SS is also used when rendering for display on a monitor, which would be why it helps when on foot -- it (presumably) supersamples the developer-preset-resolution 2D view, which is rendered first and then mapped onto the subsequently rendered cinema-screen scene in VR (and that scene does "respect" the frame size requested by the VR runtime).
Other than for cases like the above, I personally strongly disagree with any alchemical messing with combinations of sub- and supersampling, but that comes from my personal preference of hating so much as the slightest bit of blur even more than I do aliasing. Others obviously have different sensibilities.
On other notes: When I have used Pimax headsets with my (slightly older) hardware setup (EDIT: ...and older Pimax software), I have had to disable Hardware-accelerated GPU Scheduling in Windows graphics settings, or things would get extremely stuttery and jittery, but that is probably neither here nor there for your situation...
If you think you could live with foveation, it is possible you could eke out an extra frame per second, or maybe two, by force-injecting Variable Rate Shading, using either OpenXR Toolkit or VRPerfkit (for OpenVR, i.e. SteamVR) -- especially given how much the combination of a more-than-100° field of view and Parallel Projections expands the size of the render target. I know the latter allows you to set the size of the foveation radii, and believe the former does too; the former maaaaay also be able to utilise the eye tracking in the Super to move the foveation around, but I do not know about that...
(EDIT: Are you really getting as low as the 1440p resolutions you suggest, by the way? Base render target resolutions are usually ca. 1.4 times display panel resolution, to account for how the lenses compress pixels in the centre, and anisotropy adds more on top of that, as FOV gets larger -- especially with parallel projections. I do notice larger numbers by your SteamVR resolution sliders.)
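To spell out the arithmetic I was doing in my head (panel figures are placeholders, not anyone's actual spec sheet):

```python
panel = (2880, 2880)     # hypothetical per-eye panel resolution, made up
distortion_factor = 1.4  # typical ~1.4x oversize to counter lens compression
target = tuple(round(d * distortion_factor) for d in panel)
print(target)  # (4032, 4032) -- already well past 1440p, before any multipliers
```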
(EDIT2: Ah, err, never mind -- somehow I got it into my head that you had managed to bagsy an early Crystal Super, rather than one of the older ones; disabused of this particular bit of inattention, things square up better in my head...)