Parallel projection... Pimax 5K

As #heavygroovz mentions:

Performance is terrible because you need to have parallel projection compatibility enabled in PiTool. This means a colossal render target; a 2080 Ti struggles even at Small FOV.


What can be done about this? Do Frontier need to implement a solution or can Pimax do something with their software?
 
Official Pimax post regarding the introduction of PPC:

https://forum.pimaxvr.com/t/about-60-70-performance-improved-on-rtx2080ti/9072


jojon2se posted a very good explanation of what parallel projection actually is on Reddit a while back:

https://www.reddit.com/r/skyrimvr/comments/9tfoxr/skyrimvr_pimax_5k_gtx_1070_7090fps_for_dummies/

"It's when the two rendered viewplanes, for the left and the right eye, are placed in-line with one another, so that they both face straight forwards.

This is fine, when the field of view for each eye is about the same to the left as it is to the right (EDIT2: ...and the FOV is rather small), but when you add 25-35 degrees on one side, your game camera frustum gets rather lopsided, with the amount of pixels you end up rendering per degree of the added peripheral vision going up by the tangent of the view angle, as the triangle from the camera, to the viewplane directly ahead, to the viewplane out to the side, gets ever more obtuse, and the far side longer (EDIT: non-linearly).

...so as long as we are still using a single flat viewplane per eye, you'll want to make the cameras a bit wall-eyed, so to speak, in order to get back toward the camera looking perpendicularly at the centre of the viewplane, in order to minimise this avalanche of rendering more stuff than you can make use of, in the place where it is least needed.

The viewplanes can be set up so that they align with the canted screens of devices like the new Pimaxes, or StarVR, or XTal, but it is not necessary - they can be reprojected in post.

The Use Parallel Projection option in PiTool exists as a compatibility mode, for games/engines that were not written with the idea in mind, that anybody would ever have the cameras looking anywhere but directly forwards."
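
To put a rough number on that tangent blow-up, here is a back-of-envelope sketch (the pixels-per-degree figure is invented for illustration and has nothing to do with the actual Pimax optics):

Code:
import math

def plane_width_px(half_fov_deg, ppd_centre=20):
    """Width in pixels of a single flat viewplane covering +/- half_fov_deg,
    if the centre of that plane is rendered at ppd_centre pixels per degree."""
    # Choose the plane distance so that one degree at the centre spans ppd_centre pixels.
    d = ppd_centre / math.tan(math.radians(1.0))
    return 2 * d * math.tan(math.radians(half_fov_deg))

for half_fov in (45, 55, 65, 75):
    print(f"+/-{half_fov} deg -> {plane_width_px(half_fov):7.0f} px wide")

# Going from +/-45 to +/-75 degrees multiplies the plane width by tan(75)/tan(45) ~= 3.7x,
# which is why keeping both cameras facing straight forwards on a wide-FOV canted HMD
# inflates the render target so badly.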


Games that have VR support retrofitted generally require PPC.

Games that use engines such as Unreal and Unity - both with strong native VR support and the ability to handle canted viewplanes - do not require it.

Pimax can further optimise but it is down to Frontier to do the grunt work in the Cobra engine.
 
One solution would be for Frontier to implement Multi-View Rendering

[Image: Nvidia Multi-View Rendering diagram (MVR_white_003.png)]


Benefits

Multi View Rendering expands on the Single Pass Stereo features, increasing the number of projection centers or views for a single rendering pass from two to four. All four of the views available in a single pass are now position-independent and can shift along any axis in the projective space.

This unique rendering capability enables new display configurations for Virtual Reality. By rendering four projection centers, Multi View Rendering can power canted HMDs (non-coplanar displays) enabling extremely wide fields of view.

Details

Single Pass Stereo was introduced with Pascal and uses the Simultaneous Multi-Projection (SMP) architecture of Pascal to draw geometry only once, then simultaneously project both right-eye and left-eye views of the geometry. This allowed developers to almost double the geometric complexity of VR applications, increasing the richness and detail of their virtual worlds.

Turing’s Multi-View Rendering (MVR) capability is an expansion of the Single Pass Stereo (SPS) functionality introduced in the Pascal architecture, which processes a single geometry stream across two different projection centers for more efficient rendering of stereo displays.

MVR expands the number of viewpoint projections from two to four, helping to accelerate VR headsets that use more displays. SPS also only allowed for eyes to be horizontally offset from each other with the same direction of projection. With Turing MVR, all four of the views available in a single pass are now position-independent and can shift along any axis in the projective space. This enables VR headsets that use canted displays and wider fields of view. While the general assumption in stereo rendering is that eyes are just offset from each other in the X dimension, in practice, human asymmetries may need adjustments in multiple dimensions to more precisely fit individual faces. Turing helps to accelerate applications built for headsets with these customizations.

(source: Nvidia VRWorks SDK - https://developer.nvidia.com/vrworks)
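
As a rough picture of what those "position-independent views" buy a canted HMD, here is a small sketch (the IPD and cant angles are invented for illustration; this is not the VRWorks API, just the kind of per-view geometry an engine would have to supply):

Code:
import math

def forward(yaw_deg):
    """Unit forward vector for a view yawed by yaw_deg about the vertical axis
    (0 deg = straight ahead along -Z)."""
    y = math.radians(yaw_deg)
    return (math.sin(y), 0.0, -math.cos(y))

IPD = 0.064   # assumed interpupillary distance in metres
CANT = 10.0   # assumed outward cant per eye, in degrees

views = [
    # SPS: two views that must share one forward direction (yaw = 0).
    # MVR: up to four views, each with its own position *and* its own direction, e.g.:
    ("left eye (canted)",  (-IPD / 2, 0.0, 0.0), forward(-CANT)),
    ("right eye (canted)", (+IPD / 2, 0.0, 0.0), forward(+CANT)),
    ("left outer",         (-IPD / 2, 0.0, 0.0), forward(-45.0)),
    ("right outer",        (+IPD / 2, 0.0, 0.0), forward(+45.0)),
]

for name, pos, fwd in views:
    print(f"{name:18s} pos={pos} fwd=({fwd[0]:+.3f}, {fwd[1]:+.3f}, {fwd[2]:+.3f})")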



VRWorks also includes Variable Rate Shading, Multi-Res Shading, Lens Matched Shading and Single Pass Stereo, all of which were designed specifically for VR render-pipeline optimisation.

Frontier was a pioneer in VR implementation, with Elite supporting Oculus and OpenVR, but sadly they no longer continue to innovate by adopting cutting-edge rendering techniques for VR.

They are clearly capable of doing so but no doubt the engineering effort is considerable and the associated development cost has no tangible fiscal return. Consequently the work remains deep in the technical debt backlog :rolleyes:
 
Kickstarter!!!!!!!
 
What can be done about this? Do Frontier need to implement a solution or can Pimax do something with their software?

Apparently, Pimax has already done their homework (although we may ask whether they were successful), and now it is up to Frontier. I have been poking around the Pimax rendering pipeline lately (on my 5K+) using ED as a testing vehicle, so I have some data, if anyone is interested:

The hit of turning parallel projections on seems to be 50% more pixels on my setup (1080 Ti). Here are the reports from vrcompositor:

Code:
PP on, Pi Tool 1.0, Normal FOV


****************************************** Begin GPU speed ******************************************
MeasureGpuMegaPixelsPerSecond(): Returning 885 MP/sec. Total CPU time 0.10 seconds.
GPU Vendor: "NVIDIA GeForce GTX 1080 Ti" GPU Driver: "25.21.14.1771"
GPU speed from average of 6 median samples: 883
HMD driver recommended: 3852x3291 90.0Hz HiddenArea(8.73%) = 2082 MP/sec
Raw ideal render target scale = 0.42
New render target scale = 0.42 = 2496x2133. Total CPU time 0.10 seconds.
******************************************* End GPU speed *******************************************


PP off, PiTool 1.0, Normal FOV


****************************************** Begin GPU speed ******************************************
MeasureGpuMegaPixelsPerSecond(): Returning 885 MP/sec. Total CPU time 0.10 seconds.
GPU Vendor: "NVIDIA GeForce GTX 1080 Ti" GPU Driver: "25.21.14.1771"
GPU speed from average of 6 median samples: 883
HMD driver recommended: 3202x2633 90.0Hz HiddenArea(8.73%) = 1385 MP/sec
Raw ideal render target scale = 0.64
New render target scale = 0.64 = 2562x2106. Total CPU time 0.10 seconds.
******************************************* End GPU speed *******************************************

(3852*3291)/(3202*2633) = ~1.50


It may seem like a lot, but I would estimate the real impact at about 1-2 ms of additional rendering time per frame (on my setup), which with PP on is about 16 ms, so the slowdown would be roughly 15%. This is noticeable, but I would not call it "colossal" nor "struggle worthy", especially with an RTX 2080 Ti.
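
A quick sanity check of that arithmetic (a rough sketch of my estimate, not a measurement):

Code:
# Numbers taken from the vrcompositor reports above.
pp_on  = 3852 * 3291   # recommended render target with parallel projections on
pp_off = 3202 * 2633   # recommended render target with parallel projections off

print(f"pixel ratio: {pp_on / pp_off:.2f}")   # ~1.50, i.e. about 50% more pixels

# If those extra pixels cost roughly 1-2 ms on a ~16 ms frame (PP on),
# the relative slowdown versus PP off works out to roughly 7-14%:
frame_pp_on_ms = 16.0
for extra_ms in (1.0, 2.0):
    frame_pp_off_ms = frame_pp_on_ms - extra_ms
    print(f"+{extra_ms:.0f} ms -> {100 * extra_ms / frame_pp_off_ms:.1f}% slower than PP off")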

Now, considering multi-view rendering on the RTX series, this will definitely help, as the Pimax HMD geometry uses divergent views, which cannot be rendered with the single-pass stereo rendering supported by Pascal. (Here is my post about the Pimax geometry on the Pimax forum: https://forum.pimaxvr.com/t/some-thoughts-on-the-ipd-discrepancy/14754)

The problem is that such an optimization will help only on the RTX series, and Pascal owners (myself included) will be left out in the cold. So, while it may sound strange, a possibly better solution would be to implement single-pass stereo rendering (which runs on both Pascal and Turing) and keep parallel projection ON, which would address a much larger user base.
 

Good data.

If your setup with PPC has a motion-to-photon time of 16 ms, that overshoots the native-refresh v-sync budget by 4.9 ms, which means constant reprojection.
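
Where the 4.9 ms comes from (a trivial check, assuming the ~16 ms figure above):

Code:
# Frame budget at the native refresh rate vs. the reported ~16 ms frame time (PP on).
refresh_hz = 90
budget_ms = 1000 / refresh_hz     # ~11.1 ms per frame at 90 Hz
frame_time_ms = 16.0              # reported frame time with parallel projections on
print(f"over budget by {frame_time_ms - budget_ms:.1f} ms")   # ~4.9 ms -> reprojection every frame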
 
Personally, I would like to see FDev resolve the current VR issues rather than tending to the less than 1% with VERY high-end PCs and a 5K headset:

WMR support would be a good start
Issues with the extra resolution of the Odyssey
HUD Colors
Galmap and various screens not optimised for VR
In game virtual keyboard
Steam switch window support broken since 3.0
Over bright cockpit
Etc.
Etc.

o7
 

If this affects less than 1% and other issues are more pressing then fair enough, but not everyone with a 5K+/8K has a high-end PC. I am running a 3rd-gen i7 / DDR3 memory / GTX 1080 and the 5K+ is usable at 120 FoV with medium settings.
It seems to be a misconception that a very high-end PC is required for the 5K+; plus, when they release the next version of their software, I might be able to run at 150 FoV thanks to their "Brainwarp" implementation.

I only make this point as others who want to try these headsets but don't have the latest hardware might think they are excluded, when actually the headset allows for reducing FoV, halving the frame rate (next version) and so on, making it accessible to more people.
 

I'm running on Normal FOV with my 2070 on settings between VR Low and Normal. You should be able to run it on Normal FOV.
 

I have run it using Normal FoV, but it is not as smooth; I think the old CPU and DDR3 are the limit, hence the interest in this topic on parallel projection.
 
For those interested in the topic, I expect you will also be interested to know the new PiTool is available here: https://pimaxvr.com/pages/pitool

It includes new Brainwarp features, such as refresh rate switching and the Smart Smoothing function.


New PiTool update.
 
I have found the latest beta to be an improvement. While out exploring on my old 2012 PC (i7-3770, 16 GB DDR3-1600, GTX 1080), I can run 150 FOV with PiTool 1.25 and SteamVR at 50%. Nice smooth experience. I'm very happy to be able to run these settings on such an old PC.

For those with a 5K+, the latest beta brings brightness and contrast settings.
 
I have tested the latest PiTool and firmware update and I'm happy to report that they have knocked it out of the park with this one!

The distortion profile in Normal FOV is now spot on - edge to edge. I was actually pretty shocked at the improvement and did my best to try and discern any blur/wobble with some averted vision eyeball gymnastics. There is none.

64 Hz is usable for sims. The only time you notice the latency is during high-speed head movements, which is not something you do a lot of in a cockpit. Render workload is reduced by nearly 30% (64 Hz vs 90 Hz is roughly 29% fewer frames per second), which allows for an increased render target on well-optimised engines.

Previously the brightness was dimmed considerably when selecting the 64 Hz refresh rate, but now that there are gain controls available for brightness and contrast, you can jack it back up to nominal.

Of course 90 Hz is still king, but if you are struggling to make frame time it's great to have the lower refresh rate and avoid reprojection.

I didn't test FFR or motion smoothing because, meh.

gg Pimax (y)
 