I'd speculate that getting the reticle comfortable should be one of the simpler aspects of it all: a single raycast to the targeted point on an object (...or alternatively a depth buffer lookup) to determine the distance, and consequently how much stereo separation to draw the reticle with, matching that of the point on the object -- the reticle does not resize.
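To make the idea concrete, here is a minimal sketch of the separation math, assuming a simple pinhole camera model; the function name and the IPD/focal-length defaults are my own placeholders, not anything from an actual engine:

```python
# Sketch: draw the reticle at the depth of whatever it points at, so its
# stereo separation matches the object's. The raycast itself is engine
# work; given its hit distance, the screen-space disparity follows the
# standard relation disparity = focal_length * baseline / depth.

def reticle_disparity_px(hit_distance_m, ipd_m=0.063, focal_length_px=1000.0):
    """Screen-space separation (in pixels) between the left- and
    right-eye reticle images for a point hit_distance_m away."""
    return focal_length_px * ipd_m / hit_distance_m

# A target 2 m away needs 31.5 px of separation with these numbers;
# one 50 m away needs only 1.26 px, i.e. effectively at infinity.
near = reticle_disparity_px(2.0)   # -> 31.5
far = reticle_disparity_px(50.0)   # -> 1.26
```

The point being: the separation falls off as 1/distance, which is why a reticle drawn at a fixed screen position (zero disparity, i.e. infinity) feels wrong when aimed at something close.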
Doing it all with the 3rd person camera is probably a more fiddly prospect than just doing it right to begin with, using the main game camera (stereo pair version), in my humble armchair-developer musings.
We need:
- Player camera object replaced with stereo pair player camera object.
- Reticle overlay aligned and parallax corrected, as mentioned above.
- The quads that the other HUD elements are drawn on to be arranged in 3D space (not screen buffer space), such that they are comfortably positioned and aligned within your view without obstructing it.
- Add headturn to mouse(-or whatever)look on yaw, and let it replace other input on pitch.
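On that last point, a minimal sketch of what "add on yaw, replace on pitch" could mean, with all names being my own hypothetical stand-ins:

```python
# Sketch: merge HMD head-look with mouse/stick look. Head yaw is
# additive on top of the avatar's yaw; pitch comes from the headset
# alone, with the mouse/stick pitch input deliberately ignored in VR.

def combined_view(avatar_yaw_deg, head_yaw_deg, head_pitch_deg, input_pitch_deg):
    yaw = avatar_yaw_deg + head_yaw_deg   # head-turn adds on yaw
    pitch = head_pitch_deg                # head replaces other input on pitch
    _ = input_pitch_deg                   # discarded on purpose
    return yaw, pitch

# Avatar facing 90 deg, head turned 15 deg right and pitched 10 deg
# down, mouse asking for 30 deg up:
yaw, pitch = combined_view(90.0, 15.0, -10.0, 30.0)
# -> yaw 105.0, pitch -10.0: the mouse no longer steers the view's pitch
```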
Optionals:
- Allow camera roll, and maybe add a head-turn roll animation to the player avatar. If unrestricted, the latter could have an impact on avatar animation and IK.
- Allow player real-world translation to translate the player avatar in two or three axes, whilst clamping the maximum speed of motion, so that one cannot add one's real-world translation to the avatar's "drive" velocity for a short, cheaty superhuman spurt into cover. Colliders still stop the avatar dead, the same as with all other game motion, even if the player still moves in their playspace -- if this makes them nauseated, they have only themselves to blame for trying to walk through a wall. If allowing ducking: either swap between the game's upright and crouched states at a threshold height (Skyrim does this), or make it follow contiguously; the latter would add IK issues, as mentioned in the preceding point.
- Draw the inside of the helmet. This could also help a little with motion sickness, by providing a visual anchor, like the cockpit does in-vessel.
- Subdivide the HUD quads and curve them, if so desired, to help with readability on low-resolution headsets. (Unless there are (for some reason) shader-level effects on the HUD, this makes no difference to complexity -- a bitmap being UV-mapped to a complex mesh is no different from it being UV-mapped to a single pair of triangles.)
- Do not lock the helmet motion exactly 1:1 with the player's head, leaving a little room to look around inside it. If this is done: A) optionally parent the HUD to the helmet, rather than directly to the player's head. B) there arises the need to determine just how the helmet follows head-look: delays, inertia, motion range limits, overflows of those propagating into avatar turning (which already happens with mouse/stick-look), etc...
- On the matter of the screenspace-reflections-in-the-visor effect... First of all, one has to question whether there should realistically be any reflections of the outside at all on the inside of the helmet... If one absolutely must have them: do they really need to be in proper stereo, or could they be accepted as just their mono selves, mapped identically to the visor mesh for both eyes, appearing as an additively drawn 2D picture on its surface? These reflections are inherently not physically correct in the first place anyway -- only raytracing can produce that.
- EDIT: Condensation needs to be drawn on the inside of the visor (...or a decal applied to it), but this essentially falls under the previous HUD positioning point.
- The whole thing with motion controllers, but then we begin to get into the whole thing with a "proper" VR implementation.
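Circling back to the real-world-translation point above, a minimal sketch of the speed clamp, with all names and numbers being my own illustrative assumptions:

```python
# Sketch of the anti-cheat clamp: playspace movement maps onto avatar
# movement, but the combined per-frame displacement (drive velocity plus
# real-world motion) is capped so a lunge can't exceed max drive speed.

def clamped_step(drive_velocity, playspace_delta, dt, max_speed):
    """Return the avatar's position delta for one frame of length dt,
    with the total speed capped at max_speed (m/s)."""
    total = [v * dt + p for v, p in zip(drive_velocity, playspace_delta)]
    speed = sum(c * c for c in total) ** 0.5 / dt
    if speed > max_speed:
        scale = max_speed / speed
        total = [c * scale for c in total]
    return total

# Sprinting forward at 6 m/s while lunging 0.5 m in one 0.1 s frame
# would be 1.1 m of travel (11 m/s); with a 7 m/s cap it is scaled
# back to 0.7 m. The collider check happens afterwards as usual.
step = clamped_step([0.0, 0.0, 6.0], [0.0, 0.0, 0.5], 0.1, 7.0)
```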