https://developer.oculus.com/documentation/pcsdk/latest/concepts/dg-render/#dg-render-layers

Similar to the way a monitor view can be composed of multiple windows, the display on the headset can be composed of multiple layers. Typically at least one of these layers will be a view rendered from the user's virtual eyeballs, but other layers may be HUD layers, information panels, text labels attached to items in the world, aiming reticles, and so on.
Each layer can have its own resolution, texture format, field of view and size, and can be mono or stereo. An application can also skip updating a layer's texture when its contents have not changed, for example when the text in an information panel is the same as last frame, or when the layer is a picture-in-picture view of a low-framerate video stream. Applications can supply mipmapped textures to a layer; combined with a high-quality distortion mode, this is very effective at improving the readability of text panels.
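As a rough sketch of how that looks in the PC SDK (LibOVR), a mipmapped text-panel quad with the high-quality flag might be set up along these lines; the sizes, pose, and the CreateHudLayer helper are placeholder assumptions for illustration, not anything Elite is known to do:

```cpp
#include <OVR_CAPI_GL.h>

// Sketch: a mipmapped swap chain feeding a HUD/text quad layer (OpenGL path).
ovrLayerQuad CreateHudLayer(ovrSession session)
{
    ovrTextureSwapChainDesc desc = {};
    desc.Type        = ovrTexture_2D;
    desc.Format      = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
    desc.ArraySize   = 1;
    desc.Width       = 1024;   // a layer's resolution is independent of the eye buffers
    desc.Height      = 512;
    desc.MipLevels   = 8;      // mipmapped texture, which helps text readability
    desc.SampleCount = 1;

    ovrTextureSwapChain hudChain = nullptr;
    ovr_CreateTextureSwapChainGL(session, &desc, &hudChain);

    ovrLayerQuad hud = {};
    hud.Header.Type  = ovrLayerType_Quad;
    hud.Header.Flags = ovrLayerFlag_HighQuality;   // request the high-quality distortion path
    hud.ColorTexture = hudChain;
    hud.Viewport     = { { 0, 0 }, { desc.Width, desc.Height } };
    hud.QuadPoseCenter.Orientation = { 0.0f, 0.0f, 0.0f, 1.0f };
    hud.QuadPoseCenter.Position    = { 0.0f, 0.0f, -1.5f };  // 1.5 m in front of the user
    hud.QuadSize     = { 1.0f, 0.5f };                        // panel size in metres

    // If the panel's contents haven't changed, simply don't render into or
    // commit hudChain this frame; the compositor keeps using the last image.
    return hud;
}
```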
Every frame, all active layers are composited from back to front using pre-multiplied alpha blending. Layer 0 is the furthest layer, layer 1 is on top of it, and so on; there is no depth-buffer intersection testing of layers, even if a depth buffer is supplied.
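Concretely, that ordering is simply the order of the layer list passed to ovr_SubmitFrame. Assuming a worldLayer (an ovrLayerEyeFov holding the eye-buffer render) alongside the hudLayer sketched above, and that session and frameIndex come from the application's frame loop:

```cpp
// Sketch: compositing order is the order of the layer list. Index 0 (the
// eye-buffer world render) is furthest; the HUD quad is drawn on top of it.
// There is no depth testing between the two, as noted above.
ovrLayerHeader* layerList[2] = { &worldLayer.Header, &hudLayer.Header };
ovrResult result = ovr_SubmitFrame(session, frameIndex, nullptr, layerList, 2);
```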
A powerful feature of layers is that each can be a different resolution. This allows an application to scale to lower-performance systems by dropping the resolution of the main eye-buffer render that shows the virtual world, while keeping essential information, such as text or a map, in a separate layer at a higher resolution.
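One plausible way to express that with LibOVR is to shrink only the eye buffers via the pixelsPerDisplayPixel argument of ovr_GetFovTextureSize; the 0.7 scale below is an arbitrary example for illustration, not a measured recommendation:

```cpp
// Sketch: undersample only the main eye buffers; the HUD quad above keeps the
// resolution of its own swap chain.
ovrHmdDesc hmdDesc = ovr_GetHmdDesc(session);
float pixelsPerDisplayPixel = 0.7f;  // < 1.0 drops world-render resolution only
ovrSizei leftSize  = ovr_GetFovTextureSize(session, ovrEye_Left,
                                           hmdDesc.DefaultEyeFov[ovrEye_Left],
                                           pixelsPerDisplayPixel);
ovrSizei rightSize = ovr_GetFovTextureSize(session, ovrEye_Right,
                                           hmdDesc.DefaultEyeFov[ovrEye_Right],
                                           pixelsPerDisplayPixel);
// ...create the eye-buffer swap chains at these sizes, while the HUD swap
// chain stays at its own, higher resolution.
```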
In other words, Elite could render the HUD at high resolution, or even supersample it, while letting users undersample the rest of the game world, so a crisp HUD wouldn't have to come with a huge performance cost. Are they using this feature, and could they be leveraging it better?