So here's a question for you graphics gurus: the ray tracing I'm familiar with (generating photorealistic scenes in software like RenderMan) requires the entire scene to be processed, not just what ends up on screen. For example, when we fly towards the sun at a certain angle, we should see our own reflections in the cockpit glass as the bright sun bounces off our Holo-Mes. That's why it can take hours, if not days (depending on your hardware), to render a single frame. I have to think the video-game version of ray tracing is "ray tracing lite", because this seems like a tremendous burden to handle in real time.
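To make that concrete, here's a toy sketch of why a ray tracer needs the whole scene (plain Python, everything made up, nothing like what RenderMan actually does internally): the "sun" sphere sits behind the camera and never appears on screen, but the reflection ray bounced off the "cockpit glass" still has to find it.

```python
import math

# Toy scene: spheres as (center, radius, emitted_brightness, reflectivity).
# The "sun" sits BEHIND the camera, so it's never on screen directly, but a
# reflective "cockpit glass" sphere in front of us still needs to find it.
SCENE = [
    ((0.0, 0.0,  5.0), 1.0, 0.0, 0.9),   # cockpit glass: a mirror ahead of us
    ((0.0, 0.0, -8.0), 2.0, 1.0, 0.0),   # sun: bright emitter behind the camera
]

def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def add(a, b):   return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t, or None (direction is unit length)."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, depth=0):
    """Follow a ray through the WHOLE scene, recursing on reflections."""
    if depth > 3:
        return 0.0
    best = None
    for center, radius, emit, refl in SCENE:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, radius, emit, refl)
    if best is None:
        return 0.0                                        # ray escaped the scene
    t, center, radius, emit, refl = best
    if refl <= 0.0:
        return emit
    point = add(origin, scale(direction, t))
    n = scale(sub(point, center), 1.0 / radius)           # sphere surface normal
    r = sub(direction, scale(n, 2.0 * dot(direction, n)))  # mirror bounce
    return emit + refl * trace(point, r, depth + 1)

# The primary ray looks straight ahead at the glass; the brightness it returns
# comes entirely from the off-screen sun via the bounced reflection ray.
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))   # -> 0.9
```

Cull the sun sphere because "it's off-screen" and that reflection goes black, which is exactly why offline renderers keep the whole scene resident.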
In a game like Elite, you could probably get away with multi-pass selective ray tracing, where things inside the cockpit are illuminated by local stars and perhaps even nearby planetshine, but not by every light coming from inside a station... I'm just really curious now what video-game ray tracing actually is and how it's done compared to "real" ray tracing (the latter sometimes taking shortcuts itself when LOTS of light sources are in play).
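The selective part might look something like this (again just a made-up Python sketch, not anyone's actual engine): rank the lights by a cheap importance estimate and only spend shadow rays on the top few, so the local star and planetshine survive the cut while hundreds of distant station lamps get skipped.

```python
# Toy sketch of "selective" lighting: before shading a point, rank the lights
# by estimated contribution (intensity over squared distance) and evaluate
# only the top few. All names and thresholds here are made up.

def estimated_contribution(light, point):
    """Cheap importance estimate: intensity falling off with squared distance."""
    dx = light["pos"][0] - point[0]
    dy = light["pos"][1] - point[1]
    dz = light["pos"][2] - point[2]
    return light["intensity"] / (dx * dx + dy * dy + dz * dz)

def shade(point, lights, max_lights=2):
    """Shade using only the max_lights most important sources."""
    ranked = sorted(lights, key=lambda L: estimated_contribution(L, point),
                    reverse=True)
    total = 0.0
    for light in ranked[:max_lights]:
        # A full renderer would trace a shadow ray here; we just sum estimates.
        total += estimated_contribution(light, point)
    return total

cockpit_point = (0.0, 0.0, 0.0)
lights = [
    {"name": "local star",  "pos": (0.0, 0.0, -100.0), "intensity": 1e6},
    {"name": "planetshine", "pos": (50.0, 0.0, 0.0),   "intensity": 5e3},
] + [
    # Hundreds of tiny station lights, individually negligible at this range.
    {"name": f"station lamp {i}", "pos": (500.0 + i, 20.0, 0.0), "intensity": 10.0}
    for i in range(200)
]

print(shade(cockpit_point, lights))  # dominated by the star plus planetshine
```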