Not the hardware requirements. The software requirements. Today's VR has higher resolutions, higher framerates and higher expectations from players. So it is more expensive to support in software. Braben in the press release:
They've likely engaged in some serious software engineering, and decided VR support on launch was not worth the extra time and money.
Well, you'd said "And quite likely cheaper to do back then. Today's hardware is more demanding, and players have other games to compare it to." so I'd read that as a hardware query, but fair enough: if it was the software you were talking about, my answer would have been irrelevant. The software side of putting VR in a game wouldn't have changed much, if at all; it's still going to be SteamVR or the Oculus API that gets accessed to make a game communicate with the VR headset, and those have remained pretty consistent and backwards compatible, if not totally unchanged, over the years.
The resolutions and framerates haven't gone up that much either. Sure, the HP Reverb is 2160x2160 per eye, but there was the Pimax at 3840x2160 per eye launched not long after the Oculus Rift / HTC Vive, and the Rift has always used a 90 Hz refresh, which is still about the norm for most headsets now. More to the point, increases in resolution or framerate don't burden the game's developers; they put extra computational load on the computer, but it's the same code.
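To put rough numbers on that, here's a back-of-envelope sketch (nominal panel specs only; real load is higher once you add supersampling and lens-distortion overdraw):

```cpp
#include <cstdio>

// Back-of-envelope pixel throughput per headset: width * height * 2 eyes * refresh.
// Figures are nominal panel specs; real GPU load is higher in practice.
struct Headset { const char* name; int w, h, hz; };

int main() {
    const Headset sets[] = {
        {"Oculus Rift CV1", 1080, 1200, 90},
        {"HP Reverb",       2160, 2160, 90},
    };
    for (const Headset& s : sets) {
        double pixelsPerSec  = 2.0 * s.w * s.h * s.hz; // both eyes, every frame
        double frameBudgetMs = 1000.0 / s.hz;          // time the GPU has per frame
        printf("%-16s %.2f Gpix/s, %.1f ms per frame\n",
               s.name, pixelsPerSec / 1e9, frameBudgetMs);
    }
    // The game code is identical in both cases; only the GPU's workload changes.
}
```

That's roughly a 3.6x jump in raw pixel throughput from Rift to Reverb, but the frame budget, and the game code producing each frame, are the same.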
Basically, as I understand it and overly simplified for a late-night forum post: the game gets the interpupillary distance (IPD, the distance between your eyes), head orientation (yaw, pitch, roll) and the position of the headset in X + Y + Z space from the VR API, and thus ascertains what the player can see. It then says to the gfx card: render me this picture at this resolution (left eye), and this one of the same scene but with the camera IPD millimetres to the right of the previous image's viewpoint (the difference in left/right eye height due to headset roll, i.e. leaning left or right, is also a factor here, but let's keep this simple). The resultant pictures are then sent through the VR API, where the VR software sends the images and the corresponding cued sounds to the headset.
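A minimal sketch of that loop, using made-up vr_* stand-ins rather than any real runtime's actual calls (SteamVR and the Oculus SDK each have their own equivalents):

```cpp
#include <cstdio>

// Hypothetical stand-in for a VR runtime -- the real calls differ between
// SteamVR and the Oculus SDK, but the shape of the per-frame loop is the same.
struct Pose { float x, y, z, yaw, pitch, roll; };

Pose  vr_get_head_pose()  { return {0.0f, 1.7f, 0.0f, 0.0f, 0.0f, 0.0f}; } // stub
float vr_get_ipd_metres() { return 0.064f; }                                // stub
void  vr_submit_eye(int eye, float camX) {                                  // stub
    printf("eye %d rendered from x = %+.3f m\n", eye, camX);
}

int main() {
    Pose  head = vr_get_head_pose();
    float ipd  = vr_get_ipd_metres();

    for (int eye = 0; eye < 2; ++eye) {
        // Each eye's camera sits half the IPD to the left/right of the head
        // position (simplified: a real renderer rotates this offset by the
        // head orientation, which is how roll tilts the eye line too).
        float side = (eye == 0) ? -0.5f : +0.5f;
        float camX = head.x + side * ipd;

        // renderSceneFromCamera(camX, head) would go here; the runtime then
        // lens-distorts the image and pushes it to whatever headset is attached.
        vr_submit_eye(eye, camX);
    }
}
```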
As long as the game has its assets modelled to a decent resolution, its textures mapped to a decent resolution, an engine that supports camera rotation (pretty much everything since the original Doom and Wolfenstein on DOS) and the API hooks in the code, the game will run VR, and all the heavy lifting of the VR gubbins, such as talking to the different headsets' resolutions, screen refresh rates, and positional-tracking software and sensor setups, is done by the Oculus or SteamVR client. The higher resolution the headset, the more complex the equations the gfx card has to do to render those pictures; the higher the refresh rate, the more frequently (as in, less time per picture) the gfx card has to render them.
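That division of labour is why a higher-resolution headset doesn't mean new game code: the game just asks the runtime what size to render at. A tiny sketch, again with a made-up stand-in rather than any real API call:

```cpp
#include <cstdio>

// Hypothetical runtime query -- real runtimes expose an equivalent
// "recommended render target size" lookup, which is why the same game
// code can drive a 1080x1200 Rift or a 2160x2160 Reverb unchanged.
void vr_get_render_target_size(int* w, int* h) { *w = 2160; *h = 2160; } // stub

int main() {
    int w = 0, h = 0;
    vr_get_render_target_size(&w, &h);  // headset-specific; the game doesn't care
    printf("allocating two %dx%d eye buffers\n", w, h);
    // ...allocate per-eye framebuffers of w x h and render as normal...
}
```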
The above might be slightly out; it might be that the API asks the game for a render from this position and direction for one eye, and the same for the other, and the game talks to the gfx card and sends the picture data back to the API to forward to the headset. But the fact remains that the heavy lifting of interfacing with the headset directly, and all its hardware specifics (resolution, refresh rate and sensor data), is handled by the VR client.
With regards to the assets being sufficiently high resolution: David Braben has said that Elite's assets are modelled at 16k resolution, although I suspect that is not what is shipped in the game; it is headroom for future tech and game updates.
The trick you mentioned about scaling the galaxy from light years to centimetres may have been reworked in Odyssey, but it's something the game has done since launch, and done very well. Essentially, you could go to a planet surface and land at an NPC base, shoot up some skimmers with centimetre tolerance on their position and your weapons fire, and/or scan a data point, again with centimetre-level accuracy for targeting and scanning the reciprocating antenna; that's all at one scale, the super-accurate sub-centimetre one you mention.

Then you jump into your ship and make a break for it; a couple of km up you engage supercruise, and now you are dealing with a scale of a couple of light-years cubed for the system volume. You hit your nav panel, go into the galaxy map, plot a route to a system a couple of jumps away, and start jumping to get there. Each time you jump, the skybox renderer is working on a scale of thousands of light years to render the background stars relevant to where the star system you are in is located. You can see this clearly approaching the Pleiades nebula, when you see the nebula getting bigger and bigger over the last couple of jumps.
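For what it's worth, one common way to pull that range of scales off (a sketch of the general technique; I'm not claiming it's what Frontier actually does) is hierarchical coordinates: a coarse integer "sector" plus a fine local offset, so no single float or double ever has to hold "120 light-years, to the centimetre":

```cpp
#include <cstdio>
#include <cstdint>
#include <cmath>

// Sketch of hierarchical coordinates -- a common technique for games spanning
// centimetres to thousands of light-years; not necessarily Frontier's method.
const double SECTOR_M = 1.0e12;  // hypothetical sector edge length, in metres

struct Pos {
    int64_t sx, sy, sz;  // which sector (integer, exact at any range)
    double  lx, ly, lz;  // offset inside the sector, in metres
};

// Difference of two positions, evaluated so precision is set by the
// *relative* distance, not by the absolute distance from the origin.
void diff(const Pos& a, const Pos& b, double out[3]) {
    out[0] = double(a.sx - b.sx) * SECTOR_M + (a.lx - b.lx);
    out[1] = double(a.sy - b.sy) * SECTOR_M + (a.ly - b.ly);
    out[2] = double(a.sz - b.sz) * SECTOR_M + (a.lz - b.lz);
}

int main() {
    // Player and a skimmer in a sector ~120 ly from the galaxy origin:
    Pos camera  = { 1135000, 0, 0,  500.00, 0.0, 1.7 };
    Pos skimmer = { 1135000, 0, 0,  500.42, 0.0, 0.0 };

    double d[3];
    diff(skimmer, camera, d);  // same sector: the huge part cancels, cm-exact
    printf("skimmer: %.2f m away\n",
           std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]));

    Pos farStar = { 1136000, 0, 0,  0.0, 0.0, 0.0 };
    diff(farStar, camera, d);  // 1000 sectors away: still plenty of precision
    printf("star: %.3g light-years away\n", d[0] / 9.4607e15);
}
```

The renderer then only ever sees small camera-relative offsets, which is how you get centimetre targeting at a base and a coherent skybox thousands of light-years deep out of the same engine.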