I am not holding my breath, but I still hold out hope that the refinement pass after 2.4 may include some VR optimisations, in addition to the expected front-and-centre focus on gameplay mechanics.
There are quite a few things that could be done, such as adaptive quality, which scales graphics fidelity up and down automagically, just as much as is needed to maintain 90 frames per second - this is one that could benefit non-VR play as well.
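The basic idea is a simple feedback loop: watch the frame time, back off quality quickly when you blow the budget, and creep back up when there is headroom. A minimal sketch (the function name, step sizes, and clamp range are all made-up illustrative values, not anything from an actual engine):

```python
# Hypothetical adaptive-quality controller: nudge a render-scale factor
# up or down so frame time stays under the 90 fps budget (~11.1 ms).
TARGET_MS = 1000.0 / 90.0

def adjust_render_scale(scale, frame_ms,
                        headroom=0.9, step=0.05,
                        lo=0.5, hi=1.0):
    """Return a new render scale clamped to [lo, hi].

    Drops quality fast when over budget, raises it slowly when the
    last frame finished with comfortable headroom (illustrative numbers).
    """
    if frame_ms > TARGET_MS:
        scale -= step * 2          # over budget: back off quickly
    elif frame_ms < TARGET_MS * headroom:
        scale += step              # spare time: creep back up
    return max(lo, min(hi, scale))
```

The asymmetry (fast down, slow up) matters in VR: one dropped frame is far more noticeable than a few frames rendered slightly softer.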
NVidia VRWorks (or equivalent) features are a less universal thing, but there are certainly quite a few cycles to be saved by implementing things like single-pass stereo - set the scene up once and rasterise the two viewports (left and right eye) in one go, instead of preparing each separately - and lens-matched shading - run pixel shaders at a lower resolution out in the periphery, where the screen is anisotropic to your eye and the lens compresses the image anyway.
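A toy cost model makes the single-pass-stereo saving obvious: per-eye rasterisation still happens twice, but the per-frame scene setup (culling, constant-buffer updates, draw-call submission) is shared between eyes rather than repeated. These functions and numbers are purely illustrative, not real engine figures:

```python
# Toy comparison of per-frame CPU/GPU cost, in arbitrary units.
def render_two_pass(setup_cost, raster_cost):
    """Naive stereo: the whole pipeline runs once per eye."""
    return 2 * (setup_cost + raster_cost)

def render_single_pass(setup_cost, raster_cost):
    """Single-pass stereo: setup is shared, only rasterisation doubles."""
    return setup_cost + 2 * raster_cost
```

The heavier the scene-setup side of a frame, the bigger the win, which is why draw-call-heavy scenes benefit most.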
These sorts of things can be a little bit hairy, though, since you will probably want to be prepared for future HMDs that do not have the particular characteristics and weaknesses of current ones, rather than tying yourself down to a particular proprietary solution to a problem that may be moot tomorrow. :7
Eye-tracking (which would also improve the HUD gaze-selection we already have today) and foveated rendering (only render at the highest detail in the spot where you are looking, since the photoreceptor distribution on the retina of a human eye heavily favours a very small area) will come; light-field displays will probably come (solving several problems); and I'm not ruling out - and am very much hoping for - real-time raytracing supplanting rasterising in game engines at some point (chips have been shown, as has fast raymarching).
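Foveated rendering boils down to picking a shading rate from how far each screen region sits from the gaze point, mirroring the retina's steep fall-off in acuity. A minimal sketch - the 5 and 15 degree ring boundaries and the rate factors are made-up illustrative values:

```python
import math

# Hypothetical foveation schedule: full shading resolution in a small
# cone around the gaze point, coarser rates further out.
def shading_rate(pixel_deg, gaze_deg):
    """Return a coarseness factor (1 = full rate, 4 = quarter rate)
    from the angular distance between a screen region and the gaze point,
    both given in degrees of visual angle."""
    ecc = math.hypot(pixel_deg[0] - gaze_deg[0],
                     pixel_deg[1] - gaze_deg[1])
    if ecc < 5.0:        # fovea: full detail
        return 1
    elif ecc < 15.0:     # near periphery: half rate
        return 2
    else:                # far periphery: quarter rate
        return 4
```

With eye-tracking feeding the gaze point in every frame, the expensive full-rate region stays tiny no matter where you look, which is where the big savings come from.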
-Plenty of new and intriguing stuff that will, to various degrees, force engine writers to tear up their code and do things all over again. :7