I just used Ctrl+F.
From what I gathered from some Reddit posts via Google, this number should supposedly report actual numbers from the rendering pipeline. But I am not sure about that, since it seems a lot smoother than what I considered 45 fps before.
This is true from the point of view of the ED render engine. If SteamVR recognizes that the application cannot keep up with 90 fps, it will drop to 45 fps and use the reprojection feature to insert an interpolated frame. That handles movement caused by the HMD quite nicely, but it can never catch application-induced movement like fast-moving objects (other spaceships) or a close flyby of a station.
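To make the mechanism concrete, here is a toy sketch of that fallback decision (my own illustration, not SteamVR's actual implementation): at a 90 Hz refresh each frame has roughly an 11.1 ms budget, and when the application misses it, the compositor renders at half rate and fills the gaps with reprojected frames that only account for the new head pose.

```python
# Toy model of SteamVR's 90 -> 45 fps reprojection fallback.
# NOT the real SteamVR logic; just illustrates the frame-budget idea.

REFRESH_HZ = 90
FRAME_BUDGET_MS = 1000.0 / REFRESH_HZ  # ~11.1 ms per frame at 90 Hz

def effective_app_fps(app_frame_time_ms: float) -> int:
    """Frame rate the application is actually rendered at."""
    if app_frame_time_ms <= FRAME_BUDGET_MS:
        return REFRESH_HZ       # app keeps up: native 90 fps
    # App missed the budget: render at 45 fps; every other vsync shows
    # the previous frame re-warped to the latest HMD pose. In-frame
    # object motion (other ships, stations) is frozen in those frames.
    return REFRESH_HZ // 2

print(effective_app_fps(10.0))  # 90
print(effective_app_fps(14.0))  # 45
```

This also explains why GPU load drops to roughly half in those situations: the application is only asked for 45 real frames per second instead of 90.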
If you monitor GPU usage with a tool like GPU-Z, you will see the GPU drop to ~45% usage in these situations, because it now has only half the work to do.
I see you are using an i5. At what frequency does it run?
I am using an i7 4770K and an MSI 980 Ti Gaming. Originally I ran the CPU at the factory clock speed of 3.5 GHz (it has a boost speed of 3.9 GHz, but only when a single core is active, which is not the case with ED). With this I had real problems achieving 90 fps; I got there only on VR Low, and even then I had 45 fps in stations, for example.
I then looked into overclocking and was massively surprised by how much effect it had. I am now running at 4.2 GHz on all 4 cores and have also adjusted the cache clock to match the core clock. You can see my timings for various scenarios and settings here.
Adjusting the cache clock alone to match the core clock (from 800 MHz to 3.5 GHz) already made a huge difference.
The measurements I did also suggest (at least to me) that the CPU is quite a big factor in reaching 90 fps, not in sheer computing power, but rather in a synchronization issue between CPU and GPU. The improvement from raising the clock speed might simply mean the sync problem is then just barely avoided. I guess there is some optimization potential in this area. Let's hope FDev can find the time to address this properly.