Ideally you'll want a consistent 90 fully rendered frames per second. In stations, with a lot on screen, that's difficult to achieve even for very high-end systems.
Any time your 3D card fails to render in time for the Rift/Vive's fixed 90fps refresh rate, reprojection/ATW kicks in and shows a warped frame based on your latest head-movement data. The frame rate immediately drops to 45fps, because that's the number of genuinely new frames the HMD is getting each second.
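A small sketch of the arithmetic (my own illustration, not any SDK's API): the HMD refreshes every 1/90th of a second, and if a new frame isn't ready the compositor re-shows a warped copy of the previous one, so each frame ends up spanning a whole number of refreshes.

```python
import math

# Assumed 90 Hz headset refresh, as for the Rift/Vive.
REFRESH_HZ = 90
BUDGET_MS = 1000.0 / REFRESH_HZ  # ~11.1 ms per refresh

def effective_fps(render_time_ms):
    """Effective rate of truly new frames when each frame takes render_time_ms.

    A frame that overruns the budget occupies the next refresh(es) too,
    with reprojected copies shown in between.
    """
    refreshes_per_frame = math.ceil(render_time_ms / BUDGET_MS)
    return REFRESH_HZ / refreshes_per_frame

print(effective_fps(10.0))  # 90.0 - hitting the deadline
print(effective_fps(13.0))  # 45.0 - just missing it: each frame spans two refreshes
print(effective_fps(25.0))  # 30.0 - overrunning two refreshes in a row
```

This is why the drop is a step to 45fps rather than, say, 80fps: even a small overrun costs a whole extra refresh interval.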
While the reprojected frame is being displayed, the 3D card spends some of that time idling, so its utilisation will be lower (around 70%, as you've seen). The CPU, meanwhile, is already generating the next frame's scene geometry, so the cores involved should be at or near 100%.
Then it's on to the next frame as geometry data comes in from the CPU.
Of course, the CPU can be the one at fault: if it takes too long to generate the scene, the 3D card won't finish in time. Equally, if the 3D card is rendering at high detail settings, high pixel density, high super-sampling and so on, it can miss the 1/90th of a second (~11.1ms) frame deadline all by itself.
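To make that concrete, here's a hypothetical sketch (numbers invented for illustration): with the CPU preparing the next frame while the GPU renders the current one, the steady-state frame time is governed by whichever stage is slower, so either side alone can blow the budget.

```python
# Assumed 90 Hz refresh; either stage exceeding the budget causes a miss.
BUDGET_MS = 1000.0 / 90  # ~11.1 ms

def hits_deadline(cpu_ms, gpu_ms):
    """True if a pipelined CPU+GPU pair can sustain 90fps.

    In steady state the pipeline runs at the pace of its slowest stage,
    so the effective frame time is max(cpu_ms, gpu_ms).
    """
    return max(cpu_ms, gpu_ms) <= BUDGET_MS

print(hits_deadline(8.0, 10.0))  # True  - both stages fit the budget
print(hits_deadline(14.0, 6.0))  # False - CPU-bound miss
print(hits_deadline(5.0, 15.0))  # False - GPU-bound miss
```

Which stage is the bottleneck is exactly what the per-frame timing HUDs are for.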
The Rift has dedicated performance HUDs showing latency, CPU and game render timings, etc. Not sure about the Vive?