Furthermore, John Carmack went into great detail about wanting to push Samsung's hardware division to work on a brand new form of refresh. We are currently using the ancient VBLANK refresh system, a vestige of the CRT era, when monitors refreshed the screen one line at a time from top to bottom. What Carmack has proposed is a dynamic refresh rate: not dynamic in the sense that G-Sync / FreeSync does it, but dynamic in the sense that some pixel lines may not need to turn on at all. He says he is confident that his idea of a dynamically interlaced 60 Hz refresh would look just as good as a full 90 Hz refresh. In practical terms, that would mean we need no more than 60 FPS to get the best experience. He also says this would require some changes to the rendering process, but I'm sure Quietman knows 100x more about this than me. I don't fully understand it, but I think he means dynamic rendering (only re-rendering the pixels in whatever part of the frame actually changed in the scene), combined with a dynamic refresh that tries to follow the newly rendered pixels.
I have watched this video before, but I watched it again and this time with extreme INTENT
I'll try to keep my explanation of my understanding of JC's interlaced display and dynamic pixel persistence as simple as possible, (this is tough for me to do though ;P); but first I think we need to better understand what all these terms actually mean.
- Interlaced displays, as he stated, were invented long ago to fool the eye/brain into seeing something that really isn't there. The problem with old displays was that, due to hardware and bandwidth limitations, content could only be updated at either 25 or 30 frames per second, (e.g. PAL/NTSC); you either update at 25/30 FPS with a low persistence phosphor and suffer a strobing/flickery mess, or you update at 25/30 FPS with a high persistence phosphor, (like a CRO, Cathode Ray Oscilloscope), and suffer blurry/smeary goo.
There is a third option: interlacing. Instead of sequentially traversing the scan lines 0-524 in order, with each complete frame displayed at 1/30 second intervals, you display all the even-numbered lines first, 0,2,4...522,524, then all the odd-numbered lines, 1,3,5...521,523, at 1/60 second intervals. The content you are displaying hasn't changed in the least; all you have done is change how you display it. Before, we were displaying the content at 30 frames per second; now we are displaying the same content at 60 fields per second, (if you're trying to visualise this but can't, take your two hands, splay all your fingers apart, then bring your hands together, merging your fingers into an interlaced mesh).
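The even/odd field split described above can be sketched in a few lines of Python (a simplified illustration, treating each scan line as one list element, not any real display driver code):

```python
# Sketch of interlaced field ordering for a 525-line NTSC-style frame.
# A 30 Hz frame is split into two fields shown at 1/60 s intervals:
# even-numbered lines first, then odd-numbered lines.

def split_into_fields(frame_lines):
    """Split one progressive frame into an (even, odd) pair of fields."""
    even_field = frame_lines[0::2]  # lines 0, 2, 4, ...
    odd_field = frame_lines[1::2]   # lines 1, 3, 5, ...
    return even_field, odd_field

def interlace(even_field, odd_field):
    """Weave two fields back into one progressive frame."""
    frame = [None] * (len(even_field) + len(odd_field))
    frame[0::2] = even_field
    frame[1::2] = odd_field
    return frame

lines = list(range(525))              # stand-in for 525 scan lines of pixel data
even, odd = split_into_fields(lines)
assert interlace(even, odd) == lines  # no content is lost, only display order changes
```

The round-trip assertion at the end is the whole point: interlacing never changes the content, only the order and rate at which the lines reach the eye.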
What interlacing does is fool the eye and brain into thinking that 30 frames per second is 60 frames per second, (this reduces flicker on low persistence displays). The problem with interlacing in the past, and the problem J.C. will also have to contend with, is content. Any content that does not span at least two scan lines will flicker at a 30Hz rate. This is because interlacing only reduces flicker in the first place because the eye and brain average and merge information from the two separate 60Hz fields and are fooled into perceiving 60Hz content; content confined to a single scan line appears in only one of the two fields, so it still refreshes at just 30Hz. Anyone who remembers the old analog TV standards will also remember when TV stations started using computer graphics in their news, weather and sports presentations: whenever they used thin horizontal lines, or content with high contrast on horizontal edges, it would flicker noticeably compared to the rest of the content. The eye and brain are fantastic at spotting differences and small rapid changes in the visual field, (part of our fight or flight survival mechanism), so this flicker effect is exacerbated even more.
J.C. then mentions expanding this system even further: instead of having two fields at double the rate, you could expand to, say, twenty fields at twenty times the rate. If we use 50Hz as our base frame rate, (instead of 30Hz), we now have a system that can take 50Hz content and display it to the eye and brain as if it was magically recorded at 1000Hz! But doesn't the content now have to span at least twenty scan lines for the averaging trick to fool your brain? Yes, exactly! This is where variable or dynamic pixel persistence comes in.
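The twenty-field idea is just the two-field split generalised: line i of the frame lands in field (i mod N), and the N fields are scanned out at N times the base frame rate. A hedged sketch, with the 50Hz / twenty-field numbers from above (my interpretation of the scheme, not anything Carmack has specified):

```python
# Generalising interlacing from 2 fields to N fields: line i is shown in
# field (i mod N), and fields are scanned out at N times the base frame
# rate. With a 50 Hz frame rate and N = 20, the display presents 1000
# fields per second while the engine still renders only 50 frames.

def split_into_n_fields(frame_lines, n):
    """Partition a frame's scan lines into n fields by line index mod n."""
    return [frame_lines[k::n] for k in range(n)]

BASE_FRAME_RATE_HZ = 50
N_FIELDS = 20
lines = list(range(1000))            # stand-in for 1000 scan lines
fields = split_into_n_fields(lines, N_FIELDS)

field_rate = BASE_FRAME_RATE_HZ * N_FIELDS        # 1000 fields per second
assert field_rate == 1000
assert sum(len(f) for f in fields) == len(lines)  # every line shown once per frame
```

With N = 2 this reduces exactly to classic interlacing, which is a nice sanity check on the generalisation.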
- Phosphor or pixel persistence is the ability of a display device to emit light at a pixel level that decays over time, (usually in the millisecond range), to help combat flicker when viewed by the eye and brain; it was also a fundamental property of the luminescent coating painted onto the tube of every CRT. Adjusting how long a phosphor/pixel emits light is crucial: too short and your eye and brain perceive flicker, too long and they perceive blur and smear, (relative to the frame rate of your display device). It's a heavily researched and subjective area of display technology that is remarkably difficult to get right for all people at a consumer cost and level.
The DK2 has two modes of persistence, DK1 high persistence mode and DK2 low persistence mode; as anyone who has run 60Hz content in high persistence mode on the DK2 knows, it's bright, oversaturated and a blurry mess. Low persistence mode drives the individual pixels with a much shorter but higher-amplitude pulse; this lets them have an average intensity similar to high persistence mode, (not as bright, but not remarkably different), with a much shorter decay, thus reducing blur and smearing. The problem with low persistence mode is that it exacerbates the flickering issues, as the eye and brain have less total available light to average over per pixel.
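The "shorter but harder" trade-off is simple arithmetic: the time-averaged brightness over one frame is amplitude times duty cycle, so halving the pulse width means doubling the drive. A sketch with made-up numbers (the 75Hz refresh and 2ms pulse are illustrative assumptions, not actual DK2 driver values):

```python
# Illustrative arithmetic for low- vs high-persistence drive: to keep the
# time-averaged brightness roughly constant while shortening the pulse,
# the drive amplitude must rise in proportion, since
#   average = amplitude * (pulse_time / frame_time).

FRAME_TIME_MS = 1000 / 75            # one refresh at an assumed 75 Hz ~= 13.3 ms

def amplitude_for_average(target_average, pulse_ms):
    """Drive amplitude needed to hit a target average over one frame."""
    duty = pulse_ms / FRAME_TIME_MS
    return target_average / duty

high_persistence = amplitude_for_average(100, FRAME_TIME_MS)  # pixel lit all frame
low_persistence = amplitude_for_average(100, 2.0)             # assumed 2 ms pulse

print(high_persistence)  # 100.0 -- full-frame pulse at nominal drive
print(low_persistence)   # ~666.7 -- short pulse driven much harder
```

This also makes the flicker trade-off concrete: the average is the same, but in low persistence mode all that light arrives in a 2ms burst, which is exactly the "less light to average over" problem described above.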
This is where JC wants to control each individual pixel's overall brightness, relative brightness, (dynamic range), and decay. JC also wants to control these parameters to allow fine-grained control over High Dynamic Range, but that is a completely separate issue that deserves its own thread.
By allowing individual control over a pixel's phosphorescence, (decay), you can heuristically adjust the decay of each pixel based on your frame rate, your field rate and the content you are trying to display. This lets you minimise flicker, minimise smearing and blurring, and provide a display that can be driven by a 50 frame per second engine yet is perceived by the eye, brain and vestibular system as a 1000 frame per second display, without having to generate content at 1000 frames per second.
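One possible shape for such a heuristic, purely my speculation on the idea above and not Carmack's actual algorithm: pixels whose content changed since the last frame get a short persistence (to avoid smear), while static pixels get a long persistence (to avoid flicker). All the constants here are invented for illustration:

```python
# Speculative sketch of content-driven per-pixel persistence: moving
# content gets a short decay (minimise smear), static content gets a
# long decay (minimise flicker). Frames are modelled as flat lists of
# 0-255 intensities; the decay times and threshold are assumptions.

SHORT_DECAY_MS = 2.0   # changed pixel: low persistence
LONG_DECAY_MS = 16.0   # static pixel: high persistence
THRESHOLD = 8          # assumed change-detection threshold (0-255 scale)

def per_pixel_decay(prev_frame, curr_frame):
    """Choose a decay time per pixel based on how much it changed."""
    return [
        SHORT_DECAY_MS if abs(curr - prev) > THRESHOLD else LONG_DECAY_MS
        for prev, curr in zip(prev_frame, curr_frame)
    ]

prev = [10, 10, 200, 200]
curr = [10, 90, 200, 100]           # pixels 1 and 3 changed
print(per_pixel_decay(prev, curr))  # [16.0, 2.0, 16.0, 2.0]
```

A real implementation would presumably work on motion vectors or engine-side knowledge rather than raw frame differencing, but the flicker/smear trade per pixel is the core of the idea as I read it.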
P.S. The FD devs need to expose the low and high persistence modes in the ED graphics options, (it's a trivial 1-2 line change to add, apart from the UI changes of course), so that Rift users can experiment with the persistence mode at different vertical sync rates.
P.P.S. Armchair enthusiasts and the peanut gallery are always arguing about whether JC is worthy of being described as a genius. I will leave you with this: he single-handedly proved to hundreds of HW and SW engineers at Samsung that their product was capable of doing something they said it was not, (when he reduced tracking latency from 20ms to 6ms on the Gear VR by directly accessing the front frame buffer). I won't even mention his past contributions and accomplishments to gaming, graphics and the advancement of GPU technology.