Even in Gamma, performance in the Rift is ridiculously slow; play at your own risk

Your final point about performance...... IF the CV1 is 1440p and comes out this time next year, then sure, if you want full bubble you will be looking at a very high-end PC. But the thing is, that is the beauty of PCs. Moore's law sadly no longer holds, but even so, hardware is still advancing reasonably rapidly, and prices are coming down.

What is expensive top-tier in December 2015 will be a mid-range gaming PC in December 2016, entry level in 2017, and probably around mobile phone performance not long after that.

We do not know that the resolution of the CV1 will be 1440p. That has been a guess. Heck, we don't even know what resolution the Crescent Bay prototype is using. People who have tried it said it was a major improvement, but Oculus has not released all the details. What we do know is that it runs at 90 Hz, so we will need 90 fps for each eye. In practice, that means we need roughly the computing power it would take to produce 180 FPS on a single monitor. These are certainly monstrous demands, and I suspect SLI is the only way forward once the GPU manufacturers implement VR-oriented SFR (Split Frame Rendering) for SLI. I think 1440p is likely, but I am hoping for even more. They don't need to follow the strict pixel format regimens used for monitors. They can (and will), with the aid of Samsung, develop their own proprietary OLED display. They can decide the number of pixels, the aspect ratio and so on, though it's certainly beneficial if the pixel counts are within a specific range and divisible by certain numbers.
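To put a rough number on that, here's a back-of-envelope fill-rate comparison. The resolutions here are my own assumptions (a 2560x1440 panel split per eye), not confirmed CV1 specs:

Code:
#include <cstdio>

// Back-of-envelope sketch: raw pixel throughput of an assumed 2560x1440,
// 90 Hz HMD (1280x1440 per eye) versus a single 1080p monitor at 60 Hz.
int main()
{
    long long hmd_pixels_per_sec = 2LL * 1280 * 1440 * 90; // both eyes at 90 Hz
    long long mon_pixels_per_sec = 1920LL * 1080 * 60;     // 1080p at 60 Hz
    printf("HMD needs %.1fx the fill rate of a 1080p/60 monitor\n",
           (double)hmd_pixels_per_sec / (double)mon_pixels_per_sec);
    return 0;
}

That works out to roughly 2.7x, before you even count the per-eye draw-call and distortion overhead, which is exactly where SFR-style SLI could help.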

Furthermore, John Carmack went into great detail about wanting to push Samsung's hardware division to work on a brand new form of refresh. We are currently using the ancient VBLANK refresh system, a vestige of the CRT era, when monitors refreshed the image one scan line at a time from top to bottom. What Carmack has proposed is a dynamic refresh rate. Not dynamic in the sense that G-Sync / FreeSync does it, but dynamic in the sense that some pixel lines may not need to turn on at all. He says he is confident that his idea of a dynamically interlaced 60 Hz refresh would look just as good as a full 90 Hz refresh. For practical purposes, that would mean we would need no more than 60 FPS to get the best experience. He also says this would require some changes to the rendering process, but I'm sure Quietman knows 100x more about this than me. I don't really understand it, but what I think he means is dynamic rendering (only rendering the pixels in whatever part of a frame changes in the scene), plus a dynamic refresh that tries to follow the newly rendered pixels. I hope Quietman can elucidate this though.

Carmack says that they have had some trouble trying to convince Samsung's hardware division of this. He does say that they have a completely different relationship with Samsung's software division. So, hopefully Carmack will be able to "lobby" Samsung into giving his idea a shot.

Here is the talk. The topic in question runs from 06:00 to 21:00.
WARNING, THIS VIDEO REQUIRES CONCENTRATION. WATCH IT INTENTLY OR NOT AT ALL :)

[video=youtube;gn8m5d74fk8]https://www.youtube.com/watch?v=gn8m5d74fk8#t=362[/video]

We also do not know whether the CV1 will be released in November-December next year. That is simply speculation. The release date of the CV1 could be anywhere from May to September for all we know.
 
Sure, I agree...... I was trying to be conservative however. I believe I saw an interview with Palmer Luckey in which he stated that the resolution of the CV1 WOULD be higher than 1080p, and that they will have failed if they do not get it out before the end of 2015.

Ergo I think a safe bet to aim your expectations at would be 1440p and the end of next year. Sorry if I came across as stating a fact though.

edit... looking back, I did put IF in capitals ;)

edit again
To the guy who repped me asking about my performance: I actually run at 72 Hz; on my low-end rig it works better, as it's a nice compromise between being blurred and being jerky. I have not benchmarked it, but once everything has loaded in and the frame rate has settled (which takes a few seconds), the frame rate is nice on my rig, WAY better than beta was for me.

Either way, my card is hopefully out and retired this time tomorrow.
 
Sounds like low persistence is not kicking in. There are two low persistence modes, one for 72 FPS and the other for 75 FPS. If you drop below these frame rates you get smearing like on the DK1 (which is different from the black-to-purple smearing that the DK2 suffers from in certain situations). Your CPU at 2.0 GHz is possibly the culprit, but see if you can measure your frame rate using Afterburner or similar.

Well, I did some experiments with my old Q6600 overclocked to 3.0 GHz and with the Xeon machine, swapping the GTX 980 between them. In the Rift, the Q6600 even at 3.0 GHz could only muster 24 fps in the station, yet it was barely behind the Xeon in 3DMark tests (it did horribly on anything CPU or physics related though; the Xeon just smoked those). Out of the Rift, the Q6600 smokes along fully maxed, even with my old GTX 580s in SLI.

Back on the Xeon, even at 2.0 GHz it gets a perfect 75 fps in the station on full high with blur disabled at 1920x1080, so I'm impressed! There's only the rare dip to 74/73 fps, and once out of the station it's rock solid at 75 fps. The strange blur is still there though, and I'd swear it looks like a "feature" that needs to be disabled.

I'm going to try to set up a spare card to act as primary so I can run the GTX 980 dedicated to the Rift, and see if it will let me turn off VSync and see what happens (it won't let me currently; the option is in red even in extended mode).


EDIT: used Nvidia Control Panel to force-disable VSync. Now I get 133 fps out of the station and 89 to 95 in the station. The slightly annoying blur when moving my head in any direction is still there, but it's definitely playable. Can't tell if disabling VSync helped much, but at least it doesn't dip to 74/73 fps now lol. Christ, I'd love to see a loaded 4.0 GHz monster system with a GTX 980, it would be crazy. I also get 400 fps at the menu screen lol.

Cheers
Tim

P.S. It's going to take a while to save up for a loaded X99 system, so it looks like the Xeon is it for a while.
 
Furthermore, John Carmack went into great detail about wanting to push Samsung's hardware division to work on a brand new form of refresh. We are currently using the ancient VBLANK refresh system, a vestige of the CRT era, when monitors refreshed the image one scan line at a time from top to bottom. What Carmack has proposed is a dynamic refresh rate. Not dynamic in the sense that G-Sync / FreeSync does it, but dynamic in the sense that some pixel lines may not need to turn on at all. He says he is confident that his idea of a dynamically interlaced 60 Hz refresh would look just as good as a full 90 Hz refresh. For practical purposes, that would mean we would need no more than 60 FPS to get the best experience. He also says this would require some changes to the rendering process, but I'm sure Quietman knows 100x more about this than me. I don't really understand it, but what I think he means is dynamic rendering (only rendering the pixels in whatever part of a frame changes in the scene), plus a dynamic refresh that tries to follow the newly rendered pixels.

I have watched this video before, but I watched it again and this time with extreme INTENT :)

I'll try to keep my explanation of my understanding of JC's interlaced display and dynamic pixel persistence as simple as possible, (this is tough for me to do though ;P); but first I think we need to better understand exactly what all these terms mean.

- Interlaced displays, as he stated, were invented long ago to fool the eye/brain into seeing/thinking something that really isn't there. The problem with old displays was that, due to hardware and bandwidth limitations, content was limited to being updated at either 25 or 30 frames per second, (e.g. PAL/NTSC); you either update at 25/30 FPS with a low persistence phosphor and suffer a strobing/flickery mess, or you update at 25/30 FPS with a high persistence phosphor, (like a CRO, Cathode Ray Oscilloscope), and suffer blurry/smeary goo.

There is a third option, interlacing: instead of sequentially traversing the scan lines from 0-524 in order, with the full set of lines displayed at 1/30 second intervals, you display all the even numbered lines first, 0,2,4...522,524, then display the odd numbered lines second, 1,3,5...521,523, at 1/60 second intervals. The content you are displaying hasn't changed in the least; all you have done is change how you display it. Before, we were displaying the content at 30 frames per second; now we are displaying the same content at 60 fields per second, (if you're trying to visualise this but can't, take your two hands, splay all your fingers apart, then bring your hands together, merging your fingers into an interlaced mesh).
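To make the scan-out order concrete, here's a minimal sketch in C++; display_line() is a stand-in of my own for the beam lighting up one line, since real hardware obviously isn't driven through a function call:

Code:
#include <cstdio>

// Hypothetical stand-in for the beam/driver lighting up one scan line.
static void display_line(int line) { printf("lighting scan line %d\n", line); }

// One full interlaced frame: 30 Hz content presented as two 60 Hz fields.
void scan_out_interlaced(int num_lines /* e.g. 525 */)
{
    for (int line = 0; line < num_lines; line += 2)  // even field, 1/60 s
        display_line(line);
    for (int line = 1; line < num_lines; line += 2)  // odd field, next 1/60 s
        display_line(line);
}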

What interlacing does is fool the eye and brain into thinking that 30 frames per second is 60 frames per second, (this reduces flicker on low persistence displays). The problem with interlacing in the past, and the problem J.C. will also have to contend with, is content. Any content that does not span at least two scan lines will flicker at a 30 Hz rate. This is because interlacing reduces flicker in the first place by letting the eye and brain average and merge information from the two separate 60 Hz fields, fooling you into thinking it is 60 Hz content. Anyone who remembers the old analog TV standards will also remember when TV stations started using computer graphics in their news, weather and sports presentations: whenever they used thin horizontal lines, or content with high contrast on horizontal edges, it would flicker noticeably compared to the rest of the picture. The eye and brain are fantastic at spotting differences and small rapid changes in the visual field, (part of our fight or flight survival mechanism), so this flicker effect is exacerbated even more.

J.C. then mentions expanding this system even further, so that instead of having two fields at double the rate, you could expand to, say, twenty fields at twenty times the rate. If we use 50 Hz as our base frame rate, (instead of 30 Hz), we now have a system that is able to take 50 Hz content and display it to the eye and brain as if it were magically recorded at 1000 Hz! But doesn't the content now have to span at least twenty scan lines to use the averaging trick to fool your brain? Yes, exactly! This is where variable or dynamic pixel persistence comes in.
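Generalising the two-field sketch above, an N-way interlace just strides through the lines N at a time (again my own sketch, not anything from the talk):

Code:
// N-way interlace sketch: field f lights lines f, f+N, f+2N, ...
// With num_fields = 20 and 50 Hz content, the panel scans out
// 50 * 20 = 1000 fields per second from unchanged 50 Hz frames.
void scan_out_fields(int num_lines, int num_fields,
                     void (*display_line)(int))
{
    for (int field = 0; field < num_fields; ++field)
        for (int line = field; line < num_lines; line += num_fields)
            display_line(line);
}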

- Phosphor or pixel persistence is the ability of a display device to emit light, at a pixel level, that decays over time, (usually in the millisecond range), to help combat flicker when viewed by the eye and brain; it was also a fundamental property of the luminescent coating painted onto the tube of every CRT. Adjusting how long a phosphor/pixel emits light is crucial: too little and your eye and brain perceive flicker, too much and they perceive blur and smear, (relative to the frame rate of your display device). It's a heavily researched and subjective area of display technology that is remarkably difficult to get right for all people at a consumer cost and level.

The DK2 has two modes of persistence, DK1 high persistence mode and DK2 low persistence mode. As anyone who has run 60 Hz content in high persistence mode on the DK2 knows, it's bright, oversaturated and a blurry mess. Low persistence mode drives the individual pixels with a much shorter but larger pulse; this allows them to have an average intensity that is similar to high persistence mode, (not as bright, but not remarkably different), with a much shorter decay, thus reducing blur and smearing. The problem with low persistence mode is that it exacerbates the flickering issues, as the eye and brain have less total available light to average over per pixel.
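The trade-off is just duty cycle: average brightness is roughly peak brightness times the fraction of the frame the pixel spends lit. The pulse width below is an assumed figure of mine, not a measured DK2 number:

Code:
#include <cstdio>

// Duty-cycle arithmetic for low persistence at 75 Hz (assumed pulse width).
int main()
{
    double frame_ms = 1000.0 / 75.0; // ~13.3 ms per frame at 75 Hz
    double pulse_ms = 2.0;           // assumed low-persistence lit time
    printf("peak drive must be %.1fx brighter to match full persistence\n",
           frame_ms / pulse_ms);     // ~6.7x with these numbers
    return 0;
}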

This is where JC wants to control each individual pixel's overall brightness, relative brightness, (dynamic range), and decay. JC also wants to control these parameters to allow fine-grained control over High Dynamic Range, but that is a completely separate issue that requires its own thread :)

By allowing individual control over a pixel's phosphorescence, (decay), you can heuristically adjust the decay of each pixel based on your frame rate, your field rate and the content you are trying to display. This lets you minimise flicker, minimise smearing and blurring, and provide a display that can be driven by a 50 frame per second engine yet is perceived by the eye, brain and vestibular system as a 1000 frame per second display, without having to generate content at 1000 frames per second.
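Purely as a sketch of how such a heuristic might look (no shipping panel exposes a per-pixel decay knob, and the ratios below are invented just to show the shape of the idea):

Code:
struct PixelDrive { float value; float decay_ms; };

// Speculative per-pixel persistence heuristic: pixels that changed since the
// last field get a short decay (no smear), static pixels hold their light
// longer (no flicker). field_ms is the time between successive fields.
void plan_persistence(PixelDrive *out, const float *prev, const float *curr,
                      int count, float field_ms)
{
    for (int i = 0; i < count; ++i) {
        bool changed = prev[i] != curr[i];
        out[i].value    = curr[i];
        out[i].decay_ms = changed ? field_ms * 0.25f  // invented ratios
                                  : field_ms * 2.0f;
    }
}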

P.S. The FD devs need to expose the low and high persistence modes in the ED graphics options, (it's a trivial 1-2 line change to add, apart from the UI changes of course), so that Rift users can experiment with the persistence mode at different vertical sync rates.

P.P.S. Armchair enthusiasts and the peanut gallery are always arguing about whether JC is worthy of being described as a genius; I will leave you with this: he was able to single-handedly prove to hundreds of HW and SW engineers at Samsung that their product was capable of doing something they said it was not, (when he reduced tracking latency from 20 ms to 6 ms on the Gear VR by directly accessing the front frame buffer). I won't even mention his past contributions and accomplishments in gaming, graphics and the advancement of GPU technology.
 
Quietman, did you read my penultimate post about procedurally generating planet biomes? #37

P.S. The FD devs need to expose the low and high persistence modes in the ED graphics options, (it's a trivial 1-2 line change to add, apart from the UI changes of course), so that Rift users can experiment with the persistence mode at different vertical sync rates.

Interesting.. that easy?

P.P.S. Armchair enthusiasts and the peanut gallery are always arguing about whether JC is worthy of being described as a genius; I will leave you with this: he was able to single-handedly prove to hundreds of HW and SW engineers at Samsung that their product was capable of doing something they said it was not, (when he reduced tracking latency from 20 ms to 6 ms on the Gear VR by directly accessing the front frame buffer). I won't even mention his past contributions and accomplishments in gaming, graphics and the advancement of GPU technology.

You won't hear any argument from me. ;) I definitely think "genius" fits Carmack.
 
I have an idea.. instead of "solving" VR, how about we solve world hunger, or green energy, or interstellar travel? Just saying... :p
 
I have an idea.. instead of "solving" VR, how about we solve world hunger, or green energy, or interstellar travel? Just saying... :p

It's not "instead." It's "also." Theres little point for a programmer or artist to try themselves at figuring out plasma containment of a fusion reactor. We have to do what we can, but there's room for everything according to our specialty.
It's off topic regardless.
VR can be useful in treating various phobias, in training for welding, crane operation and ROV piloting, in education, etc.
 
Have you considered how environmentally friendly VR will be once it gets good enough for everyone to be able to enjoy destructive, environmentally unfriendly hobbies in VR instead of doing them in the real world?
 
Have you considered how environmentally friendly VR will be once it gets good enough for everyone to be able to enjoy destructive, environmentally unfriendly hobbies in VR instead of doing them in the real world?

This is a good point - how many air and road miles are spent just to bring a group of people together into the same room for a meeting? Governments should be pumping money into this kind of technology instead of building yet more roads and new airports....
 
This is a good point - how many air and road miles are spent just to bring a group of people together into the same room for a meeting? Governments should be pumping money into this kind of technology instead of building yet more roads and new airports....

One thing you realise when you try VR is just how tenuous our grip on reality actually is. How strange it is to go into the galaxy map's system view and zoom in towards the star until it's just touching your face. I swear I "feel" something on my face in a weird way, even though there's nothing physically there.
 
On top of that, the hardest part of procedural generation would be to generate wildlife that we have never seen before. The challenge is to generate lifeforms that have a tenable centre of gravity with regard to their limb structures and locomotion, as well as to animate them procedurally. But these problems are not unheard of. Procedural animation has already been done, and is used more than people think; much of GTA V's animation is actually generated semi-procedurally (non-linear animation wise).

Absolutely agreed; this is one of the major barriers to 100% procedural content generation: generating accurate life and life cycles, and then animating it all realistically. Although motion planning with inverse kinematics, developed initially in the robotics field, sure has come a long way in the animation department.

I'll never forget the first time I played GTA IV; the city residents' reactions and animations in response to external events were the first, (and really only), feature that struck me as truly impressive. Euphoria is an impressive piece of kit and really shows just how far procedural animation has come in the last few years.

Quietman, did you read my penultimate post about procedurally generating planet biomes? #37

Somehow I missed it, but I've obviously read it now :). I have yet to procedurally simulate biomes, although I have done a lot of procedural work with fractals, FBMs, CSG, ray marching, etc. Your description of simulating complex output by nesting simple procedurals sounds both logical and extremely doable; if I ever get around to coding up a planet generator, expect some private messages ;P
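FBM is actually the textbook case of that nesting idea: one simple noise function layered over itself a few times. A minimal sketch, assuming some smooth value_noise() primitive you already have lying around:

Code:
float value_noise(float x, float y);  // assumed smooth noise in [0,1]

// Fractal Brownian motion: octaves of one simple function summed with
// doubling frequency and halving amplitude; complexity from pure nesting.
float fbm(float x, float y, int octaves)
{
    float sum = 0.0f, amplitude = 0.5f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum       += amplitude * value_noise(x * frequency, y * frequency);
        frequency *= 2.0f;
        amplitude *= 0.5f;
    }
    return sum;
}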

Interesting.. that easy?

Code:
unsigned int caps = ovrHmd_GetEnabledCaps(hmd);               // read the current HMD capability bits
ovrHmd_SetEnabledCaps(hmd, caps | ovrHmdCap_LowPersistence);  // OR in the low-persistence flag
 