Nvidia's Pascal - Looks like the CV1 might be feasible after all

With claims that the Pascal tech is 10x more powerful than the Titan X, and coming out next year, I'm now much more optimistic about the CV1 resolutions being feasible. Perhaps even 4k resolutions are going to be possible in the next 2 years.

Go VR!
 
I'll hold out for that VOLTA...

"Nvidia's next-next GPU chip, Volta, is slated for 2018. Reports it will be able to "power a small moon" "
 
Haha. Pascal for the CV1, Volta for when Oculus attempt 8k. Then we will have the same resolution as sitting in front of a 1080p screen, but in the Rift at 120° FOV.
 
Honestly though, I'm happy if the screen door is gone and a single 980 can do CV1 @ 1440p.

I think 1440p will be more than enough for the next year or so... 4K in VR would literally need Pascal.
 
The higher refresh rate is what has me spooked. I wonder why they thought 75 FPS wasn't enough? Seems perfectly fluid to me. 90 seems overkill. 1440p @ 90 FPS might take a Titan to drive.
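
(A rough back-of-envelope sketch with my own numbers rather than anything official: assuming a DK2-class 1920x1080 @ 75Hz panel as the baseline, 1440p @ 90 FPS is a bit over twice the raw pixels per second.)

```python
# Raw pixel throughput comparison. Illustrative only: real GPU load
# scales with shading cost, not just raw pixel count.
def pixels_per_second(width: int, height: int, fps: int) -> int:
    return width * height * fps

baseline = pixels_per_second(1920, 1080, 75)  # ~156 Mpx/s (DK2-class panel)
target = pixels_per_second(2560, 1440, 90)    # ~332 Mpx/s
print(f"{target / baseline:.2f}x the raw pixel throughput")  # ~2.13x
```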
 
Agree, SDE is the main problem IMO, and without it AA will be able to work efficiently, which isn't the case while SDE is present. So 1440 or even 1080 will make me happy (with a single 970, the one I have). And I will add a second one in SLI if needed...
 
Palmer Luckey said basically the 980 is fine for the CV1; consider that's what they are using now for development and show demos.
That CV1 demo with the Hobbit's Smaug was a single 980, IIRC.
So that's why I bought a 980, and when the prices go down significantly I'll get another.
 
The higher refresh rate is what has me spooked. I wonder why they thought 75 FPS wasn't enough? Seems perfectly fluid to me. 90 seems overkill. 1440p @ 90 FPS might take a Titan to drive.

The answer is somewhat complex. When you increase spatial resolution, you need to increase temporal resolution accordingly, or you lose detail during movement: you get a lot of smearing and horizontal judder. This causes nausea; e.g. early versions of Super Hi-Vision at 7680x4320 at 60fps were fine for locked-off shots, but complete hell if the camera was moving.

The way it's quantified in the lab is by referring to dynamic resolution, derived from the spatial (x/y) resolution and the temporal resolution (framerate); it's quite possible for dynamic resolution to be lower in a higher spatial resolution setup. This is why you want to avoid 4kp25 like the plague :) Essentially the problem, if you consider a camera move, is that within a given time period you move N pixels, effectively jumping or smearing that many pixels. If the spatial resolution increases, you jump or smear more pixels per frame for the same "arc" proportion of the screen moved. Conversely, if you increase the framerate, there are more pixels of detail in that same "arc" of movement... if you see what I mean? It works the other way around too: if you drop the spatial resolution, the movement looks more natural at the same framerate.
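
To put rough numbers on that (a minimal sketch of my own, not taken from the research mentioned below): for a pan that sweeps the full screen width in one second, the per-frame jump works out like this.

```python
# Pixels a full-width, one-second pan jumps between consecutive frames.
# Bigger jumps per frame mean more smearing/judder for the same "arc".
def pixels_per_frame(width_px: int, fps: float, sweep_seconds: float = 1.0) -> float:
    return width_px / (fps * sweep_seconds)

for width, fps in [(1920, 60), (3840, 60), (3840, 120), (3840, 25)]:
    print(f"{width}px @ {fps}fps: {pixels_per_frame(width, fps):5.1f} px/frame jump")
```

Note how 3840 @ 25 jumps the most pixels per frame despite having the highest spatial resolution, which is exactly why 4kp25 is to be avoided.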

This isn't guesswork, and has been proven experimentally through separate research by the BBC's labs, and Philips in the Netherlands. I believe NHK have done similar work, and are now cranking up their minimum framerates for their 8kish stuff to 120fps.

It's a shame I can't link you to examples, but sadly you can't film this stuff; you really need decent high-speed displays and high-speed footage downsampled in different ways to illustrate it clearly. The effect is very striking, though (I saw a demo some years ago with a train set shot at 4Kp600 that was simply awesome, even though it was just running at 1080p200). In a nutshell, the further out of whack the framerate was, the less clearly small details could be seen.

This stuff is vital. A scene perceptually losing detail during a camera move and then regaining it when it stops directly triggers a close analogue to the seasickness response. A major UK broadcaster discovered this through another route when they started broadcasting football in HD. They had set their coder up to be very sharp, so when the camera panned, the picture went quite noticeably softer (I think this was with MPEG2, so I guess it'd be macroblock artifacts). This consistently made football viewers sicker than excessive consumption of terrible lager would suggest. Once the broadcaster actually worked it out, they set their coder up to be slightly softer, and the problem went away!

So yes, framerate is really bloody important for maintaining good dynamic resolution, and thus detail- and avoiding giving the user a nasty case of the whities. If you're dealing with a VR headset that takes your entire vision over, it's even more important than with just a television. There are few things more vital to making the experience good than framerate. If they increase the spatial resolution of the display without also boosting the framerate, the dynamic resolution will be worse, and it will be an inferior display in many ways, and more likely to cause nausea too. Oculus aren't just being showoffs, they understand the underlying science.

Note: the above is a very slapdash summary, so if you're familiar with how all this works, the underlying maths, and indeed the filter functions of the eye as regards video, don't yell at me. I was trying to pitch it at a suitable level for a nerdy lay audience :D
 
Yikes!! Actually - it all makes sense! As I think about it: somehow I'm thinking the display resolution shouldn't necessarily imply increased spatial resolution. I suppose it all depends on the rendering engine. I'm not fully familiar with the tech, but seems like if most of the rendering is vector based, and each frame is a slice of the viewport in time, then resolution shouldn't affect the fluidity of the scene as long as you can crank out pixels fast enough for a steady FPS. Sure, more FPS is always better, but intuitively it seems that 75 should be enough. Fascinating.

Anyway - it's mostly a curiosity for me. Thanks for the in-depth info!
 
Yikes!! Actually - it all makes sense! As I think about it: somehow I'm thinking the display resolution shouldn't necessarily imply increased spatial resolution.

I should clarify: "spatial resolution" is a term which means display resolution in the x/y plane (this monitor is a very conservative 1920x1200, for example); it's "normal" resolution, as opposed to the temporal resolution (framerate).

It makes no difference how the video is rendered; it's just about the pixels of the display's native resolution, and how often it redraws. It's all just pixels to the display controller/FRC. Imagine my display, 1920x1200/60fps, showing some moving video which is (for simplicity) moving left to right at 60 pixels per second. That's fine: every frame shows as much detail as it needs to. Now double the resolution (in both axes, if you wish, even though we're keeping it simple), showing the same scene at a higher resolution. The movement would be in the same "arc", as a proportion of the screen, but in the higher resolution, the same "arc" would have travelled 120 pixels in 60 frames, so the temporal representation of the change is less accurate. Essentially, to display the signal to the same level of fidelity, we'd want to double the frame rate too.*
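
As a quick numeric restatement of that example (same assumptions as the paragraph above):

```python
# The same "arc" of motion, expressed in pixels traversed per frame.
def px_per_frame(speed_px_per_sec: float, fps: float) -> float:
    return speed_px_per_sec / fps

print(px_per_frame(60, 60))    # 1920-wide panel @ 60fps: 1.0 px/frame
print(px_per_frame(120, 60))   # doubled resolution, same fps: 2.0 px/frame
print(px_per_frame(120, 120))  # doubled fps as well: back to 1.0 px/frame
```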

Anything moving too fast to be clearly represented at the current framerate is subject to "temporal aliasing" of a kind. Obviously, if it's a minor element of the picture, that's fine, but if it's the whole screen, most of the time, you're going to have a problem.

I wish I could show you the effect of this easily, but I don't have a bunch of specimen video to hand. It's really noticeable, though; really obvious in an eye-popping way once you play with the parameters.

Anyway, to reiterate, it doesn't matter what the video signal is. With moving video rather than still images, the framerate becomes another aspect of the resolution, and if you gimp the framerate, bad things happen. Sporadically losing detail is bad, not only because it looks horrid, but because it makes people feel sick.

*The relationship may not be quite that linear, I forget, but you get the basic idea.
 
Ancipital > Thanks for this detailed explanation, it makes things very clear. I was also wondering why they needed to improve on the 75Hz framerate, which currently seems fine to me; now I understand.
 
Sadly there is no way that gaming performance will increase by 10x in a generation until we have a replacement for silicon.

As Hexus put it:

However it is noted that "this is just a very rough, high level estimate." It applies to at least one use case highlighted - 'Deep Learning' tasks. That sounds interesting but it feels like Nvidia really had to scrape around for use cases to come up with that headline figure which makes one wonder how much of an uplift it will represent to PC gamers, for example.

Nvidia's biggest problem with VR is latency, and that's unlikely to be fixed for the next 4-5 years. Their current generation lacks basic features for doing VR properly, e.g. asynchronous compute engines. Hopefully Pascal will have something similar, but I wouldn't bet on it.

They'll have some way of hacking it to make it acceptable in the meantime, but don't forget the VR Direct stuff was supposedly "coming soon" 6 months ago. When Nvidia has nothing in the present they are really, really good at making people look to the future instead. There is also the not-insignificant matter of the 390X looming large on the horizon, and that does have all the stuff required for doing proper VR. Keep an eye on Computex in a couple of months. ;)
 
Thanks for these explanations guys. I read about that BBC research a while back. The way I had it explained was that our eyes can't see things that are too fast, and our visual cortex creates its own form of motion blur to compensate. With these extremely fast broadcasts at very high frequencies, we are being given images of things moving faster than our brains can normally process, but without the motion blur, and this was leading to the whities. They had to introduce artificial blurring and motion blur to reduce the effect.

Fascinating!

I'm still worried that we are many years off the right kind of VR, and people are getting too excited. When I try to explain this to someone who isn't following, or hasn't tried it, they get angry at me. Like I'm wrong and pessimistic and I am the cause of their bubbles bursting. And I can see their point of view. The VR buffs and companies and journalists and the general hype are all presenting this picture as if it's going to happen any day now and it's going to be cheap and easy and we are all going to enter the world of perfect VR. What a let down this is going to create. By the time Volta is out (and cheap!), I think these promises may be realised. Before then, we've got a long way to go, both in terms of tech and cost.
 
Thanks for these explanations guys. I read about that BBC research a while back. The way I had it explained was that our eyes can't see things that are too fast, and our visual cortex creates its own form of motion blur to compensate. With these extremely fast broadcasts at very high frequencies, we are being given images of things moving faster than our brains can normally process, but without the motion blur, and this was leading to the whities. They had to introduce artificial blurring and motion blur to reduce the effect.

That sounds like the opposite of what the conclusions were, insofar as it makes sense. Sorry if that sounds snarky; I tried to rephrase it without success :) It seems like an incomplete conflation of John Drewery's stuff on the filter function of the human vision system with the high framerate/dynamic resolution bits.

Note: Pinch of salt required. Nothing I say is authoritative and may be total crap. You may point and laugh if so. I have had rather too much strong coffee and feel rather strange- I may have forgotten how to brain, though I don't think so.
 
Also, Pascal will not be 10x faster in gaming workloads. In optimal cases that use the new uarch to its maximum potential (most likely compute) it will reach 10x, and then on another $1k+ card that is more unlocked in compute than the gaming cards are.
 