The Vive Discussion Thread

I think some of the new driver announcements next week will shed some light on how we can increase FPS on lower-end rigs.
Exciting times! I just want to fly around asteroids without the stutter (and that's with 2x 980s!)
 
Wow, SLI 770s here, on ultra, jitter-free.
 
I was surprised too... I tried turning off DSR and even set graphics to low but the effect was exactly the same. I'm guessing it's either network, server or optimisation related. Some things in this game still aren't VR friendly... Another good reason for consumer VR to hurry up! Frontier would be mad not to cash in on making this a AAA VR experience.
 
Maybe, big maybe, they use different hardware inside that doesn't need a crazy expensive PC to run it? Just a guess, but I bet Valve and HTC have thought about that.
You must not know how GPUs work. Resolution has a huge impact on performance, as does needing to render every frame twice for stereoscopic displays.

This is roughly 4 times the hardware requirement of driving a single 1920x1080 display, and no amount of black magic can change that. There is nothing you can put "inside" other than a big GPU that can reduce the graphics requirements of that resolution.

You can't streamline that any further without upscaling, and that universally looks terrible. They may be able to upscale and blur the image slightly, but nobody's gonna go for that.

It looks to me like this is not intended as anything more than an excellent display setup for VR movies, with gaming being an option for folks with VERY high end rigs.
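A quick back-of-the-envelope sketch of that claim (the panel figures below — 2160x1200 across both eyes at 90 Hz — are assumed from the rumours, not confirmed specs): raw pixel throughput alone is roughly double a 1080p/60 monitor, and the per-eye geometry passes plus the oversized render target needed for lens-distortion correction are what push the effective cost toward the "roughly 4 times" figure.

```python
# Back-of-the-envelope pixel throughput comparison. The VR panel specs
# (2160x1200 combined, 90 Hz) are assumptions for illustration only.

def pixels_per_second(width, height, hz):
    """Raw pixels a GPU must shade per second for a given display."""
    return width * height * hz

desktop = pixels_per_second(1920, 1080, 60)   # a single 1080p monitor
vr      = pixels_per_second(2160, 1200, 90)   # both eyes combined

print(f"1080p@60: {desktop / 1e6:.0f} Mpix/s")
print(f"VR@90:    {vr / 1e6:.0f} Mpix/s")
print(f"ratio:    {vr / desktop:.1f}x")
```

Note this is pixel fill alone; rendering the scene geometry once per eye, plus supersampling the render target before the distortion pass, multiplies the real-world cost further.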
 
No need to be rude and assuming.

There are hardware solutions that negate the need to render two separate images, along with alternate lenses and aspect ratios that don't require "warping" the image in a post-process step, thereby greatly reducing the GPU power needed to render stereoscopic images like the OR uses. So, thanks, but your expertise is lacking.

Also, assuming companies wouldn't have considered the current high cost of smoothly running an OR, and its impact on potential sales, is rather naive. And assuming this HMD will use the same tech as the OR, although it may, is a bit presumptuous and could well turn out to be incorrect.

Thanks for posting though :D
 
If half the rumours about NVIDIA's VR drivers and DirectX 12 are true... even lower-end rigs could end up with up to 100% more capability. The next few months may well be decisive.
 
This is patently incorrect.

DX12 will not increase your total graphical performance. If you have a 980, and onboard video, you might see a 10% increase.

If you have SLI 980s and onboard video you might see a 5% increase.

If you happen to already have a pair of totally mismatched GPUs in your system for driving a bunch of extra displays, then and only then will you see a significant increase in performance due to your multiple cards.

It will not, however, turn a single 980 into the performance of two. At best you will see higher efficiency leading to a small bump in performance when you have two cards in SLI but aren't adding a third mismatched one, due to the way it handles VRAM.

- - - - - Additional Content Posted / Auto Merge - - - - -

There is no possible way to render two distinct viewpoints without rendering two distinct viewpoints. Stereoscopy requires two "cameras" in game. It has to render all of the geometry two times. You can't render it once and then just move the camera for the other side without re-rendering the second viewpoint.

The output can be a single frame, just like the DK2 is now, but you still have to render the left and right halves of the display separately.
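The two-cameras point can be sketched in a few lines. This is a toy illustration, not any engine's real API: the `render()` stub and the IPD value are assumptions, but the structure is the point — the scene is traversed once per eye.

```python
# Minimal sketch of why stereo needs two renders: each eye gets its own
# viewpoint, offset sideways by half the interpupillary distance (IPD).
# The IPD value and the render() stub are illustrative assumptions.

IPD = 0.064  # metres, a typical adult interpupillary distance

def eye_view(camera_pos, half_ipd_offset):
    """Translate the camera sideways to get one eye's viewpoint."""
    x, y, z = camera_pos
    return (x + half_ipd_offset, y, z)

def render(scene, viewpoint):
    # Stand-in for a full rasterisation pass over the scene geometry.
    return f"{scene} rendered from {viewpoint}"

def render_stereo(scene, camera_pos):
    # The scene is processed once per eye. The left eye's image cannot
    # be reused for the right eye, because nearby objects project to
    # different screen positions in each viewpoint.
    left  = render(scene, eye_view(camera_pos, -IPD / 2))
    right = render(scene, eye_view(camera_pos, +IPD / 2))
    return left, right
```

The two halves can then be packed into one output frame, as the DK2 does, but the geometry work has already been paid for twice by that point.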
 
Yes, yes there is. You're not aware of them but, thankfully, that doesn't mean they don't exist. You need to research this a lot more before you attack posters with incorrect information, Google is your friend.

The most fundamental, basic, 19th-century STEREOSCOPIC example I could find for you: the images are taken with a single camera, at a single "viewpoint", and magically there's a stereoscopic image presented to the viewer. Not the best example of how it can be done with modern hardware, but I think a good place for you to start researching it. (Image: Holmes_stereoscope.jpg)
 
Or you could just post them, and stop assuming this is a personal attack.


You still have to render both viewpoints. Stereoscopic display requires two viewpoints. Both of which require their own geometry to be rendered. You are the one refuting my claim, so the burden of proof lies with you.

Feel free to tell the class how one can render two viewpoints without rendering geometry twice.

I'm sure with a billion-dollar budget, OR would have been able to do a simple Google search of their own and perhaps find such apparently easily available technology.
 
So why don't you tell us what this magic trick is? Just post a link... ;)
 
I mean, he clearly knows all about the subject. God forbid he actually prove it rather than just being all "Nope. You're wrong. *drops mic*."
I think he is just some kid that wants to believe :D
See updated post. That's the most basic example I could find for you guys. No magic cameras or "viewpoints" needed to create stereoscopic images. The same stereoscopic technique can be used, post-GPU, by hardware that duplicates the image and offsets the copies.
 
Still wrong: a stereoscope is a device for viewing a stereoscopic pair of separate images, depicting left-eye and right-eye views of the same scene, as a single three-dimensional image.

http://en.wikipedia.org/wiki/Stereoscope

You just don't get it, sorry.

- - - - - Additional Content Posted / Auto Merge - - - - -

It just won't work for 3D games; it has to render the scene twice, and that takes CPU and GPU power :)
 
Bruh.

What you posted still requires two distinct images. A single camera could take two images using two lenses focused on each half of the film.

Likewise, a single camera could be used with a shifting shutter to take one, then the other - but the first is more likely as it is better to take both images at the same time. Either way, you have two distinct images.

So you are still incorrect.

Just because it's "one camera" doesn't mean it's not two distinct viewpoints.

https://www.google.com/search?q=stereoscopic+camera&es_sm=122&tbm=isch&tbo=u&source=univ&sa=X&ei=z3XzVK-OFsWmyAT3qYKQBw&ved=0CFMQsAQ&biw=950&bih=931


You might note that literally every single one of them has two lenses, even if it uses one piece of film per stereoscopic image.


(Image: Kodak_stereo_camera.jpg)
 
Bruh,

What you're missing is the ability of standalone hardware to take a single rendered image, offset it, and display both copies on a single screen, without the need for additional GPU processing to render two distinct images. Aspherical lenses also negate the need to "warp" a rendered image so it conforms to standard screen ratios.
 
It isn't possible to "offset" an image and get a stereoscopic display, because of parallax. What you are saying would not result in a 3D image, only a blurry one.


(Image: parallax diagram, 090330-ParallaxView.jpg)

EDIT for better image... how do I get rid of the other attachment lol?
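The parallax argument can be made concrete with a toy pinhole projection (the IPD and focal-length values below are illustrative numbers, not any headset's specs): how far a point shifts between the two eyes depends on its depth, which a uniform 2D offset of one image can never reproduce.

```python
# Sketch of why a flat 2D offset can't fake parallax: under perspective
# projection, the screen-space shift of a point between the eyes varies
# with depth. IPD and focal length here are illustrative assumptions.

IPD = 0.064   # metres between the eyes
F   = 1.0     # focal length of the pinhole projection

def project_x(point_x, point_z, eye_x):
    """Horizontal screen position of a point as seen from one eye."""
    return F * (point_x - eye_x) / point_z

def disparity(point_x, point_z):
    """Screen-space shift of the point between left and right eyes."""
    left  = project_x(point_x, point_z, -IPD / 2)
    right = project_x(point_x, point_z, +IPD / 2)
    return left - right

near = disparity(0.0, 0.5)    # object half a metre away
far  = disparity(0.0, 50.0)   # object fifty metres away

# Near objects shift about 100x more than distant ones here; that
# depth-dependent difference is what a uniform offset cannot produce.
print(near, far)
```

A single image shifted sideways moves every pixel by the same amount regardless of depth, so both eyes still see identical geometry and the brain gets no depth cue.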
 

Wrong again; it's been done for over 150 years, creating "3D" images. Here's another quick example of different technology tackling the GPU overhead needed to produce VR images, by using different aspect ratios and aspherical lenses, thereby greatly reducing the GPU power needed: https://www.kickstarter.com/projects/805968217/antvr-kit-all-in-one-universal-virtual-reality-kit

Offset doesn't mean the images are layered on top of each other; it means the images, each viewed by a single eye, are presented at offset positions to compensate for the distance between your eyes.

Thanks for hijacking the thread, attacking posters (me) though. Have a great day :D

P.S. I'm not sure what "Bruh" means.
 
You have a fundamental misunderstanding of what you are saying.

Taking a single 2D image, and offsetting it, would not result in a different viewpoint for the other eye. You would be seeing the exact same angles.


You have no understanding of parallax at all, it would appear. 3D cannot happen without it.


I will make you a demonstration of what you are saying, please give me some time.
 