3D Vision Blog

A normal user's look into the world of 3D Stereo Technologies


Nvidia Introducing VR Direct Technology with New GPU Launch

September 19th, 2014 · 4 Comments · 3D / AR / VR / HMD, General 3D News

[Image: Nvidia GeForce GTX 980]

Today Nvidia has launched its new high-end Maxwell GPUs, the GTX 980 and GTX 970. They offer a nice performance boost over the previous generation at reduced power usage, making them really attractive upgrades, and they came with an interesting VR-related announcement as well. Apparently Nvidia is already working together with companies developing Virtual Reality products, such as Oculus with the Rift, in order to provide users with a better experience. While we may need some more time before VR becomes mainstream and reaches a really great level of experience, things already seem to be moving at a good pace. It is also interesting to note that the Oculus Rift and other VR headsets may actually turn out to be the saviors of the 3D Vision technology as well, but we'll have to see about that.

[Image: VR Direct slide]

With VR Direct Nvidia is trying to address multiple aspects of the VR experience, such as lowering latency, improving image quality and providing more content that will work well on your Oculus Rift or other VR headset without having to be specially made for it. With a VR headset latency matters much more than on a traditional display, as any perceived delay can throw off the VR experience and potentially cause motion sickness.

The standard VR pipeline, from the moment you move your head to the moment you actually see the response on the VR display, takes about 50 milliseconds. Nvidia's goal is to reduce this latency as much as possible so gamers feel even more immersed. A large portion of it is the time the GPU spends rendering the scene plus OS overhead (about 32 ms), and as a result of this effort they have managed to cut 10 ms of latency out of the standard VR pipeline. Using the new MFAA anti-aliasing method instead of MSAA for better image quality, they have managed to reduce GPU render time by an additional 4 ms, and they go even further with another technique they are working on called asynchronous warp. Rather than requiring the GPU to re-render each frame from scratch, with asynchronous warp the GPU takes the last scene rendered and updates it based on the latest head position info from the VR sensor; the head tracking input is literally sampled moments before you see the result. By warping the rendered image late in the pipeline to more closely match the current head position, discontinuities between head movement and action on screen are minimized, reducing latency even further. Altogether they have managed to cut latency from 50 ms down to 25 ms, which should dramatically improve the VR experience for the user.
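
To make the idea behind asynchronous warp a bit more concrete, here is a very rough C++ sketch of the concept as described above; all of the types, functions and timings in it are made-up placeholders, not Nvidia's actual code:

```cpp
// Conceptual sketch of asynchronous warp (late reprojection), not Nvidia's
// actual implementation; all types and values below are illustrative stubs.
#include <cstdio>

struct Pose  { float yaw = 0, pitch = 0, roll = 0; };  // simplified head orientation
struct Frame { Pose renderedWith; };                    // stand-in for a color buffer

Pose sampleHeadTracker()                  // pretend sensor read, normally fresh HMD data
{
    static float t = 0; t += 1.0f;
    return { t, 0, 0 };
}

Frame renderScene(const Pose& predicted)  // full scene render, the slow part (tens of ms)
{
    return { predicted };
}

Frame warpImage(Frame src, const Pose& latest)  // cheap rotational reprojection (<1 ms)
{
    src.renderedWith = latest;            // "shift" the image toward the newest orientation
    return src;
}

int main()
{
    // 1. Render with the head pose available at the start of the frame.
    Pose  renderPose = sampleHeadTracker();
    Frame rendered   = renderScene(renderPose);

    // 2. Moments before scanout, sample the tracker again and warp the already
    //    rendered image to the newest orientation, hiding most of the render
    //    latency from the user.
    Pose  latest = sampleHeadTracker();
    Frame warped = warpImage(rendered, latest);

    std::printf("rendered at yaw %.1f, presented warped to yaw %.1f\n",
                renderPose.yaw, warped.renderedWith.yaw);
}
```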

Besides latency, another major obstacle that must be overcome to provide an immersive VR experience is performance; not only is a high frame rate needed, it is also important that the frames are delivered to the user's eyes in a smooth fashion. While one GPU can be used to drive an Oculus Rift, enthusiasts will want two GPUs to ensure the best performance with maximum game settings enabled. That would be especially true if we move to a screen resolution higher than the Full HD currently used on the Oculus Rift DK2, or to a separate 1080p screen for each eye. Traditionally Nvidia SLI relies on alternate frame rendering (AFR), where each GPU renders alternating frames that are presented to the user. For VR scenarios, however, Nvidia is implementing a new VR SLI profile where each GPU renders for one display: the display for the left eye will be handled by one GPU, while the second GPU will be responsible for the display for the right eye. This solution should provide lower latency and ultimately better performance for the user.
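
Here is a small made-up C++ sketch of the difference in approach, with one GPU per eye working on the same frame instead of the GPUs alternating whole frames; the Gpu type and the functions are illustrative stand-ins, not the actual VR SLI API:

```cpp
// Rough illustration of "one GPU per eye" versus alternate-frame rendering (AFR);
// Gpu, renderEye() and composite() are hypothetical stand-ins, not a real API.
#include <cstdio>
#include <future>

struct Gpu      { int id; };
struct EyeImage { int renderedOnGpu; };
enum class Eye  { Left, Right };

EyeImage renderEye(const Gpu& gpu, Eye)               // render one eye's view on one GPU
{
    return { gpu.id };
}

void composite(const EyeImage& l, const EyeImage& r)  // send both views to the headset
{
    std::printf("left eye from GPU %d, right eye from GPU %d\n",
                l.renderedOnGpu, r.renderedOnGpu);
}

int main()
{
    Gpu gpu0{0}, gpu1{1};

    // Each GPU works on its own eye of the *same* frame, so both views arrive
    // together, instead of alternating whole frames between GPUs as AFR does.
    auto left  = std::async(std::launch::async, renderEye, gpu0, Eye::Left);
    auto right = std::async(std::launch::async, renderEye, gpu1, Eye::Right);

    composite(left.get(), right.get());
}
```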

Display resolution is another critical factor for VR. With the displays in a VR headset resting extremely close to the user's eyes, higher resolution can remarkably improve the VR experience. Dynamic Super Resolution (DSR) can be used to improve image quality on today's existing VR displays by rendering the game at a higher resolution and then scaling it down to the native VR display resolution, preserving more detail and producing a smoother image.
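
For the curious, here is a tiny C++ sketch of the supersample-then-downscale idea; the fixed 2x factor and the plain box filter are my own illustrative assumptions, not DSR's actual filtering:

```cpp
// Minimal sketch of the render-high-then-downscale idea behind DSR; the fixed
// 2x factor and simple box filter here are illustrative assumptions only.
#include <cstdint>
#include <vector>

struct Image {
    int width, height;
    std::vector<uint8_t> rgb;    // 3 bytes per pixel, row-major
};

// Average each 2x2 block of the supersampled image into one output pixel,
// keeping detail that rendering directly at native resolution would lose.
Image downscale2x(const Image& hi)
{
    Image lo;
    lo.width  = hi.width / 2;
    lo.height = hi.height / 2;
    lo.rgb.resize(static_cast<size_t>(lo.width) * lo.height * 3);

    for (int y = 0; y < lo.height; ++y)
        for (int x = 0; x < lo.width; ++x)
            for (int c = 0; c < 3; ++c) {
                int sum = 0;
                for (int dy = 0; dy < 2; ++dy)       // sum the 2x2 source block
                    for (int dx = 0; dx < 2; ++dx)
                        sum += hi.rgb[((2 * y + dy) * hi.width + (2 * x + dx)) * 3 + c];
                lo.rgb[(y * lo.width + x) * 3 + c] = static_cast<uint8_t>(sum / 4);
            }
    return lo;
}

int main()
{
    // E.g. a 1920x1080 per-eye panel: render at 3840x2160 and scale back down.
    Image supersampled{3840, 2160, std::vector<uint8_t>(3840u * 2160u * 3, 128)};
    Image native = downscale2x(supersampled);   // 1920x1080 result sent to the headset
    return native.width == 1920 ? 0 : 1;
}
```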

Another challenge VR must overcome to be more widely adopted is the lack of content. Outside of a handful of tech demos, there aren't that many applications that support VR headsets. To solve this issue Nvidia is leveraging its extensive experience with the 3D Vision technology to bring VR support to existing games that are already compatible with 3D Vision and adapt them to work with VR. The plan is to use GeForce Experience to optimize game settings and handle the configuration automatically, and aside from converting the games to the output format required by the VR headset and providing stereoscopic 3D rendering, they will also map mouse and keyboard commands to VR inputs like head movement, similar to what was done with the Gamepad Mapper on the Nvidia Shield Portable.
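
As a rough illustration of what such input mapping could look like (purely my own sketch, not how Nvidia actually does it), converting head rotation into the mouse deltas a regular game already understands might look something like this in C++; the scale factor and the injection function are hypothetical:

```cpp
// Hedged sketch of remapping head tracking onto mouse-look input for an
// unmodified game; sendRelativeMouseMove() and the 20 px/deg scale are
// hypothetical placeholders, not Nvidia's implementation.
#include <cstdio>

struct HeadPose { float yawDeg, pitchDeg; };

void sendRelativeMouseMove(int dx, int dy)     // hypothetical input injection into the game
{
    std::printf("mouse move: dx=%d dy=%d\n", dx, dy);
}

// Convert the change in head orientation since the last frame into a mouse delta,
// so a game that only understands mouse look still follows the headset.
void headToMouse(const HeadPose& prev, const HeadPose& now, float pixelsPerDegree = 20.0f)
{
    int dx = static_cast<int>((now.yawDeg   - prev.yawDeg)   * pixelsPerDegree);
    int dy = static_cast<int>((now.pitchDeg - prev.pitchDeg) * pixelsPerDegree);
    sendRelativeMouseMove(dx, -dy);            // most games treat looking up as negative y
}

int main()
{
    HeadPose last{0.0f, 0.0f};
    HeadPose current{2.5f, -1.0f};             // user turned right and looked slightly down
    headToMouse(last, current);
}
```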

All of the above sounds very interesting and promising, but what is still missing is information on when all of these VR Direct features will actually be available for users to try out, for example by developers who already have their hands on the Oculus Rift DK2.



4 responses so far ↓

  • 1 eqzitara // Sep 22, 2014 at 05:59

    Well supposedly it will be “soon.” I can’t say if that’s true though…
    “We’ll see VR Direct launched with the GeForce Experience basic build in the very near future. Stay tuned to our NVIDIA tag portal for more – soon!”
    http://www.slashgear.com/nvidia-vr-direct-oculus-optimized-18346717/

    ——————
    Anywho… if anyone hasn’t seen the petition about getting mod support for VR Direct, go here:
    https://forums.geforce.com/default/topic/776585/3d-vision/petition-please-allow-mod-support-for-vr-direct-3dmigoto-helixmod-/

  • 2 eqzitara // Sep 22, 2014 at 06:03

    Wait a min….VR DSR.

    4K@75HZ downsampled…. That’s not going to happen for obvious reasons.
    Hopefully it’s semi-realistic and allows 1440p.

  • 3 Dugom // Sep 24, 2014 at 15:40

    The DK1 uses the Samsung Note 2 smartphone screen (720 x 1280, 5.5″).
    The DK2 uses the Samsung Note 3 smartphone screen (1080 x 1920, 5.7″).
    The DK3 will use the Samsung Note 4 smartphone screen (1440 x 2560, 5.7″), like the Samsung Gear VR (Oculus made!):
    http://www.samsung.com/global/microsite/gearvr/gearvr_features.html

  • 4 CarlB // Sep 27, 2014 at 15:25

    3D Vision already uses parallel rendering for each eye instead of AFR, and that’s why 3D Vision + SLI almost always scales perfectly, even in games with little to no SLI support (or just bad SLI performance), correct?

    That is what I’ve been noticing with my two cards in 3D.
