
Virtual Reality: Nvidia’s VRWorks Unveiled

During Nvidia’s special Editors Day event in Austin, Texas, the company announced an addition to its VRWorks suite of APIs: what it says is the first “Physically Based Acoustic Simulator Engine,” accelerated by Nvidia GPUs.

Nvidia and AMD’s contest for the high ground in virtual reality has been running for a while now, but until now both companies have devoted most of their attention to VR visuals and latency.

At the event, Nvidia CEO Jen-Hsun Huang announced that VRWorks (formerly GameWorks VR), the company’s set of virtual reality focused rendering APIs, is to gain a new, physically based audio engine capable of performing the calculations required to model how sound interacts with virtual spaces entirely on the GPU.

Scroll down for the video


Physically based spatial (or 3D) audio refers to the process by which sounds produced within a virtual scene are affected by the path they take before reaching the player’s virtual ears, the result being the echoes and reflections caused by sound bouncing off the scene’s physical surfaces.
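
To make that idea concrete, here is a minimal sketch, not Nvidia’s implementation and with made-up distances, of how a single propagation path becomes a delayed, attenuated “tap” in an impulse response, assuming a 343 m/s speed of sound and a simple 1/r falloff:

```python
import numpy as np

SAMPLE_RATE = 48_000            # audio samples per second
SPEED_OF_SOUND = 343.0          # metres per second at room temperature

def path_to_tap(path_length_m, absorption=0.0):
    """Turn one propagation path into an (offset, gain) impulse-response tap."""
    delay_s = path_length_m / SPEED_OF_SOUND              # time of flight
    gain = (1.0 - absorption) / max(path_length_m, 1.0)   # simple 1/r falloff
    return int(round(delay_s * SAMPLE_RATE)), gain

# Hypothetical paths: a 5 m direct path and a 12 m bounce off an absorbent wall.
direct = path_to_tap(5.0)
echo = path_to_tap(12.0, absorption=0.3)

impulse_response = np.zeros(SAMPLE_RATE)                  # one second of "room"
for offset, gain in (direct, echo):
    impulse_response[offset] += gain

# Convolving any dry signal with this response delays and attenuates it along
# each path, which is what the ear hears as an echo.
```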

In a recently released video demonstrating the new VR Audio engine, Nvidia says it is approaching audio modeling and rendering much like ray tracing. Ray tracing is a processor-intensive but incredibly precise technique for rendering graphics, calculating the path of individual rays of light from source to destination within a scene. Likewise, though presumably at a much lower computational cost, Nvidia says it traces the paths sound waves travel through a virtual scene, applying ‘physical’ attributes and dynamically rendering audio based on the resulting distortion.

Put simply, the process is about modeling how sound bounces off things so that it sounds real. Under the hood, VRWorks Audio uses Nvidia’s pre-existing ray-tracing engine OptiX to “simulate the movement, or propagation, of sound within an environment, changing the sound in real time based on the size, shape and material properties of your virtual world — just as you’d experience in real life.”
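
Nvidia hasn’t published the engine’s internals beyond the description above, but a toy version of “tracing sound like light” might look like the sketch below: rays are fired from the source, bounced around a simple 2D box room, and every ray segment that passes close to the listener contributes a delayed, attenuated tap to an impulse response. The room dimensions, material absorption, ray count and listener radius are all invented for illustration.

```python
import numpy as np

SAMPLE_RATE = 48_000
SPEED_OF_SOUND = 343.0          # metres per second

# Invented scene: a 2D "shoebox" room with one source and one listener.
ROOM = np.array([8.0, 6.0])     # room extents in metres (walls at 0 and ROOM)
SOURCE = np.array([2.0, 3.0])
LISTENER = np.array([6.0, 2.0])
LISTENER_RADIUS = 0.3           # rays passing this close are "heard"
WALL_ABSORPTION = 0.25          # fraction of energy lost at each bounce
MAX_BOUNCES = 8
NUM_RAYS = 2_000

def trace_ray(direction):
    """Follow one sound ray around the room, yielding (path_length, energy)
    for every segment that passes close enough to the listener."""
    pos = SOURCE.astype(float)
    d = direction / np.linalg.norm(direction)
    energy, travelled = 1.0, 0.0
    for _ in range(MAX_BOUNCES):
        # Find the nearest wall along d (axis-aligned box).
        t_hit, axis = np.inf, 0
        for a in range(2):
            if d[a] > 1e-9:
                t = (ROOM[a] - pos[a]) / d[a]
            elif d[a] < -1e-9:
                t = -pos[a] / d[a]
            else:
                continue
            if t < t_hit:
                t_hit, axis = t, a
        # Does this segment pass within earshot of the listener?
        s = np.clip(np.dot(LISTENER - pos, d), 0.0, t_hit)
        if np.linalg.norm(LISTENER - (pos + s * d)) < LISTENER_RADIUS:
            yield travelled + s, energy
        # Advance to the wall, lose some energy, and reflect.
        pos = pos + t_hit * d
        travelled += t_hit
        energy *= 1.0 - WALL_ABSORPTION
        d[axis] = -d[axis]

rng = np.random.default_rng(0)
impulse_response = np.zeros(SAMPLE_RATE)        # one second of "room"
for theta in rng.uniform(0.0, 2.0 * np.pi, NUM_RAYS):
    for length, energy in trace_ray(np.array([np.cos(theta), np.sin(theta)])):
        tap = int(length / SPEED_OF_SOUND * SAMPLE_RATE)
        if tap < impulse_response.size:
            impulse_response[tap] += energy / max(length, 1.0)

# Convolving a dry source signal with impulse_response gives the "wet" sound
# of this toy room; a real engine would work in 3D, track frequency-dependent
# materials, and do all of this per listener pose on the GPU.
```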

 

 


Spatial audio and the use of HRTFs (Head-Related Transfer Functions) in virtual reality is by now widespread in the SDK options for both Oculus and SteamVR development, not to mention a range of third-party options such as RealSpace Audio and 3Dception. One cannot say with any certainty how physically accurate the modeling in any of those options already is, although the results are certainly audible.
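
For context on the HRTF side, the listener-facing step is conceptually a convolution: a mono source is filtered through a pair of direction-dependent impulse responses, one per ear, to produce the localisation cues the brain expects. A bare-bones sketch, using stand-in impulse responses rather than measured HRTF data, might look like this:

```python
import numpy as np

def apply_hrtf(mono, hrir_left, hrir_right):
    """Render a mono signal to binaural stereo using a pair of head-related
    impulse responses (HRIRs) for the chosen source direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Stand-in HRIRs: a real pipeline would look these up from a measured HRTF
# set for the source's direction, not invent them like this.
hrir_left = np.array([0.0, 0.9, 0.2, 0.05])    # nearer ear: earlier, louder
hrir_right = np.array([0.0, 0.0, 0.6, 0.15])   # farther ear: later, quieter

mono_source = np.random.default_rng(1).standard_normal(48_000)  # 1 s of noise
binaural = apply_hrtf(mono_source, hrir_left, hrir_right)        # shape (2, N)
```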

Hence, while the promise of such precision from Nvidia is certainly tempting (and, judging by the video, pretty believable), conceivably it’s the GPU offloading that sells the idea as a possible winner. The results can be heard in the embedded video below.

One point to note, however, is that, as with most of the VRWorks suite of APIs and technologies, the benefits of the company’s GPU-accelerated VR Audio will be restricted to those with Nvidia GPUs. Developers will also have to target these APIs specifically in their code. The reasons for doing so, however, may well be good enough for developers to do just that, and no audio enthusiast will deny that the idea of such potential accuracy in VR soundscapes is pretty appealing. Let’s see how it stacks up once it arrives.

Meanwhile, if you’re a developer eager to get an early look at VRWorks Audio, visit Nvidia’s sign-up page.

See the video below

 
