During the S3A project, the need arose for a self-contained production environment for object-based audio. As a consequence, the VISR Production Suite was developed over the past two years, becoming the first open-source set of DAW plugins for producing and reproducing object-based audio. Moreover, the plugins can take advantage of head-tracking systems to compensate for head orientation in the audio scene, which makes them suitable for VR. They allow audio content producers and researchers to exploit the new object-based paradigm to produce and distribute audio, targeting different playback systems and formats from a single scalable production.
During the S3A project, there was also a need to make VISR and the VISR Production Suite available for multiple operating systems. As a consequence, a build pipeline was constructed to support both future developers of the framework and versioned releases of VISR and the VISR Production Suite. The framework and tools are therefore portable, reaching a larger user base without being restricted to one operating system. This work also included updating the packages and documentation available on the S3A website.
This talk reports on the production of three object-based audio drama scenes, commissioned as part of the S3A project. 3D reproduction and an object-based workflow were considered and implemented from the initial script commissioning through to the final mix of the scenes. The scenes are available as Broadcast Wave Format (BWF) files containing all objects as separate tracks, together with all metadata necessary to render the scenes, stored as an XML chunk in the header conforming to the Audio Definition Model specification (Recommendation ITU-R BS.2076). It is hoped that these scenes will find use in perceptual experiments and in the testing of 3D audio systems.
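As a sketch of what consuming these files involves: a BWF file is an ordinary RIFF/WAVE container, and the ADM metadata lives in an `axml` chunk alongside the audio data. The following minimal Python example walks the RIFF chunks and extracts that XML. It uses only the standard library; the stub payload at the bottom is an illustrative placeholder, not a real ADM document, and production code would use a dedicated ADM/BWF library rather than this hand-rolled parser.

```python
import io
import struct

def read_axml_chunk(f):
    """Walk the RIFF chunks of a BWF/WAVE file and return the XML stored
    in the 'axml' chunk (where ADM metadata is carried), or None if the
    file has no such chunk."""
    riff, _size, wave = struct.unpack('<4sI4s', f.read(12))
    if riff != b'RIFF' or wave != b'WAVE':
        raise ValueError('not a RIFF/WAVE file')
    while True:
        header = f.read(8)
        if len(header) < 8:
            return None  # reached end of file without finding 'axml'
        chunk_id, chunk_size = struct.unpack('<4sI', header)
        if chunk_id == b'axml':
            return f.read(chunk_size).decode('utf-8')
        # Skip this chunk, honouring RIFF's even-byte padding rule.
        f.seek(chunk_size + (chunk_size & 1), 1)

# Build a minimal in-memory BWF with a stub XML chunk to demonstrate parsing.
adm = b'<audioFormatExtended/>'
body = b'WAVE' + b'axml' + struct.pack('<I', len(adm)) + adm
wav = b'RIFF' + struct.pack('<I', len(body)) + body
print(read_axml_chunk(io.BytesIO(wav)))
```

In a real scene file the returned string would be the full ADM document describing the objects' positions and other rendering metadata over time.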
This talk presents the adaptation of one of the scenes from the S3A object-based audio drama dataset into a virtual reality (VR) experience. Tools and workflows were developed to enable production in a digital audio workstation with real-time dynamic binaural sound rendering and visual monitoring of the scene on a head-mounted display. These tools enabled export of the scene using the Audio Definition Model (ADM), which was then loaded into the Unity game engine to create the VR experience with accompanying computer graphics. The piece was first developed for Oculus Rift and premiered at the Tribeca Film Festival in New York. Subsequently, it was adapted for mobile VR devices using Ambisonics, and is now freely and publicly available on Daydream and Gear VR devices.
Typically, we have considered the idealized listening environment of a user's living room: private, peaceful, and predictable. Another popular listening environment is the car, with the added complications of road and engine noise, multiple listening positions, and non-standard loudspeaker locations in a challenging acoustic space of obstacles and curved glass, with the driver engaged in other important tasks, such as not getting into an accident. In an automated vehicle, however, all passengers would be free to enjoy a shared, immersive audio experience. In January 2019, our exploratory "hackweek" investigated how object-based methodologies could be used towards this aim.