Recording the BBC Philharmonic orchestra in 3D

I’ve always been excited by the opportunities for 3D audio reproduction of orchestral music. From the stalls, the sound is fully immersive: the orchestra itself is wide and deep, the choir and organ are high above the stage, and the carefully designed concert hall acoustics reinforce the conductor’s balanced blend of all the sections. So when S3A got the chance to join a BBC Radio 3 recording of the BBC Philharmonic orchestra at the Bridgewater Hall in Manchester, we leapt at it!

The main research question was this: how do you take a traditional recording of a symphony orchestra and produce it for 3D audio? Or at least, how do you minimally modify the traditional production workflow and come out with a 3D mix? Being audio researchers, we took along some not-exactly-traditional microphones too, just in case.

The stereo mix for the Radio 3 recording was built around the feeds from a main Decca tree, which was lowered from the ceiling and suspended centrally just above and behind the conductor. Four other omnidirectional microphones were also hung in this way: a widely-spaced pair towards either end of the stage, to capture a blend of the instruments to either side of the orchestra; and a further widely-spaced pair, further back and high into the hall, to capture the ambience. Additional microphones were placed around the stage, positioned to capture each section individually rather than individual instruments, except the piano, which was captured with its own spaced stereo pair.

In addition to the microphones used directly for the Radio 3 mix, Tom Parnell from BBC R&D added a compact Schoeps ORTF-3D array, a few metres behind the main Decca tree. The ORTF-3D allows the surrounding and height reverberation to be recorded, and controlled in a 3D mix by balancing among the different capsules. The S3A researchers added two Soundfield microphones, recording 1st order Ambisonics. The first was positioned high above the main Decca tree, giving us a broad perspective on the whole orchestra and the room ambience, and the second was directly in front of the stage, very close to the conductor, giving us a close perspective with a spatially wide view of the orchestra. Finally, we placed an Eigenmike centrally in the 3rd row of the audience, at around the conductor’s head height. The Eigenmike can capture Ambisonic signals up to 3rd order using 32 microphones mounted on a sphere. Combined with 360 degree video, this kind of signal could be used directly for a virtual reality production, although producers usually prefer to retain more control over the mix before writing out to any given surround format (be it Ambisonics, loudspeaker channels, or binaural). These microphones could all feasibly be added to the recording workflow with minimal additional overheads.
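
To give a flavour of what balancing an Ambisonic recording involves, here is a minimal sketch (Python with NumPy) of steering a first-order “virtual microphone” from the four B-format channels a Soundfield microphone produces. It assumes a FuMa-normalised B-format signal; the function and variable names are illustrative, not part of our actual workflow.

```python
import numpy as np

def virtual_mic(b_format, azimuth, elevation, p=0.5):
    """Point a virtual microphone at (azimuth, elevation), in radians.

    b_format: array of shape (4, num_samples) holding W, X, Y, Z.
    p: directivity, 1.0 = omni, 0.5 = cardioid, 0.0 = figure-of-eight.
    """
    w, x, y, z = b_format
    # FuMa convention records W at -3 dB (1/sqrt(2)), so restore it first.
    return (p * np.sqrt(2) * w
            + (1.0 - p) * (x * np.cos(azimuth) * np.cos(elevation)
                           + y * np.sin(azimuth) * np.cos(elevation)
                           + z * np.sin(elevation)))

# Example: a cardioid aimed at the stage and another aimed upwards,
# e.g. to balance direct orchestral sound against height reverberation.
# stage  = virtual_mic(b, azimuth=0.0, elevation=0.0)
# height = virtual_mic(b, azimuth=0.0, elevation=np.pi / 2)
```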

As we are also working on object-based reverberation techniques, we were keen to capture the room impulse responses between a loudspeaker placed on stage and the various microphones. Room impulse responses describe how the sound propagates from the loudspeaker to the microphones, and our work seeks to encode this into a set of parameters that an object-based renderer could interpret to maximise the listening experience for whatever audio reproduction system you have set up at home. In fact, even with microphones set up to capture the ambience, it is common practice to add artificial reverb to this kind of recording when it is broadcast in stereo. However, it is difficult to produce a one-size-fits-all reverb when the end reproduction system is unknown, and this complicates production for 3D audio. In binaural reproduction, for instance, there is ‘space’ to create fully immersive reverberation, but if the same reverb were applied to stereo then the mix would likely be too reverberant. We will process the impulse responses to derive a set of parameters that aim to convey the concert hall’s reverb to the listener, no matter how they eventually consume the content.
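
To make one such parameter concrete, here is a minimal sketch (Python/NumPy, with illustrative names) of estimating the reverberation time RT60 from a measured impulse response, using Schroeder’s backward integration and a T20 line fit. Our parameterisation goes further than a single number, but RT60 is a good first handle on a room.

```python
import numpy as np

def rt60_from_rir(h, fs):
    """Estimate RT60 from impulse response h sampled at fs Hz."""
    # Schroeder energy decay curve in dB, normalised to 0 dB at t = 0.
    edc = np.cumsum(h[::-1] ** 2)[::-1]
    edc_db = 10.0 * np.log10(edc / edc.max())

    # Fit a line to the decay between -5 dB and -25 dB (the T20 range)...
    start = np.argmax(edc_db <= -5.0)
    stop = np.argmax(edc_db <= -25.0)
    t = np.arange(start, stop) / fs
    slope, intercept = np.polyfit(t, edc_db[start:stop], 1)

    # ...then extrapolate that decay rate to the full 60 dB.
    return -60.0 / slope
```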

In the next few weeks we’ll be listening to the stereo mix, extracting reverb parameters from the room impulse responses, mixing the closer microphones into the microphone array recordings, and experimenting with building some fully object-based clips. After all that, hopefully we’ll be in a better position to answer the questions above!

Any questions? Suggestions? Have experience to share? Drop us a comment below!

The Turning Forest wins the TVB Europe Award for Best Achievement in Sound – Oct 2016

Last night, S3A and BBC R&D won the TVB Europe Award for Best Achievement in Sound for ‘The Turning Forest’, a VR sound experience in which the spatial audio radio drama produced in S3A was integrated into an immersive audio-visual experience by the BBC. The award was presented at the European TV industry awards, with S3A winning against entries from Britain’s Got Talent, Sky’s The Five and BBC TV programme coverage.

http://www.tvbeurope.com/tvbawards-2016-winners-announced/
http://vrtov.com/projects/turning-forest/

The Turning Forest

‘The Turning Forest’ premiered at the Tribeca Film Festival in May and has been touring international film festivals for the last six months to very positive reviews. The VR experience is also being released as a VR app via the Apple and Google stores.

This is a great early impact for S3A from research in the first two years of the programme.


Take me to church


When VisiSonics kindly lent us their spherical microphone array, we wanted to capture some really interesting acoustic spaces. So, we loaded up the car and headed up the road to Emmanuel church.

In fact, Emmanuel has two rooms that acousticians might refer to as churches. The main church is a large open space seating 400 people, with a balcony towards the rear. The original church building, a typical turn-of-the-century single-nave church seating 100 people, stands just next door. Putting a trolley to good use, we were able to record in both rooms in one afternoon!

The VisiSonics array has 64 microphones spaced evenly around a sphere, and also records HD video from 5 cameras. Not wanting to miss an opportunity, we also brought along our 48-channel dual circular open array of omnidirectional microphones, together with a Soundfield microphone. In total we recorded impulse responses for 116 microphone channels, measuring 9 loudspeaker positions in each room.

So, what will we do with all the data? Well, it can support our spatial audio research in a number of ways. We can analyse the impulse responses to learn about the room geometry, predict the positions of the walls, and derive parameters for parametric reverb techniques. We could attempt to separate the source sound from the reverb it creates, to learn about making and transmitting recordings in large spaces. We could even combine the microphone recordings and the video to predict the room response at positions we didn’t measure! All of these things will help us make recordings that can give a listener a sense of being in the recorded room: we’ll take you to church!
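
As a minimal sketch of the geometry idea (Python, using NumPy and SciPy; the threshold and names are illustrative): the direct sound and early reflections show up as prominent peaks in an impulse response, and each peak’s delay converts to a path length via the speed of sound.

```python
import numpy as np
from scipy.signal import find_peaks

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def peak_path_lengths(h, fs, threshold_db=-20.0):
    """Path lengths (metres) of prominent peaks in impulse response h."""
    env = np.abs(h)
    # Keep only peaks within threshold_db of the strongest arrival.
    height = env.max() * 10.0 ** (threshold_db / 20.0)
    peaks, _ = find_peaks(env, height=height)
    return peaks / fs * SPEED_OF_SOUND

# The shortest path is the loudspeaker-to-microphone distance; longer
# ones travel via a wall, so comparing them hints at wall positions.
```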

The data will eventually be freely accessible. So, what will you do with it? Where should we record next time we load up the car? How many channels is too many?! Jot your thoughts in the comments below!

Follow Philip on twitter: @drpcoleman

S3A at the BBC Sound Now and Next, 19 & 20 May 2015 London

This is an exciting week for S3A. We will be taking part in the BBC Sound Now and Next event being held in London (19 & 20 May at the Radio Theatre, BBC Broadcasting House).

S3A will be represented at the Technology Fair with demos (see list below), and the radio drama (James Woodcock) will be showcased in the BBC Demo Room.

  1. Towards perceptual metering for an object-based audio system [Dr Jon Francombe, University of Surrey, and Yan Tang, University of Salford]
  2. 3D head tracking for spatial audio and audio-visual speaker tracking [Dr Teo de Campos, University of Surrey, and Dr Marcos Simon Galvez, University of Southampton]
  3. Headphone simulation of 3D spatial audio systems in different listening environments [Dr Rick Hughes, University of Salford, and Chris Pike, BBC]
Here are copies of the posters that will be presented (including a video to accompany demo 1): https://drive.google.com/file/d/0B2Z62ocOkHfrT05PSzJhcGsxN0k/view?usp=sharing

S3A First Year activities

As part of its first-year activities, the research team was able to showcase the first iteration of a running baseline system. This is the foundation for further research and development into an immersive audio production and reproduction system, and it includes all the required components to create 3D audio content and reproduce it in a living room environment. This integral part of the project has been coordinated by Phil Coleman at Surrey.

S3A has also commissioned three short radio drama scenes. The objectives of this exercise are 1) to generate high-quality material that can be used to demonstrate the benefits of 3D audio, and 2) to look at how production of fully 3D content differs from the production of standard stereo content in terms of the workflow and required production tools. Last month, the team led by James Woodcock at Salford produced the scenes for a short radio drama to evaluate the baseline system and create high-quality test material which we can use as a reference point going forward. The recording of the radio drama scenes involved a couple of days in the BBC radio drama studio in MediaCity UK as well as on-location recordings in the woods. Here is a little video, produced by Phil Coleman from the University of Surrey, which shows the setup and an excerpt of one of the recording sessions in the studio:

S3A launch – April 2014

The S3A: Future Spatial Audio for Immersive Listener Experience at Home project launched on 1 April 2014.

This major new five-year research project brings together internationally leading experts in immersive audio and visual processing with UK industry. The project is led by the University of Surrey in collaboration with the Universities of Southampton and Salford and BBC R&D.

S3A’s goal is to enable listeners to experience the sense of “being there” at a live event, such as a concert or football match, from the comfort of their living room, through object-based delivery of immersive 3D sound to the home.

The partnership aims to unlock the creative potential of 3D sound to provide immersive experiences to the general public at home or on the move. S3A will pioneer a radical new listener-centred approach to 3D sound production that can dynamically adapt to the listener’s environment and location to create a sense of immersion. Current 3D sound systems rely upon fixed loudspeaker arrangements and acoustically treated rooms that are not practical for home use. S3A will change the way audio is produced and delivered, to enable practical high-quality 3D sound reproduction based on listener perception.
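
To make the object-based idea concrete: rather than fixed loudspeaker feeds, each sound is delivered with position metadata, and a renderer at the listener’s end turns it into signals for whatever loudspeakers are actually there. Below is a minimal sketch (Python/NumPy, illustrative names, stereo only) using classic tangent-law amplitude panning; a full renderer would of course handle arbitrary 3D layouts.

```python
import numpy as np

def pan_object(audio, object_azimuth_deg, speaker_base_deg=30.0):
    """Render one audio object to a stereo pair at +/- speaker_base_deg."""
    theta = np.radians(object_azimuth_deg)
    theta0 = np.radians(speaker_base_deg)
    # Tangent law: tan(theta)/tan(theta0) = (gL - gR)/(gL + gR).
    ratio = np.tan(np.clip(theta, -theta0, theta0)) / np.tan(theta0)
    g_left, g_right = (1.0 + ratio) / 2.0, (1.0 - ratio) / 2.0
    # Normalise the gains for constant power across pan positions.
    norm = np.sqrt(g_left ** 2 + g_right ** 2)
    return np.stack([audio * g_left / norm, audio * g_right / norm])

# Example: the same object metadata (azimuth 10 degrees left) rendered
# for a listener whose loudspeakers happen to span +/- 45 degrees.
# left, right = pan_object(mono_signal, 10.0, speaker_base_deg=45.0)
```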

Funding
Engineering and Physical Sciences Research Council (EPSRC)
Programme Grant Scheme – Grant Ref: EP/L000539/1
Duration: 2013-18; Grant Award: £5.4M; Industry Support: £0.6M