The CAP Renderer plugin produces audio images that remain stable in all directions, using the Compensated Amplitude Panning (CAP) algorithm [1][2]. It uses 6-degree-of-freedom head tracking and operates with two or more loudspeakers, removing the need for a large surrounding loudspeaker array to generate an immersive experience. The computational cost of CAP is low, below that of VBAP. CAP covers frequencies up to roughly 1000 Hz; higher frequencies are handled by an adaptive VBIP panner built into the plugin. The signal routing for a single source is shown below (LX and HX are crossover filters):
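The LX/HX split described above can be sketched as a simple band split: the low band feeds the CAP panner and the high band feeds the VBIP panner. This is a minimal illustration assuming a 4th-order Butterworth crossover at 1 kHz; the plugin's actual filter design is not specified here and may differ.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def crossover(x, fs, fc=1000.0, order=4):
    """Split a signal at fc (Hz): the low band would feed the CAP
    panner, the high band the VBIP panner (LX/HX in the diagram).
    Illustrative only; the plugin's real crossover may differ."""
    sos_lo = butter(order, fc, btype="lowpass", fs=fs, output="sos")
    sos_hi = butter(order, fc, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos_lo, x), sosfilt(sos_hi, x)

# One second of a 200 Hz + 5 kHz test signal at 48 kHz.
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 5000 * t)
lo, hi = crossover(x, fs)
```

With this split, the 200 Hz component ends up almost entirely in `lo` and the 5 kHz component in `hi`.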

The plugin is used with instances of the VISR Object Panner plugin to generate input object streams.
CAP plugin window open in the REAPER examples session.

Ideally the tracking system should be non-invasive, as well as fast and accurate. Since these criteria are currently hard to meet together, we use an HTC Vive tracker system for development; Python scripts are provided to integrate with it. A REAPER session with several examples is also provided.


The CAP renderer can:
  • Produce 3D objects using only two loudspeakers that remain stable under head movement and rotation. 
  • Reproduce multichannel formats such as 5.1 and 7.1. 
  • Produce 6-degree-of-freedom (6DoF) augmented audio reality scenes, in which the listener can move around accurately positioned objects, for example walking completely around an image. 
  • Create an image in front of you with two loudspeakers behind you. 
  • Create an image directly overhead using two loudspeakers in front. 
  • Use two loudspeakers with wide separation, up to 180 degrees. 
  • Integrate with modified high-frequency VBIP panning, which provides full-bandwidth 3D at the central position between two loudspeakers. 
  • Run on Windows and macOS. 
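The stability under head rotation listed above rests on recomputing panning gains from the tracked head orientation. The toy sketch below illustrates the basic idea for two loudspeakers; it is not the published CAP algorithm [1][2], only a simplified low-frequency model in which the gains are chosen so that the interaural (left-right) component of the velocity vector matches that of the true source direction for the listener's current head orientation. All angle conventions and the constant-power normalisation are assumptions for this illustration.

```python
import numpy as np

def cap_gains(src_az, head_yaw, spk_az=(-30.0, 30.0)):
    """Toy head-rotation-compensated panner (illustration only,
    not the actual CAP algorithm).  Azimuths in degrees, world
    frame; positive angles counterclockwise."""
    unit = lambda az: np.array([np.cos(np.radians(az)),
                                np.sin(np.radians(az))])
    # Interaural axis of the rotated head.
    yaw = np.radians(head_yaw)
    n = np.array([-np.sin(yaw), np.cos(yaw)])
    uL, uR, uS = unit(spk_az[0]), unit(spk_az[1]), unit(src_az)
    # Solve [g*uL + (1-g)*uR] . n = uS . n for the left gain g.
    g = (uS - uR) @ n / ((uL - uR) @ n)
    gains = np.clip(np.array([g, 1.0 - g]), 0.0, None)
    return gains / np.linalg.norm(gains)  # constant-power normalisation

print(cap_gains(0.0, 0.0))   # centred source, head straight: equal gains
print(cap_gains(0.0, 15.0))  # head turned: gains shift to hold the image
```

For a centred source with the head facing forward the two gains are equal; as the head turns, the gains become asymmetric so that the image stays put, which plain head-agnostic stereo panning cannot do.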
The CAP plugin in use in the REAPER example session, with stereo reproduction, and HTC Vive tracker for headtracking.

The VISR Production Suite package installation and source distribution include a detailed user guide for the CAP renderer in the folder resources/CAP/doc, covering setup instructions and a guide to the REAPER session. 


[1] Menzies, D., Simon Galvez, M. F., and Fazi, F. M. “A Low Frequency Panning Method with Compensation for Head Rotation”. In: IEEE Trans. Audio, Speech, Language Processing 26.2 (Feb. 2018). 

[2] Menzies, D. and Fazi, F. M. “Multichannel Compensated Amplitude Panning, An Adaptive Object-Based Reproduction Method”. In: Journal of the Audio Engineering Society (2019). 


S3A is funded by the Engineering and Physical Sciences Research Council (EPSRC).
Programme Grant Scheme – Grant Ref: EP/L000539/1
© Copyright 2020 S3A