The CAP Renderer plugin can produce stable audio images in all directions using the Compensated Amplitude Panning (CAP) algorithm. It uses 6-degree-of-freedom head tracking and can operate with as few as two loudspeakers, removing the need for an array of surrounding loudspeakers to generate an immersive experience. The computational cost of CAP is low, lower than that of VBAP. CAP covers frequencies up to ~1000 Hz; higher frequencies are covered by an adaptive VBIP panner built into the plugin. The signal routing in the plugin is shown below for a single source (LX and HX are crossover filters):
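The per-source band split can be sketched as follows. This is a minimal illustrative example, not the plugin's actual filter design: it assumes LX and HX form a 4th-order Linkwitz-Riley crossover at the ~1000 Hz boundary mentioned above, with the low band feeding the CAP panner and the high band feeding the VBIP panner.

```python
# Hypothetical sketch of the LX/HX crossover feeding the two panners.
# The filter type, order, and exact crossover frequency are assumptions;
# the real plugin's design may differ.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000   # sample rate in Hz
FC = 1000.0  # crossover frequency: CAP below, VBIP above

# A 4th-order Linkwitz-Riley filter is two cascaded 2nd-order Butterworth
# filters, giving flat summed magnitude at the crossover point.
lx = np.concatenate([butter(2, FC, 'low', fs=FS, output='sos')] * 2)
hx = np.concatenate([butter(2, FC, 'high', fs=FS, output='sos')] * 2)

def split_source(x):
    """Split a mono object signal into the CAP (low) and VBIP (high) feeds."""
    return sosfilt(lx, x), sosfilt(hx, x)

# Example: a 200 Hz tone lands almost entirely in the CAP band.
t = np.arange(FS) / FS
low_band, high_band = split_source(np.sin(2 * np.pi * 200 * t))
```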
Ideally the tracking system should be non-invasive as well as fast and accurate. Since these criteria are currently difficult to meet simultaneously, an HTC Vive tracker system is used for development. Python scripts are provided to integrate with this. A REAPER session with several examples is also provided.
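Integration scripts of this kind typically forward the listener pose to the renderer over the network. The sketch below is purely illustrative: the UDP transport, port number, and JSON message layout are all assumptions made for demonstration, not the renderer's actual protocol, which is documented in the bundled user guide.

```python
# Illustrative sketch of forwarding 6-DoF head-tracking data to a renderer
# over UDP as JSON. Port and message layout are hypothetical.
import json
import socket

def send_pose(sock, addr, position, orientation):
    """Send one listener pose: position in metres,
    orientation as (yaw, pitch, roll) in degrees."""
    msg = {"x": position[0], "y": position[1], "z": position[2],
           "yaw": orientation[0], "pitch": orientation[1], "roll": orientation[2]}
    sock.sendto(json.dumps(msg).encode("utf-8"), addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Listener 1.2 m above the origin, head turned 30 degrees to the left.
send_pose(sock, ("127.0.0.1", 8888), (0.0, 0.0, 1.2), (30.0, 0.0, 0.0))
```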
The VISR production suite package installation and source distribution include a detailed user guide for the CAP renderer in the folder resources/CAP/doc, including setup instructions and a guide to the REAPER session.
Menzies, D., Simon Galvez, M. F., and Fazi, F. M. “A Low Frequency Panning Method with Compensation for Head Rotation”. In: IEEE Trans. Audio, Speech, Language Processing 26.2 (Feb. 2018).
Menzies, D. and Fazi, F. M. “Multichannel Compensated Amplitude Panning, An Adaptive Object-Based Reproduction Method”. In: Journal of the Audio Engineering Society (2019).