The Metadapter is an extensible software framework for metadata adaptation in the context of object-based audio rendering. It forms the technical basis of many techniques developed in the S3A project, including:

  1. Intelligent downmixing of multichannel content
  2. Manipulation of perceptual attributes, for example envelopment
  3. Media Device Orchestration (MDO)
  4. Perceptual room correction

It is described in the paper [1]. 

The central structure of the metadapter is a sequence (or a more complex graph) of metadata adaptation steps, called processors. By arranging processors, users can create complex adaptation tasks from a set of existing components. The metadapter is also extensible, that is, users can implement their own processors. Because it is implemented in the Python programming language, such processors can be written quickly and with little programming experience.
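As an illustration of this processor concept, the following is a minimal sketch in Python. The class names, the `process` method, and the dictionary-based object representation are hypothetical and chosen for this example only; a real metadapter processor would derive from the framework's own processor base class.

```python
class Processor:
    """Illustrative base class: transforms a vector of audio object metadata.

    Hypothetical interface, not the actual metadapter API.
    """

    def process(self, object_vector):
        raise NotImplementedError


class GainProcessor(Processor):
    """Example adaptation step: scales the level of every object."""

    def __init__(self, gain):
        self.gain = gain

    def process(self, object_vector):
        for obj in object_vector:
            # Objects are represented here as plain dicts for illustration.
            obj["level"] = obj.get("level", 1.0) * self.gain
        return object_vector


class ProcessorChain(Processor):
    """A sequence of processors applied one after another."""

    def __init__(self, processors):
        self.processors = processors

    def process(self, object_vector):
        for p in self.processors:
            object_vector = p.process(object_vector)
        return object_vector
```

A chain such as `ProcessorChain([GainProcessor(0.5)])` can then be applied to an object vector like `[{"id": 0, "level": 0.8}]`, mirroring how a sequence of adaptation steps is composed from existing components.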

In addition, metadata processors can interact with external applications and devices, for example using the OSC (Open Sound Control) protocol. This enables user interactivity, personalisation, or the integration of external sensor data. 
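To show what an OSC exchange involves at the wire level, the sketch below encodes a minimal OSC 1.0 message with standard-library Python only. The address `/obj/1/level` is a made-up example, not an address defined by the metadapter; in practice one would typically use an OSC library rather than encoding messages by hand.

```python
import struct


def _osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, as OSC 1.0 requires."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data


def encode_osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message carrying float32 arguments.

    Layout per OSC 1.0: padded address string, padded type tag string
    (',' followed by one 'f' per float), then big-endian float32 values.
    """
    msg = _osc_pad(address.encode("ascii"))
    msg += _osc_pad(("," + "f" * len(args)).encode("ascii"))
    for a in args:
        msg += struct.pack(">f", float(a))
    return msg
```

The resulting bytes could be sent over UDP to a processor that listens for control data, e.g. a level value driven by an external sensor or a user interface.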

The metadapter is part of the VISR framework and is included in the VISR installation packages. It can be used in two principal ways: 

  1. As a standalone application that receives and sends object-based audio metadata over a network connection. This is most suitable if the metadata adaptation is not tightly integrated with one of the VISR object-based audio renderers, and for developing metadata adaptation processors.
  2. As a component integrated in the VISR framework. This is best suited to creating tightly integrated applications, for example object-based audio renderers that incorporate metadata adaptation.
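In the standalone mode, the adaptation step amounts to transforming a metadata message received from the network and sending the result back out. The sketch below illustrates that pattern on a JSON payload; the `{"objects": [...]}` message shape and the `level` field are assumptions made for this example, as the actual wire format is defined by the VISR renderers.

```python
import json


def adapt_metadata(json_message: str, gain: float) -> str:
    """Parse an object-metadata message, scale each object's level,
    and return the adapted message for sending back over the network.

    The message layout here is illustrative, not the actual VISR format.
    """
    scene = json.loads(json_message)
    for obj in scene.get("objects", []):
        obj["level"] = obj.get("level", 1.0) * gain
    return json.dumps(scene)
```

In a real deployment, such a transform would sit between a UDP receive socket and a UDP send socket, with the renderer consuming the adapted metadata.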


[1] Franck, Andreas; Francombe, Jon; Woodcock, James; Hughes, Richard; Coleman, Philip; Menzies, Dylan; Cox, Trevor J.; Jackson, Philip J. B.; Fazi, Filippo Maria, "A System Architecture for Semantically Informed Rendering of Object-Based Audio", Journal of the Audio Engineering Society, vol. 67, pp. 498-509, July 2019. DOI: 10.17743/jaes.2019.0025


S3A is funded by the Engineering and Physical Sciences Research Council (EPSRC).
Programme Grant Scheme – Grant Ref: EP/L000539/1
© Copyright 2020 S3A