In mixed reality (MR) applications, digital audio objects are rendered over an acoustically transparent playback system so that they blend with the listener's physical surroundings. This requires a binaural simulation that perceptually matches the reverberation properties of the local environment, so that virtual sounds are indistinguishable from real sounds emitted around the listener. In this paper, we propose an acoustic scene programming model that allows the behaviours and trajectories of a set of sound sources in an MR audio experience to be pre-authored, while deferring the specification of the enclosing room's reverberation properties to rendering time.
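The separation described above, authoring source behaviour ahead of time while binding room acoustics only at render time, can be illustrated with a minimal sketch. All names here (`Scene`, `SoundSource`, `RoomAcoustics`, `bind_room`) are hypothetical and do not come from the paper; the sketch only shows the deferred-binding structure of the proposed programming model.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SoundSource:
    """A pre-authored source: a name plus a trajectory mapping time (s) to position."""
    name: str
    trajectory: Callable[[float], Vec3]

@dataclass
class RoomAcoustics:
    """Reverberation properties of the enclosing room, known only at render time."""
    rt60: float    # reverberation time in seconds (illustrative parameter)
    volume: float  # room volume in cubic metres (illustrative parameter)

@dataclass
class Scene:
    """An acoustic scene authored without knowledge of the final room."""
    sources: List[SoundSource] = field(default_factory=list)
    room: Optional[RoomAcoustics] = None  # deliberately unspecified at authoring time

    def add(self, source: SoundSource) -> None:
        self.sources.append(source)

    def bind_room(self, room: RoomAcoustics) -> None:
        # Deferred binding: the measured or estimated reverberation
        # properties of the listener's actual room arrive at render time.
        self.room = room

    def render_params(self, t: float):
        """Per-source parameters the renderer would consume at time t."""
        if self.room is None:
            raise RuntimeError("room acoustics not bound yet")
        return [(s.name, s.trajectory(t), self.room.rt60) for s in self.sources]
```

A usage sketch: the scene and its moving sources are defined offline, and only once the listener's room is known is `bind_room` called, after which rendering parameters can be queried for any time instant.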