An object-based sound system enables program audio to adapt to each individual viewer's requirements, such as listening preferences and viewing environment. Here, an 'object' is an individual sound source used to make a program, such as a commentary or background sound. Figure 1 illustrates typical examples of what an object-based sound system can offer.
NHK STRL is researching an object-based sound system for application to next-generation terrestrial broadcasting and to the future of media, Diverse Vision, which includes VR/AR services. Please watch the video below for an impression of what a future broadcasting service may provide.
For next-generation terrestrial broadcasting
NHK STRL is considering applying some of the features of the object-based sound system to next-generation terrestrial broadcasting in the near future. In light of this, we have developed an object-based sound live production system that supports internationally standardized audio-related metadata and a highly efficient audio encoder/decoder compliant with MPEG-H 3D Audio (3DA). Figure 2 shows the overall object-based sound system from production to home reproduction. The features of some of its subsystems are described below.
- S-ADM real-time transmitter
Transmits S-ADM (a serial representation of the Audio Definition Model*1) synchronized with audio signals*2 over a conventional AES3-based digital audio interface*3, such as MADI.
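S-ADM divides ADM metadata into a sequence of self-contained XML frames so that it can be carried alongside the audio it describes. As a rough illustration only, the sketch below assembles one such frame in Python; the element names follow the published ADM/S-ADM schemas, but the IDs, timings, and the "Commentary" object are invented for this example.

```python
# Sketch: build a minimal S-ADM-style metadata frame.
# Element/attribute names follow the ADM and serial-ADM schemas, but all
# IDs, timings, and the example object are illustrative assumptions.
import xml.etree.ElementTree as ET

def build_sadm_frame(frame_id: str, start: str, duration: str) -> str:
    frame = ET.Element("frame")
    header = ET.SubElement(frame, "frameHeader")
    # A 'full' frame carries a complete metadata set for its time span.
    ET.SubElement(header, "frameFormat", {
        "frameFormatID": frame_id,
        "type": "full",
        "start": start,
        "duration": duration,
    })
    afe = ET.SubElement(frame, "audioFormatExtended")
    # One audio object, e.g. a commentary track (hypothetical IDs).
    obj = ET.SubElement(afe, "audioObject", {
        "audioObjectID": "AO_1001",
        "audioObjectName": "Commentary",
    })
    ET.SubElement(obj, "gain").text = "1.0"
    return ET.tostring(frame, encoding="unicode")

# One frame covering 20 ms of the program, emitted once per frame period
# in sync with the audio stream.
xml_frame = build_sadm_frame("FF_00000001", "00:00:00.00000", "00:00:00.02000")
```

In a real chain, a frame like this would be serialized once per frame period and multiplexed with the audio over the AES3/MADI link, so downstream equipment always has current metadata for the samples it is receiving.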
- MPEG-H 3DA coding systems with S-ADM interface
Conform to the MPEG-H 3DA Low Complexity (LC) profile, level 4*4. The encoder can receive S-ADM and convert it into MPEG-H 3DA metadata.
Loudspeaker layouts from stereo to 22.2 multichannel sound are supported.
Personalization features are supported, including dialogue enhancement, multilingual services, audio description, user-requested playback of audio objects, fundamental 3DoF-based changes of viewing position, and so on.
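Because each object is delivered and decoded separately, personalization amounts to rescaling or swapping objects before the final mix at the receiver. The sketch below illustrates the idea for dialogue enhancement and language selection; the object names, naming convention, and gain values are assumptions for this example, not part of the actual system.

```python
# Sketch of receiver-side personalization in an object-based system:
# per-object gains (dialogue enhancement) and language selection applied
# before summing objects into the final mix. Names/values are illustrative.
import numpy as np

def personalize_mix(objects: dict, gains: dict, language: str) -> np.ndarray:
    """Mix audio objects with per-object gains, keeping one language track."""
    mix = None
    for name, signal in objects.items():
        # Drop commentary objects in languages the viewer did not select
        # (hypothetical "commentary_<lang>" naming convention).
        if name.startswith("commentary_") and name != f"commentary_{language}":
            continue
        g = gains.get(name, 1.0)
        mix = g * signal if mix is None else mix + g * signal
    return mix

# Toy program: background ambience plus two commentary languages.
fs = 48000
t = np.arange(fs) / fs
objects = {
    "background": 0.2 * np.sin(2 * np.pi * 220 * t),
    "commentary_en": 0.5 * np.sin(2 * np.pi * 440 * t),
    "commentary_ja": 0.5 * np.sin(2 * np.pi * 550 * t),
}
# Dialogue enhancement: boost the chosen commentary by 6 dB (x2 amplitude).
out = personalize_mix(objects, {"commentary_en": 2.0}, language="en")
```

The same mechanism supports the other features listed above: audio description is simply another selectable object, and muting or soloing an object is a gain of 0 or 1.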
NHK STRL is also developing a mixing console that can itself create S-ADM and transmit it together with the audio signals.
For VR/AR applications
VR/AR is interactive and personalized by nature: a viewer may move within a virtual space or manipulate virtual objects. An object-based sound system lets viewers hear sound from virtual objects naturally even as they move around in 3D space. To achieve this, we are developing an advanced renderer for 6DoF*5 reproduction of spatial audio.
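To make the 6DoF idea concrete, the toy renderer below updates one object's contribution as the listener translates and rotates: gain follows an inverse-distance law, and left/right placement follows the object's azimuth in the listener's frame. This is a minimal stereo sketch under simple assumptions, not NHK's actual renderer, which targets full spatial audio reproduction.

```python
# Toy 6DoF object renderer (stereo): inverse-distance gain plus
# constant-power panning by azimuth in the listener's coordinate frame.
# Positions are 2D (x, y) for simplicity; a real renderer works in 3D.
import numpy as np

def render_object(signal, obj_pos, listener_pos, listener_yaw):
    rel = np.asarray(obj_pos, float) - np.asarray(listener_pos, float)
    dist = max(np.linalg.norm(rel), 0.1)            # clamp to avoid blow-up
    gain = 1.0 / dist                               # inverse-distance law
    az = np.arctan2(rel[1], rel[0]) - listener_yaw  # azimuth seen by listener
    # Constant-power pan: az = +90 deg -> fully left, -90 deg -> fully right.
    pan = 0.5 * (1.0 + np.sin(az))
    left = gain * np.sqrt(pan) * signal
    right = gain * np.sqrt(1.0 - pan) * signal
    return np.stack([left, right])

# Listener at the origin facing +x; object one metre to the left.
sig = np.ones(4)
out = render_object(sig, (0.0, 1.0), (0.0, 0.0), 0.0)
```

Re-evaluating this per audio block with the headset's tracked position and yaw gives sound that stays anchored to the virtual object as the viewer moves, which is the behaviour described above.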