We cordially invite you to participate in the 5th International Conference on Spatial Audio (ICSA), offering an insight into the world of spatial audio with a focus on Virtual, Augmented, and Mixed Realities.
As a multi- and interdisciplinary forum, the ICSA brings together developers, scientists, users, and content creators of and for spatial audio systems and services. The 5th ICSA is hosted by the Institute of Media Technology at Technische Universität Ilmenau and by the Association of German Sound Engineers (VDT), with the support of the Fraunhofer Institute for Digital Media Technology IDMT.
Focuses of ICSA 2019
- Application of audio systems and content presentation services
- Creation of content for playback via audio systems and services
- Development and scientific investigation of technical systems and services for audio recording, processing and reproduction
- Media impact of content and spatial audio systems and services
Contributions of Fraunhofer IDMT
Using audio objects in live and entertainment applications
September 27, 2019 | Start 14:30 | Duration 50 min. | Josua Hagedorn, Christoph Sladeczek
Multi-channel loudspeaker systems are used to generate spatial audio. Live and entertainment applications in particular require an intuitive interaction approach that allows spatial audio scenes to be generated in real time. The object-based audio approach is the most promising concept here and is about to revolutionize live and entertainment sound reproduction. An audio object is characterized by an input signal (e.g. a voice or an instrument) and corresponding metadata representing the properties of the object, such as position or gain. The loudspeaker signals are generated by an audio renderer, which creates the speaker feeds interactively. This workshop gives insights into the general concept of object-based audio and its background developments, and focuses on its usage in live and entertainment applications. The workshop will be conducted using Fraunhofer IDMT's SpatialSound Wave technology, which is used in entertainment facilities worldwide.
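The core idea of an object-based renderer can be sketched in a few lines. The following is a minimal illustration only, not the SpatialSound Wave algorithm: it renders one mono audio object (signal plus position and gain metadata) to two loudspeakers using a simple constant-power panning law; real renderers handle many speakers, distance, and room effects.

```python
import math

def render_object(samples, azimuth_deg, gain=1.0):
    """Render a mono audio object to two loudspeaker signals.

    The object is a signal plus metadata (azimuth in [-90, 90] degrees
    and a gain); the renderer derives the speaker feeds from that
    metadata. Uses the constant-power sine/cosine panning law. This is
    an illustrative sketch, not an actual production renderer.
    """
    # Map azimuth from [-90, 90] degrees to a pan angle in [0, pi/2].
    pan = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    left_gain = math.cos(pan) * gain
    right_gain = math.sin(pan) * gain
    left = [s * left_gain for s in samples]
    right = [s * right_gain for s in samples]
    return left, right

# An object panned hard left feeds only the left speaker;
# a centered object feeds both speakers equally.
left, right = render_object([1.0, 0.5], azimuth_deg=-90.0)
```

Because the metadata can change per frame, moving an object is just a matter of updating its azimuth and re-rendering, which is what makes the approach attractive for live interaction.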
Machine Learning Based Context-Sensitive Acoustic Transparent Headphone
September 27, 2019 | Start 15:30 | Duration 70 min. | Hanna Lukashevich, Jakob Abesser, Patrick Kramer, Mario Seideneck, Josua Hagedorn, Christoph Sladeczek
The context-sensitive acoustically transparent headphone detects acoustic signals of interest and then switches itself to acoustic transparency. In the future, safety-relevant signals such as sirens or approaching vehicles should no longer be missed while listening to loud music. This poster presents a demonstrator of a smart headphone that uses machine learning to detect acoustic events and enables acoustic transparency on demand.
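The control logic behind such a headphone can be summarized in a short sketch. The classifier itself is not shown (any sound-event-detection model could produce the probabilities), and all names and thresholds below are illustrative assumptions, not the demonstrator's actual parameters:

```python
def update_transparency(event_probs, critical_events=("siren", "vehicle"),
                        threshold=0.8):
    """Decide whether to switch the headphone to acoustic transparency.

    `event_probs` maps event labels to classifier probabilities for the
    current audio frame. Transparency is enabled whenever any
    safety-relevant event exceeds the threshold. Labels and threshold
    are illustrative placeholders.
    """
    return any(event_probs.get(e, 0.0) >= threshold for e in critical_events)

# A confident siren detection triggers transparency; music alone does not.
siren_frame = {"siren": 0.93, "music": 0.99}
music_frame = {"music": 0.99}
```

In a running system this decision would be re-evaluated per audio frame, with some hysteresis so that the headphone does not toggle rapidly around the threshold.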
New potentials for portable spatial audio with MEMS-based speakers
September 28, 2019 | Start 09:00 | Duration 20 min. | Daniel Beer, Tobias Brocks
There is high demand for portable spatial sound on the market. Manufacturers of headphones and hearing aids have to miniaturize electroacoustic transducers (micro speakers) while maintaining sound quality and energy efficiency (battery lifetime). This is especially important for 3D headphones with multiple loudspeakers. Different techniques are known for downsizing transducers. A very successful method comes from the semiconductor industry: the use of so-called MEMS technology (Micro-Electro-Mechanical Systems) has led to great success in microphones and accelerometers. This success has triggered high interest in the potential of MEMS technology for speaker manufacturing. Based on patents, initial approaches to MEMS loudspeakers will be presented, along with an outlook on the new potentials that MEMS-based speakers open up for portable spatial audio.
Deep Neural Network Approaches for Selective Hearing based on Spatial Data Simulation
September 28, 2019 | Start 09:20 | Duration 20 min. | Simon Hestermann, Hanna Lukashevich, Christoph Sladeczek
Selective Hearing (SH) refers to the listener's attention to specific sound sources of interest in their auditory scene. Achieving SH through computational means involves detection, classification, separation, localization and enhancement of sound sources. Deep neural networks (DNNs) have been shown to perform these tasks in a robust and time-efficient manner. A promising application of SH is intelligent noise-cancelling headphones, where sound sources of interest, such as warning signals, sirens or speech, are extracted from a given auditory scene and conveyed to the user, whilst the rest of the auditory scene remains inaudible. For this purpose, existing DSP-based noise cancellation approaches need to be combined with machine learning techniques. In this context, we evaluate various DNN architectures for sound source detection and separation. In addition, we propose a data simulation approach for generating different sound environments for a virtual pair of headphones. The Fraunhofer SpatialSound Wave technology is used for the training data simulation and a realistic evaluation of the trained models. For the evaluation, a three-dimensional acoustic scene is simulated via the object-based audio approach.
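The signal path described above (detect a source of interest, separate it, and mix it into the playback) can be sketched as follows. The `detector` and `separator` callables are stand-ins for trained DNNs, not actual Fraunhofer components, and active noise cancellation is assumed to suppress the rest of the outside scene:

```python
def selective_hearing_mix(outside, playback, detector, separator, alpha=1.0):
    """Sketch of a selective-hearing signal path.

    `detector` flags whether a sound of interest is present in the
    outside signal; `separator` extracts that source (in practice,
    DNN-based sound event detection and source separation models).
    When something of interest is detected, the separated target is
    mixed into the playback; otherwise only the playback is heard.
    """
    if detector(outside):
        target = separator(outside)
        return [p + alpha * t for p, t in zip(playback, target)]
    return list(playback)

# Toy stand-ins for the DNNs: "interesting" means any sample above 0.5,
# and the separator simply keeps those samples.
detect = lambda frame: any(abs(s) > 0.5 for s in frame)
separate = lambda frame: [s if abs(s) > 0.5 else 0.0 for s in frame]

mixed = selective_hearing_mix([0.9, 0.1], [0.2, 0.2], detect, separate)
```

The `alpha` parameter (a hypothetical mixing weight) controls how prominently the extracted source is presented relative to the user's playback.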
For more information, please visit the ICSA website.
We look forward to seeing you at the 5th ICSA in Ilmenau!