Acoustic Simulation for AI Validation and Training

Training and validation of AI systems with simulated data and scenarios

The research activities in the field of acoustic simulation for AI validation and training aim to develop a simulation environment in which acoustic processes can be mapped metrologically, physically and numerically. This environment allows the parameterisable generation of data sets for the development, training and validation of acoustic AI models.

Data-based modelling of acoustic processes

Data is an essential raw material for the technological development of today's intelligent algorithms, digital applications and innovative products. Data not only helps to describe the world around us, but also to predict changes and challenges. It is therefore generated, collected and analysed in almost all areas of life and work.

In particular, the development, training and validation of AI models rely on data. This data must be of suitable quality and describe the part of reality in which the models will later be used. "Suitable quality" means not only knowing and controlling disturbing influences, but also describing (annotating) the processes under consideration correctly and in detail.

There are several aspects to modelling acoustic events or environments. An acoustic scene usually consists of one or more individual sound sources with different acoustic properties as well as intentional or unintentional background noise. On its way to the receiver, the radiated sound is influenced by sound propagation effects, e.g. level decrease with distance, diffraction at objects, refraction at air layers, and absorption, transmission and reflection at walls. A sound receiver then converts the sound into an analysable signal. This can be, for example, a human being who perceives his or her environment with the auditory system. Other, technical receivers are electroacoustic sensors, e.g. microphones, microphone arrays and accelerometers, which are specified as part of a sensor system for a specific application.
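The source-propagation-receiver chain described above can be sketched in code. The following is a deliberately minimal illustration that models only two of the mentioned propagation effects, the level decrease with distance and the travel-time delay; real environments add reflections, diffraction and absorption.

```python
import numpy as np

def propagate(signal, distance_m, fs=48000, c=343.0):
    """Toy free-field propagation: inverse-distance attenuation plus
    travel-time delay between source and receiver."""
    delay = int(round(distance_m / c * fs))  # propagation delay in samples
    gain = 1.0 / max(distance_m, 1.0)        # inverse-distance law (re 1 m)
    out = np.zeros(len(signal) + delay)
    out[delay:] = gain * signal
    return out

fs = 48000
t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 1000 * t)        # 1 kHz tone as the "source"
received = propagate(source, distance_m=10.0, fs=fs)
```

At 10 m the receiver sees the tone attenuated by a factor of ten and delayed by roughly 29 ms, which is exactly the kind of controlled, known relationship that simulated data provides for free.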

Depending on the use case, the following therefore applies to the development, training and validation of acoustic AI models: the more precisely the aforementioned parameters are recorded, the better an acoustic data set can serve the training and validation of a robust and powerful AI model.
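Such a detailed description of the parameters amounts to a structured annotation attached to each sample. The record below is purely hypothetical; the field names and values are illustrative and do not represent a fixed schema of any framework.

```python
# A hypothetical annotation record for one simulated sample; all field
# names and values are illustrative, not an actual schema.
sample_annotation = {
    "source": {"type": "engine", "level_db": 85.0, "position_m": [2.0, 0.5, 1.2]},
    "background": {"type": "factory_hum", "snr_db": 12.0},
    "propagation": {"room": "shop_floor", "rt60_s": 1.4, "distance_m": 3.5},
    "receiver": {"sensor": "mems_microphone", "fs_hz": 48000},
    "label": "bearing_fault",
}
```

In a simulated data set every one of these fields is known exactly, because it was set rather than estimated, which is what makes such data attractive for controlled training and validation.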

Parameterizable generation of acoustic data

With the help of AI models, complex tasks can be digitised and thus made accessible to a wide range of users. In the field of acoustics, for example, production lines can be monitored, acoustic events can be detected and classified, and acoustically optimised product designs can be realised. However, the underlying AI models require large amounts of data about the respective acoustic process and must be tested for robustness. Both requirements can be met with acoustically simulated, virtual data at much lower effort than with real measurement data.

The challenge is to model the three acoustic aspects of sound source, sound propagation and sound receiver, to virtualise them and to make them accessible in a parameterisable way. The models can be based on real measurement data as well as on physical laws or numerical simulations. In particular, combining these methods can lead to significant improvements, since unknown influences, such as certain material parameters, can be transferred into a model through suitable measurements.
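As a sketch of how a measurement can transfer an unknown material parameter into a physical model, consider estimating a broadband absorption coefficient from measured level differences and then reusing it in a simple reflection model. All numbers below are made up for illustration.

```python
import numpy as np

# Made-up measured sound levels before and after one reflection
# off the material under test (three repeated measurements).
incident_db = np.array([80.0, 78.5, 81.0])
reflected_db = np.array([74.0, 72.4, 75.1])

atten_db = incident_db - reflected_db          # attenuation per reflection
r_energy = 10.0 ** (-atten_db / 10.0)          # energy reflection factor
alpha = 1.0 - r_energy.mean()                  # estimated absorption coefficient

def reflected_level(level_db, alpha):
    """Physical model: sound level after one reflection off the material."""
    return level_db + 10.0 * np.log10(1.0 - alpha)
```

Once `alpha` has been identified from measurements, the physical model can predict reflected levels for arbitrary incident levels, including ones that were never measured.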

Acoustic simulation environment

The solutions and technologies developed at Fraunhofer IDMT aim at a framework that enables the acoustic simulation of sound environments and thus the automated generation of systematically prepared data sets for use in AI models. Depending on the application, the framework contains suitable models for sound sources, sound propagation and sound receivers as well as the necessary interfaces between them. The models can be parameterised either via a graphical user interface or script-based, ensuring convenient access to the framework.

Through the integration of sophisticated microphone recording and signal processing technologies, real sound sources, special room acoustic properties or complete acoustic environments can also be measured and integrated into the modelling chain. It is also possible to make digital or virtual sound fields audible through multi-channel loudspeaker systems. In this way, realistic acoustic environments can be created and made tangible and measurable, e.g. for perception tests or technical investigations.
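A common way to integrate a measured room into the modelling chain is convolution with a room impulse response (RIR): convolving a dry source recording with the RIR places the source in that room. The sketch below uses a synthetic RIR (direct sound plus one early reflection); in practice the RIR would be measured or simulated.

```python
import numpy as np

def auralize(dry, rir):
    """Render a dry signal in an acoustic environment described by a RIR."""
    return np.convolve(dry, rir)

fs = 48000
rng = np.random.default_rng(0)
dry = rng.standard_normal(fs)          # 1 s dry "source" signal
rir = np.zeros(fs // 2)
rir[0] = 1.0                           # direct sound
rir[int(0.02 * fs)] = 0.3              # one reflection arriving at 20 ms
wet = auralize(dry, rir)
```

Exchanging the RIR, e.g. for ones measured in different rooms, re-renders the same source in different environments without any new source recordings.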

The individual components of the acoustic environment can therefore exist either purely virtually or as a digital twin of real acoustic processes.

Acoustic signal processing

Depending on the application, different methods are available to achieve the development goals:

  • Development of microphones and processes for recording real acoustic sound sources and environments
  • Numerical simulation of acoustic processes of digital products and scenes (e.g. BEM/FEM)
  • Simulation of sound propagation including room acoustic conditions
  • Development of simulation environments for user-controlled rendering of acoustic scenes and environments
  • Parameterizable generation of simulated acoustic data sets for the development of AI models
  • Auralisation of virtual acoustic sound sources and environments
  • Generation of controlled physical sound fields through multi-channel loudspeaker systems or headphones
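For the last point in the list above, the simplest building block is amplitude panning over a loudspeaker pair: a virtual source position is mapped to per-channel gains. The constant-power variant below is a generic textbook technique, shown only as an illustration; multi-channel systems generalise this idea considerably.

```python
import numpy as np

def pan(signal, position):
    """Constant-power panning; position in [0, 1]:
    0 = only left speaker, 1 = only right speaker."""
    theta = position * np.pi / 2.0
    gains = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(gains, signal)     # shape (2, len(signal))

sig = np.ones(4)
channels = pan(sig, 0.5)               # centred source: equal gains
```

At the centre position both channels receive a gain of cos(pi/4), so the summed power per sample stays constant regardless of position, which keeps the perceived loudness stable as a virtual source moves.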


Research projects

  • AuWiS – Audiovisual recording and reproduction of traffic noise for the simulation of sound insulation measures
  • Acoustically Advanced Virtualization of Products and Production Processes
  • Making noise protection measures audible in urban living spaces
  • Virtual Street – Virtual Reality: Radio, Road, Vehicle and Driver

  • Metrological, physical and numerical recording of real and virtual acoustic processes
  • Modelling of acoustic events and environments
  • Generation of parameterisable acoustic data sets
  • Making synthesised sound fields audible, e.g. via SpatialSound Wave (SSW) technology



SpatialSound Wave for Professional Audio and Entertainment

Object-based audio production and reproduction for an authentic sound experience

Making virtual products audible in space

Equipped with state-of-the-art special rooms and laboratories, we enable a wide variety of acoustic measurements and investigations. Please feel free to contact us!

Publications

2023 Influence of Sensor Design on Bio-Inspired, Adaptive Acoustic Sensing
Khan, Ekram; Lenk, Claudia; Männchen, Andreas; Küller, Jan; Beer, Daniel; Gubbi, Vishal; Ivanov, Tzvetan; Ziegler, Martin
Conference Paper

2022 Webbasierte Auralisation von Lärmschutzmaßnahmen und Produktklang
Sladeczek, Christoph; Fiedler, Bernhard
Journal Article

2022 Luftreinigungsgeräte - akustische Anforderungen und Optimierungsmöglichkeiten
Beer, Daniel; Fritzsche, Paul; Fiedler, Bernhard; Rohlfing, Jens; Bay, Karlheinz; Troge, Jan; Millitzer, Jonathan; Tamm, Christoph
Conference Paper

2022 Web-based Auralization of Noise Protection Measures in Urban Living Spaces
Fiedler, Bernhard; Millitzer, Jonathan; Weigel, Christian; Mees, Valentin; Loos, Alexander; Lorenz, Wolfgang; Sladeczek, Christoph; Bös, Joachim
Conference Paper

2021 Simulating MEMS loudspeakers
Fritsch, Tobias; Küller, J.

2020 Sound Propagation in Microchannels
Küller, Jan; Zhykhar, Albert; Beer, Daniel
Conference Paper

2019 Hörbarmachung von Lärmschutzmaßnahmen
Fiedler, Bernhard; Sladeczek, Christoph
Conference Paper

2019 Modeling the perception of system errors in spherical microphone array auralizations
Nowak, J.; Fischer, G.
Journal Article
This list has been generated from the publication platform Fraunhofer-Publica