Environmental Sound Analysis

AI-based analysis of complex acoustic scenes and sounds

Using cutting-edge AI technologies, we are exploring the untapped potential of environmental sounds for applications in the fields of bioacoustics, noise monitoring, logistics and traffic monitoring, as well as security surveillance at construction sites and public events.

News and upcoming events

 

Conference

EUSIPCO 2024

On August 27, 2024, we will present our current research results in the field of bioacoustic monitoring at the European Signal Processing Conference EUSIPCO.

 

Journal Article

Human and Machine Performance in Counting Sound Classes in Single-Channel Soundscapes

The article was published in the December issue (Volume 71, Number 12) of the Journal of the AES (JAES).

 

Conference

Inter-Noise 2024

Our audio expert Jakob Abeßer will co-organize the technical session »Machine Learning for Acoustic Scene Understanding«, and we will present two papers from the field of acoustic monitoring.

Capturing information from environmental sounds

Sounds and noises surround us everywhere in our daily lives – as disturbing noise, as the soothing rustle of leaves or as the warning sound of sirens on the street. Humans possess not only the ability to distinguish between important and unimportant sounds but also to derive crucial information about their surroundings through sound interpretation based on their experiences.

"Machine listening" is a subfield of artificial intelligence that aims to replicate this human capability by automatically capturing and interpreting information from environmental sounds. This involves combining signal processing techniques and machine learning and developing algorithms for the analysis, source separation, and classification of music, speech, and environmental sounds. Source separation allows for the decomposition of complex acoustic scenes into their components, i.e., individual sound sources, while classification identifies sounds and assigns them to predefined sound sources or classes.

 

The developed technologies and solutions find applications in various areas:

  • Bioacoustics: Identifying animal species, studying behavioral patterns, or monitoring environmental impacts based on acoustic characteristics
  • Noise monitoring: Recording noise data, identifying noise sources, and planning noise protection measures
  • Logistics and traffic monitoring: Counting and classifying vehicles, analyzing traffic flows to improve emergency response planning and implement traffic management measures
  • Safety surveillance (construction sites, public events): Detecting hazardous situations, vandalism, or break-ins acoustically

Robust recognition, energy-efficient implementation

General challenges in the analysis of environmental sounds include robust recognition of individual sounds despite high acoustic variability within and between different sound classes. In simple terms, the algorithm must be able to recognize both a Dachshund and a Great Dane as dogs based on their barking. The strong overlap of multiple static and moving sound sources in complex scenarios further complicates reliable recognition.

When deploying AI algorithms in acoustic sensors, various microphone characteristics and room acoustics effects such as reverberation and reflections can make classification challenging.
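One widely used way to address such variability is to perturb the training data so that the model also sees reverberant and noisy versions of each sound. The sketch below illustrates this idea with simulated reverberation (convolution with an impulse response) and additive background noise at a chosen signal-to-noise ratio; the impulse response, noise signal, and SNR value are placeholder assumptions, not a fixed recipe.

    # Illustrative augmentation sketch: perturb training clips with simulated
    # reverberation and additive background noise to improve robustness against
    # room acoustics and recording conditions. All signals below are placeholders.
    import numpy as np
    from scipy.signal import fftconvolve

    def add_reverb(clean, impulse_response):
        """Convolve a dry signal with a room impulse response."""
        wet = fftconvolve(clean, impulse_response)[: len(clean)]
        return wet / (np.max(np.abs(wet)) + 1e-9)

    def add_noise(signal, noise, snr_db):
        """Mix background noise into a signal at a given signal-to-noise ratio."""
        noise = np.resize(noise, signal.shape)
        sig_power = np.mean(signal ** 2)
        noise_power = np.mean(noise ** 2) + 1e-12
        scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
        return signal + scale * noise

    rng = np.random.default_rng(0)
    clean = rng.standard_normal(22050)       # placeholder for a 1 s training clip
    rir = np.exp(-np.linspace(0, 8, 4000))   # toy exponential decay as a stand-in RIR
    noise = rng.standard_normal(22050)       # placeholder street-noise recording
    augmented = add_noise(add_reverb(clean, rir), noise, snr_db=10.0)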

Our research also addresses the question of how compact AI models can be trained with minimal training data for deployment on resource-constrained hardware. This is necessary because many deployment locations lack a sufficient or stable power supply, and long-term analyses may span several days or weeks. The models must therefore be small and efficient enough to run real-time analysis directly on the devices.
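As a rough illustration of how small such models can be, the sketch below counts the parameters of a hypothetical compact classifier and applies post-training dynamic quantization to its linear layer, a standard PyTorch technique for shrinking models for CPU-bound edge devices. The architecture shown is an assumption for the example, not one of our deployed models.

    # Sketch: estimate the footprint of a hypothetical compact classifier and
    # apply post-training dynamic quantization (int8 weights for linear layers).
    import torch
    import torch.nn as nn

    # Hypothetical compact model: a few thousand parameters, small enough for
    # an embedded acoustic sensor.
    model = nn.Sequential(
        nn.Conv1d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(32, 4),
    )
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params} (~{n_params * 4 / 1024:.1f} KiB as float32)")

    # Dynamic quantization stores linear-layer weights as int8, reducing model
    # size and typically speeding up CPU inference.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
    print(quantized)

Further compression techniques such as pruning or knowledge distillation can be combined with quantization when the target hardware is particularly constrained.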

Learning to understand sounds

Our aim is to use the technology for practically relevant issues such as the measurement and investigation of noise pollution, bio- and eco-acoustics as well as construction site and logistics monitoring.

Our basic research on efficient AI models, explainable AI, training with little data, and domain adaptation also has the potential to be applied across domains, for example in other audio research areas such as speech processing or music signal processing.

Additionally, we conduct research in the context of listening tests and citizen science applications involving participants to explore the subjective perception of noise and other perceptual sound attributes. The aim is to gain a better understanding of which sound sources in everyday situations have a particularly disruptive impact on our perception of noise (and, by extension, our health).

How we proceed

The following methods and procedures are used to analyze environmental noise:

  • Audio signal processing
  • Deep learning
  • Perception of sound signals

 

Research project

"StadtLärm" (CityNoise)

Development of a noise monitoring system to support urban noise protection tasks

 

Field test project

Open Innovation Lab

Noise monitoring field test project as part of the City of Gelsenkirchen's "Open Innovation Lab"

 

Research project

BioMonitor4CAP

Acoustic animal species recognition and classification for improved biodiversity monitoring in agriculture

 

Research project

Construction-sAIt

Multi-modal AI-driven technologies for automatic construction site monitoring

 

Research project

ISAD 2

Development of explainable and comprehensible deep-learning models to enable a better understanding of the structural and acoustic properties of sound sources (music or environmental sounds)

 

Research project

vera.ai

Sound event detection and acoustic scene recognition for the development of trustworthy AI solutions for detecting advanced disinformation techniques in the media sector

 

Research project

news-polygraph

Sound event detection and acoustic landmark detection for the development of a multi-modal, crowd-supported technology platform for disinformation analysis

 

Research project

NeuroSensEar

Sound event detection and acoustic scene recognition for bio-inspired acoustic sensor technology for highly efficient hearing aids

 

Research project

Sound Surv:AI:llance

Acoustic Burglary Monitoring