Fraunhofer IDMT designs the »ear of the car« as an important component of autonomous driving


Modern cars already feature a range of sophisticated systems such as remote-controlled parking, automatic lane-departure warning and drowsiness recognition. In the future, self-driving cars will also have auditory capabilities. Researchers at the Fraunhofer Institute for Digital Media Technology IDMT in Oldenburg, Germany, have now developed a prototype system capable of recognizing external noises such as sirens.

Modern cars are equipped with a host of advanced driver-assistance systems designed to reduce the burden behind the wheel. Features such as automatic parking and blind-spot monitoring employ cameras as well as radar and lidar technology to detect obstacles in the immediate vicinity of the vehicle. In other words, they provide vehicles with a rudimentary sense of sight. Cars have yet to be endowed with a sense of hearing. In the future, however, systems that can capture and identify external noises are set to play a key role – along with smart radar and camera sensors – in putting self-driving cars on the road. Researchers at Fraunhofer IDMT in Oldenburg are now developing AI-based systems that can recognize individual acoustic events and thus give vehicles auditory capability.

 

»Despite the huge potential of such applications, no autonomous vehicle has yet been equipped with a system capable of perceiving external noises,« says Danilo Hollosi, head of the Acoustic Event Recognition group at Fraunhofer IDMT in Oldenburg. »Such systems would be able to immediately recognize the siren of an approaching emergency vehicle, for example, so that the autonomous vehicle would then know to move over to one side of the highway and form an access lane for the rescue services.« There are numerous other scenarios in which an acoustic early-warning system can play a vital role – when an autonomous vehicle is turning into a pedestrian area or residential road where children are playing, for example, or for recognizing defects or dangerous situations such as a nail in a tire. In addition, such systems could also be used to monitor the condition of the vehicle or even double as an emergency telephone equipped with voice-recognition technology.

 

Noise analysis with AI-based algorithms

Developing a vehicle with auditory capability poses a number of challenges. Here, Fraunhofer IDMT can draw on specific project experience in automotive engineering as well as a wealth of interdisciplinary expertise. Key areas of investigation include signal capture based on optimal sensor positioning, signal preprocessing and enhancement, and the suppression of background noise. The system is first trained to recognize the acoustic signature of each relevant sound event, using machine-learning methods and acoustic libraries compiled by Fraunhofer IDMT. The institute has also written its own beamforming algorithms, which enable the system to dynamically locate moving sound sources such as the siren of an approaching emergency vehicle. The result is an intelligent sensor platform that recognizes specific sounds: AI-based algorithms developed at Fraunhofer IDMT distinguish the target noise from other, background noises. »We use machine learning,« Hollosi explains. »And to train the algorithms, we use a whole range of archived noises.« Fraunhofer IDMT and partners from industry have already created initial prototypes, which should reach market maturity by the middle of the coming decade.
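A minimal sketch of how such an acoustic event classifier could be put together is shown below, assuming MFCC features extracted with librosa and a scikit-learn model; the file names stand in for clips from an acoustic library and do not reflect Fraunhofer IDMT's actual implementation.

# Minimal sketch of an acoustic event classifier: MFCC features per clip,
# a random-forest model trained on a small labeled library. Feature choice,
# model choice and file names are illustrative assumptions, not the
# Fraunhofer IDMT implementation.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(path, sr=16000):
    """Load an audio clip and summarize it as a fixed-length MFCC feature vector."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    # Mean and standard deviation over time yield one vector per clip.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder acoustic library: labeled clips (1 = siren, 0 = background noise).
siren_clips = ["siren_01.wav", "siren_02.wav"]          # hypothetical files
background_clips = ["traffic_01.wav", "rain_01.wav"]    # hypothetical files

X = np.array([extract_features(p) for p in siren_clips + background_clips])
labels = np.array([1] * len(siren_clips) + [0] * len(background_clips))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, labels)

# At runtime, a recording from the exterior microphones is classified the same way.
print(clf.predict([extract_features("unknown_event.wav")]))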

 
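The localization side can be illustrated with a simple two-microphone direction-of-arrival estimate. The sketch below uses the generic GCC-PHAT method as a stand-in for Fraunhofer IDMT's own beamforming algorithms; microphone spacing and sample rate are assumptions.

# Minimal sketch of direction-of-arrival estimation with two exterior microphones.
# GCC-PHAT is used here as a generic stand-in; it is not the institute's algorithm.
import numpy as np

def gcc_phat_delay(sig, ref, sr):
    """Estimate how much later `sig` arrives than `ref`, in seconds (GCC-PHAT)."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12            # PHAT weighting: keep only the phase
    corr = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    corr = np.concatenate((corr[-max_shift:], corr[:max_shift + 1]))
    return (np.argmax(np.abs(corr)) - max_shift) / sr

def azimuth_from_delay(delay, mic_distance=0.2, speed_of_sound=343.0):
    """Convert an inter-microphone delay into an arrival angle in degrees."""
    ratio = np.clip(delay * speed_of_sound / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))

# Example with synthetic data: the same noise burst reaches microphone B
# five samples later than microphone A, i.e. the source sits off to one side.
sr = 16000
rng = np.random.default_rng(0)
burst = rng.standard_normal(sr)                        # 1 s of broadband noise
mic_a = burst
mic_b = np.concatenate((np.zeros(5), burst[:-5]))      # delayed copy at mic B
delay = gcc_phat_delay(mic_b, mic_a, sr)
print(f"delay: {delay * 1e6:.0f} microseconds, azimuth: {azimuth_from_delay(delay):.1f} degrees")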

The acoustic sensor system comprises microphones, a control unit and software. The microphones, installed in a protective casing, are mounted on the outside of the vehicle, where they capture airborne noise. The sensors transmit these audio data to a special control unit, which converts them into the relevant metadata. In many areas of use – such as acoustic monitoring in production environments, security applications, the care sector and consumer products – the raw audio data are converted into metadata directly by the smart sensors. In other words, the data are predominantly processed on the sensors themselves, which means the solution meets the highest security standards, even when used in facilities that require special protection.
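As a rough illustration of this on-sensor processing, the sketch below shows how a detected event might be reduced to compact metadata before anything leaves the sensor; the message format and the detector and localizer callables are assumptions for illustration, not the actual interface of the control unit.

# Minimal sketch: raw audio is processed on the sensor and only compact
# event metadata is passed on; the field names are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AcousticEvent:
    label: str          # e.g. "siren"
    confidence: float   # classifier score in [0, 1]
    azimuth_deg: float  # direction estimate, e.g. from beamforming
    timestamp: float    # time of detection (seconds since the epoch)

def process_frame(audio_frame, detector, localizer, threshold=0.5):
    """Run detection and localization on one audio frame; return metadata or None."""
    label, confidence = detector(audio_frame)   # hypothetical classifier callable
    if confidence < threshold:
        return None                             # nothing to report; raw audio is discarded
    azimuth = localizer(audio_frame)            # hypothetical beamforming callable
    event = AcousticEvent(label, float(confidence), float(azimuth), time.time())
    return json.dumps(asdict(event))            # compact metadata for the control unit

# Example with stand-in detector and localizer functions:
frame = [0.0] * 1024
print(process_frame(frame, lambda f: ("siren", 0.9), lambda f: 32.0))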

 

Hearing, Speech and Audio Technology HSA at the Fraunhofer Institute for Digital Media Technology IDMT in Oldenburg

The objective of the Hearing, Speech and Audio Technology Division (HSA) of the Fraunhofer Institute for Digital Media Technology IDMT is to translate scientific findings on auditory perception and human-technology interaction into technological applications. Its applied research priorities are the enhancement of sound quality and speech intelligibility, personalized audio reproduction, and acoustic speech recognition and event detection with the help of artificial intelligence. A further focus is the use of mobile neurotechnologies, which make it possible to record brain activity and use the resulting data outside the laboratory as well. Application fields include consumer electronics, transport, the automotive sector, industrial production, security, telecommunications and healthcare. Through scientific partnerships, Fraunhofer IDMT-HSA has close links with the Carl von Ossietzky University of Oldenburg, Jade University and other institutions engaged in hearing research in Oldenburg. Fraunhofer IDMT-HSA is a partner in the »Hearing4all« Cluster of Excellence.

For its further development, the Hearing, Speech and Audio Technology Division HSA receives funding from the Lower Saxony Ministry of Science and Culture (MWK) and the Volkswagen Foundation under the »Vorab« program.
