SAM - Cochlear Implant Stimulation based on Auditory Modeling


The global deaf population is roughly estimated at 0.1% of the total population. Deafness has various causes, but in a considerable proportion of cases the inner ear, i.e., the cochlea, is damaged. Nowadays, however, the peripheral auditory system can be bypassed and the auditory nerve fibers stimulated directly by means of cochlear implants (CIs). CIs have been the target of intensive research for over 50 years.

Even though CIs are the most successful neural prostheses to date, they restore hearing only partially. Recipients achieve an average of about 80% in speech recognition tests under quiet conditions (without lip-reading) by the end of the second year after implantation [1], but most of them remain unable to enjoy music or to distinguish among complex sounds, especially in noisy environments.

Interestingly, the signal-processing strategies of today's CI systems are still based on simple algorithms that can hardly mimic the complex functionality of the human auditory system. On the other hand, numerous biologically motivated models of cochlear processing (and of auditory structures beyond it) have been developed over the last 20 years. These bio-motivated models have several advantages over common filterbanks: they offer high spectrotemporal resolution, they often include adaptation mechanisms shown to have a positive effect on speech recognition [2], and they mimic, to some extent, the spectral delays [3] introduced by the traveling wave on the basilar membrane. The importance of the latter is demonstrated, e.g., in [4] and [5].
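The spectral (traveling-wave) delays mentioned above can be mimicked by delaying each filterbank channel as a function of its center frequency, with low-frequency (apical) channels delayed the most. The sketch below is purely illustrative; the delay formula and its constants are invented for the example and are not taken from [3]:

```python
import numpy as np

def traveling_wave_delay_ms(f_hz):
    """Illustrative only: low frequencies, mapped towards the cochlear
    apex, arrive several milliseconds later than high frequencies at
    the base. The constants are made up for this sketch."""
    return 1.0 + 10.0 * (200.0 / np.asarray(f_hz, dtype=float)) ** 0.5

def apply_channel_delays(channels, center_freqs, fs):
    """Shift each filterbank channel by its frequency-dependent delay."""
    out = np.zeros_like(channels)
    n = channels.shape[1]
    for i, (ch, fc) in enumerate(zip(channels, center_freqs)):
        d = int(round(traveling_wave_delay_ms(fc) * 1e-3 * fs))
        out[i, d:] = ch[: n - d]
    return out
```

With these toy constants, a 200 Hz channel is delayed by 11 ms while a 2 kHz channel is delayed by only about 4 ms, qualitatively reproducing the desynchronization studied in [5].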



[1]     J. Rouger, S. Lagleyre, B. Fraysse, S. Deneve, O. Deguine, and P. Barone, “Evidence that cochlear-implanted deaf patients are better multisensory integrators,” Proc. Natl. Acad. Sci. USA, vol. 104, no. 17, pp. 7295-7300, 2007.

[2]     M. Holmberg, D. Gelbart, and W. Hemmert, “Automatic Speech Recognition with an Adaptation Model Motivated by Auditory Processing,” IEEE Trans. Audio, Speech and Lang. Process., vol. 14, no. 1, pp. 43-49, 2006.

[3]     S. Greenberg, D. Poeppel, and T. Roberts, “A space-time theory of pitch and timbre based on cortical expansion of the cochlear traveling wave delay,” in Proc. 11th Int. Symp. on Hearing, Grantham, 1997.

[4]     T. Harczos, G. Szepannek, A. Kátai, and F. Klefenz, “An auditory model based vowel classification,” in Proc. IEEE Biomed. Circuits and Systems Conf., London, UK, pp. 69-72, 2006.

[5]     D. A. Taft, D. B. Grayden, and A. N. Burkitt, “Speech coding with traveling wave delays: Desynchronizing cochlear implant frequency bands with cochlea-like group delays,” Speech Communication, vol. 51, no. 11, pp. 1114-1123, 2009.

Project aims

Conventional speech processors for cochlear implants use mathematically based information-coding strategies. The Bio-inspired Computing group of the Fraunhofer Institute for Digital Media Technology IDMT has developed a new CI stimulation strategy called Stimulation based on Auditory Modeling (SAM). In the SAM approach, human auditory processing is modeled as a digital system to make speech coding more natural. The structure and function of the middle ear and cochlea (basilar membrane, inner and outer hair cells) are mimicked and physiologically parameterized, so that psychoacoustic phenomena are accounted for inherently. This approach leads to a digital representation of sounds that closely resembles the natural firing patterns of auditory nerve fibers.
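As a rough, hypothetical sketch of such a model-based front end (not the actual SAM implementation, whose stages are physiologically parameterized), a filterbank standing in for the basilar membrane can be followed by hair-cell-like rectification, compression, and envelope smoothing. All channel counts, cutoffs, and exponents below are made up for illustration:

```python
import numpy as np
from scipy.signal import butter, lfilter

def bandpass(sig, lo, hi, fs, order=2):
    """Butterworth band-pass standing in for one basilar-membrane section."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, sig)

def auditory_front_end(sig, fs, n_channels=8, fmin=200.0, fmax=6000.0):
    """Toy cochlear front end: a log-spaced (tonotopy-like) filterbank,
    half-wave rectification plus compression standing in for inner-hair-cell
    transduction, and low-pass smoothing of the resulting drive."""
    edges = np.geomspace(fmin, fmax, n_channels + 1)  # log-spaced bands
    b_lp, a_lp = butter(2, 400.0 / (fs / 2))          # crude smoothing filter
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(sig, lo, hi, fs)
        drive = np.maximum(band, 0.0) ** 0.3          # rectify and compress
        channels.append(lfilter(b_lp, a_lp, drive))
    return np.vstack(channels)
```

A 1 kHz tone then excites mainly the channel whose band contains 1 kHz, loosely analogous to the place code carried by auditory nerve fibers.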

The idea of utilizing auditory models in cochlear implant systems dates back to the early 2000s and has recently been praised in, e.g., Chapter 9.4, “Use of Auditory Models in Implant Design,” of [1].



[1]     R. Meddis, R. R. Fay, E. A. Lopez-Poveda, and A. N. Popper, Eds., Computational Models of the Auditory System. Boston, MA: Springer Science+Business Media LLC, 2010.

Technical information

The current version of SAM is based on the ear model of Baumgarte [1] extended by the model of the mechano-electrical transduction of Sumner et al. [2]. With the default settings, SAM produces truly sequential, stochastic stimulation patterns. For a comparison of stimulation patterns generated by the SAM and ACE (Advanced Combination Encoder) [3] strategies, see the figure on the left.
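The two properties mentioned above, strictly sequential (at most one electrode per time slot) and stochastic, can be illustrated with a toy selection rule. The function below is a didactic sketch only; the activity matrix and the probabilistic pulse choice are assumptions for the example, not SAM's actual pulse generation:

```python
import numpy as np

def sequential_stochastic_pulses(activity, rng=None):
    """Toy sequential, stochastic pulse selection: in every time slot at
    most one electrode fires (sequential), and the firing electrode is
    drawn at random with probability proportional to the momentary channel
    activity (stochastic). `activity` is a non-negative array of shape
    (channels, time slots)."""
    rng = np.random.default_rng(rng)
    n_ch, n_slots = activity.shape
    pulses = np.zeros_like(activity, dtype=int)
    for t in range(n_slots):
        total = activity[:, t].sum()
        if total <= 0:
            continue  # silence: no pulse in this slot
        ch = rng.choice(n_ch, p=activity[:, t] / total)
        pulses[ch, t] = 1
    return pulses
```

Drawing the electrode probabilistically rather than always taking the maximum spreads pulses across active channels over time, loosely imitating the stochastic firing of nerve fibers.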

An insight into the details of SAM was presented at the Bernstein R&D Workshop on Cochlear Implants (July 5-6, 2012, Garching, Germany). The corresponding poster can be downloaded here. More information on the inner workings of SAM can be found under DOI: 10.1109/TBCAS.2012.2219530.



[1]     F. Baumgarte, “A Physiological Ear Model for the Emulation of Masking,” ORL, vol. 61, pp. 294–304, 1999.

[2]     C. J. Sumner, E. A. Lopez-Poveda, L. P. O'Mard, and R. Meddis, “A revised model of the inner-hair cell and auditory-nerve complex,” J. Acoust. Soc. Am., vol. 111, no. 5, pp. 2178–2188, 2002.

[3]     A. E. Vandali, L. A. Whitford, K. L. Plant, and G. M. Clark, “Speech perception as a function of electrical stimulation rate: using the Nucleus 24 cochlear implant system,” Ear Hear., vol. 21, no. 6, pp. 608–624, 2000.

Tests and methods

In a pilot study, SAM and ACE stimuli were presented to CI users through 22-electrode Nucleus® Freedom™ Contour Advance™ implants. Driving currents, stimulation rates, and loudness growth functions were re-fitted for SAM based on the users' settings with their everyday strategies.

A relatively large number of tests was performed within five sessions per subject. Each test session (evaluating performance with SAM) started with the subject listening to parts of an audio book through SAM for several minutes. Sessions took about two hours, with sufficient breaks provided. All tests were based on pre-recorded sound data that had been processed by the SAM or ACE strategy and streamed digitally to the subject's implant.

The following were assessed:

Speech perception in noise (OLSA [1]): example;

Consonant discrimination:

Speech perception in reverberated environments:

  • office-like room with light reverberation and T60 = 1.0 s: example, IR,
  • hall with light reverberation and T60 = 1.5 s: example, IR,
  • train station with strong reverberation and T60 = 1.5 s: example, IR,
  • hallway with very dense reverberation and T60 = 1.5 s: example, IR;

Pitch discrimination (3-AFC):

Music perception (subjective quality rating):

Subjective quality rating.
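The 3-AFC (three-alternative forced choice) paradigm used above for pitch discrimination can be sketched with a simulated listener. The detection probability `p_detect` is a hypothetical parameter for the example, and chance level is one third:

```python
import random

def run_3afc_block(n_trials, p_detect, rng=None):
    """Toy 3-AFC pitch-discrimination block: each trial presents three
    intervals, one of which (chosen at random) contains the pitch change.
    The simulated listener detects the target with probability p_detect
    and otherwise guesses uniformly among the three intervals. Returns
    the fraction of correct trials (chance level = 1/3)."""
    rng = random.Random(rng)
    correct = 0
    for _ in range(n_trials):
        target = rng.randrange(3)             # interval with the change
        if rng.random() < p_detect:
            answer = target                   # heard the difference
        else:
            answer = rng.randrange(3)         # pure guess
        correct += (answer == target)
    return correct / n_trials
```

Scores are therefore interpreted against a 33% guessing floor rather than the 50% floor of a two-alternative task.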



[1]     HörTech gGmbH, OLSA: Oldenburger Satztest (Oldenburg Sentence Test), online resource (browsed on 2021-06-09).

[2]     A. Nickisch, D. Heber, and J. Burger-Gartner, Auditive Verarbeitungs- und Wahrnehmungsstörungen bei Schulkindern: Diagnostik und Therapie (Auditory Processing and Perception Disorders in School Children: Diagnostics and Therapy). Dortmund, Germany: Verlag Modernes Lernen VML, 2002.

[3]     A. Vandali, C. Sucher, D. Tsang, C. McKay, J. Chew, and H. McDermott, “Pitch ranking ability of cochlear implant recipients: A comparison of sound-processing strategies,” J. Acoust. Soc. Am., vol. 117, no. 5, pp. 3126-3138, 2005.

Preliminary results

The usefulness of SAM was demonstrated during 2009-2010 in a joint effort of Fraunhofer IDMT, the Cochlear-Implant Rehabilitationszentrum Thüringen (CIR), and Cochlear Europe Ltd. Even with little time (on the order of minutes) to habituate to the new signal-processing strategy, the tested CI users were able to hear and understand with SAM. Subjects with poor speech understanding in noise (with ACE) achieved better scores in speech-perception-in-noise and consonant-discrimination tests using the SAM strategy. Pitch-discrimination tests (employing pure tones and sung vowels [1]) showed a substantial benefit with SAM. Bimodally fitted subjects (CI plus a hearing aid in the contralateral ear) rated their hearing experience with SAM as much more natural and of better quality than with ACE.

Detailed results of the tests with CI users are planned to be published during 2013.

An overview of alternative evaluation methods and first outcomes were presented at the Bernstein R&D Workshop on Cochlear Implants (July 5-6, 2012, Garching, Germany). The corresponding poster can be downloaded here.



[1]     A. Vandali, C. Sucher, D. Tsang, C. McKay, J. Chew, and H. McDermott, “Pitch ranking ability of cochlear implant recipients: A comparison of sound-processing strategies,” J. Acoust. Soc. Am., vol. 117, no. 5, pp. 3126-3138, 2005.


A technique to make CI electrode stimuli audible for normal-hearing listeners was also developed during the project [1]. The input to the auralization algorithm is the stimulation pattern sent to the electrodes, so any sound-processing strategy can easily be evaluated. Using this technique, we compared SAM and ACE; the comparison clearly showed that the phase information discarded by ACE makes a substantial difference, even though no parallel stimulation of the electrodes is allowed [2]. Examples:

Please note that auralization is only a rough estimate of what might be perceived by a CI user.
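For illustration, a generic noise-vocoder style auralization can be sketched as follows: each electrode's pulse pattern is smoothed into an envelope that modulates band-limited noise at an assumed place frequency, and the bands are summed. This is a common vocoder scheme, not the actual algorithm of [1], and all bandwidths and cutoffs are illustrative:

```python
import numpy as np
from scipy.signal import butter, lfilter

def auralize(electrodogram, fs, center_freqs, rng=0):
    """Toy noise-vocoder auralization: each row of `electrodogram`
    (one electrode's pulse pattern, sampled at audio rate fs) is smoothed
    into an envelope and used to modulate band-limited noise centred at
    that electrode's assumed place frequency; the bands are summed and
    the result is peak-normalized."""
    n_ch, n = electrodogram.shape
    rng = np.random.default_rng(rng)
    b_env, a_env = butter(2, 200.0 / (fs / 2))  # envelope smoother
    out = np.zeros(n)
    for ch, fc in zip(range(n_ch), center_freqs):
        env = lfilter(b_env, a_env, electrodogram[ch])
        lo, hi = fc / 1.2, fc * 1.2             # illustrative bandwidth
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        carrier = lfilter(b, a, rng.standard_normal(n))
        out += env * carrier
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

Since the input is just the electrodogram, the same function could render stimulation patterns from any strategy for informal listening comparisons.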



[1]     A. Chilian, “Entwicklung von Tools und Methoden zur Evaluierung von Signalverarbeitungsstrategien für Cochlea-Implantate,” B.Sc. Thesis (in German), Institute of Biomedical Engineering and Informatics, Ilmenau University of Technology, Ilmenau, Germany, 2010.

[2]     A. Chilian, E. Braun, and T. Harczos, “Acoustic simulation of cochlear implant hearing,” in Proc. 3rd Int. Symp. Audit. Audiol. Res. (ISAAR 2011): Speech perception and auditory disorders, Nyborg, Denmark: The Danavox Jubilee Foundation, pp. 425–432, 2012.