15th Speech in Noise Workshop, 11-12 January 2024, Potsdam, Germany

P25, Session 1 (Thursday 11 January 2024, 15:35-18:00)
An auditory loudness model with hearing loss

Lars Bramsløw
Eriksholm Research Centre, Oticon A/S

Auditory models have gained acceptance and wide use for modelling and understanding hearing and hearing loss. They can be used for qualitative analysis and interpretation, but certainly also for quantitative analysis and simulation of sound perception for any sound source. An interesting new application of auditory models with hearing loss is the ‘closed-loop’ approach, whereby deep learning is used to derive the appropriate hearing loss compensation directly, by comparing a normal-hearing and a hearing-impaired branch of the auditory model.
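
To make the closed-loop idea concrete, the following is a minimal Python sketch (not taken from the abstract) in which a toy compensation network is optimised so that a hearing-impaired branch of a placeholder differentiable auditory model, fed the compensated signal, matches a normal-hearing branch fed the original signal. The network, the placeholder model, and all constants are illustrative assumptions, not AUDMOD or its API.

# Hypothetical closed-loop compensation sketch; every module here is a stand-in.
import torch
import torch.nn as nn

class CompensationNet(nn.Module):
    """Toy per-sample gain standing in for a real hearing-loss compensation DNN."""
    def __init__(self, n_samples):
        super().__init__()
        self.gain = nn.Parameter(torch.ones(n_samples))

    def forward(self, x):
        return self.gain * x

def auditory_model(x, hearing_loss_db):
    """Placeholder differentiable auditory model: attenuation plus compression.
    A real model such as AUDMOD would instead return loudness per critical band."""
    attenuated = x * 10.0 ** (-hearing_loss_db / 20.0)
    return torch.tanh(attenuated)

signal = torch.randn(16000)                    # 1 s of audio at 16 kHz, example input
net = CompensationNet(signal.numel())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(200):
    opt.zero_grad()
    nh_out = auditory_model(signal, 0.0)           # normal-hearing branch, original signal
    hi_out = auditory_model(net(signal), 40.0)     # hearing-impaired branch, compensated signal
    loss = torch.mean((nh_out - hi_out) ** 2)      # make the two branches agree
    loss.backward()                                # requires a differentiable auditory model
    opt.step()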

The ‘AUDMOD’ model presented here predicts masking patterns, specific loudness, and total loudness for any stimulus and hearing loss, as specified by a standard audiogram. It is a strictly perceptual/behavioural model, based on the Moore & Glasberg roex-filter model combined with the Zwicker & Fastl loudness model. The auditory filter bandwidth depends on both signal level and hearing loss, both of which lead to increased upward spread of masking. The computational requirements are very low and the model is differentiable, both of which make it suitable for deep learning applications, including closed-loop hearing loss compensation. Furthermore, the model can be applied in a straightforward manner to the analysis and interpretation of hearing aid functionality. In the future, AUDMOD will be added to the Auditory Modelling Toolbox, https://www.amtoolbox.org/.
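
For illustration only, the sketch below walks through the generic excitation-to-loudness pipeline described above: roex filtering whose bandwidth broadens with level and hearing loss, an excitation pattern, specific loudness, and integration over the ERB-rate scale to a total loudness. The filter-broadening rule, loudness constants, and function names are simplified textbook-style assumptions, not AUDMOD's actual parameters.

# Simplified excitation-pattern-to-loudness pipeline; constants are illustrative only.
import numpy as np

def erb_hz(fc_hz):
    """Equivalent rectangular bandwidth of the auditory filter (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def erb_number(f_hz):
    """ERB-rate scale in Cams (Glasberg & Moore, 1990), used as the integration axis."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def roex_weight(f_hz, fc_hz, level_db, hl_db):
    """Rounded-exponential filter weight; the slope p is reduced (filter broadened)
    with increasing level and hearing loss -- an illustrative rule, not AUDMOD's."""
    p = 4.0 * fc_hz / erb_hz(fc_hz)
    p = p / (1.0 + 0.01 * max(level_db - 51.0, 0.0) + 0.02 * hl_db)
    g = np.abs(f_hz - fc_hz) / fc_hz
    return (1.0 + p * g) * np.exp(-p * g)

def total_loudness(freqs_hz, levels_db, audiogram, centres_hz):
    """Excitation pattern -> specific loudness -> total loudness (arbitrary units)."""
    khz = sorted(audiogram)
    hl = np.interp(centres_hz, khz, [audiogram[k] for k in khz])   # audiogram at filter centres
    excitation = np.zeros_like(centres_hz)
    for f, lvl in zip(freqs_hz, levels_db):
        w = np.array([roex_weight(f, fc, lvl, h) for fc, h in zip(centres_hz, hl)])
        excitation += w * 10.0 ** (lvl / 10.0)
    excitation_db = 10.0 * np.log10(np.maximum(excitation, 1e-12))
    effective_db = np.maximum(excitation_db - hl, 0.0)                 # crude elevated-threshold effect
    specific_loudness = 0.08 * 10.0 ** (0.23 * effective_db / 10.0)    # compressive loudness growth
    axis = erb_number(centres_hz)                                      # integrate over ERB-rate
    return np.sum(0.5 * (specific_loudness[:-1] + specific_loudness[1:]) * np.diff(axis))

# Example: a 1 kHz tone at 70 dB SPL with a sloping high-frequency hearing loss
audiogram = {250: 10.0, 500: 15.0, 1000: 25.0, 2000: 40.0, 4000: 55.0, 8000: 60.0}
centres = np.geomspace(250.0, 8000.0, 64)
print(total_loudness([1000.0], [70.0], audiogram, centres))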

The model output is compared to classical psychoacoustic data and to a branch of the Glasberg & Moore model. Furthermore, a few application examples are shown, relating to hearing aid signal processing as well as speech perception, for both normal hearing and hearing loss.
