Noise Reduction in Hearing Aids

Current Challenges

Hearing in noise continues to be one of the biggest challenges for hearing aid users. Hearing aids predominantly use single-microphone noise reduction, which estimates the presence or absence of noise from a single input signal and reduces hearing aid gain accordingly, without affecting speech if present (Brons, 2013). This approach starts from an environmental classification of the input sound into speech, noise, or speech in noise, uses it to estimate the actual signal-to-noise ratio (SNR) in each channel, and then adjusts the gain based on the estimated SNR. To the listener, this manifests as a trade-off between noise reduction and speech quality.
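
The per-channel logic described above can be sketched in a few lines. This is a minimal illustration, not any manufacturer's implementation: the function name, the Wiener-style gain rule, and the 12 dB attenuation cap are my own assumptions for the sake of the example.

```python
import numpy as np

def snr_based_gain(noisy_power, noise_power, max_atten_db=12.0):
    """Per-channel gain from estimated SNR: attenuate channels where
    noise dominates, leave speech-dominated channels nearly untouched."""
    # Estimated speech-to-noise ratio in each channel (floored to stay positive).
    snr = np.maximum(noisy_power / np.maximum(noise_power, 1e-12) - 1.0, 1e-3)
    gain = snr / (1.0 + snr)             # Wiener-style gain: ~1 at high SNR
    floor = 10 ** (-max_atten_db / 20)   # cap attenuation to limit artifacts
    return np.maximum(gain, floor)
```

A channel with high estimated SNR receives a gain near 1 (speech passes through), while a noise-dominated channel is attenuated, but only down to the configured floor; capping the attenuation is one common way to trade residual noise against speech distortion.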

The goal of noise reduction is to suppress unwanted noise while retaining the speech that is of interest to the listener. Determining what counts as unwanted noise and what counts as speech of interest at any given moment is challenging, because human attention is limited in capacity. Furthermore, noisy listening environments often involve many people talking (multi-talker speech), either in turn or simultaneously, and the talker the listener is attending to (i.e., the one of interest) changes over time. Speaking in a noisy environment also makes people talk louder and at a higher pitch in an attempt to make themselves heard (the Lombard effect). The instantaneous SNR changes with time as well, and sounds often appear in, fade from, or leave the soundscape altogether. All of this makes it difficult to simulate a realistic environment when testing devices. To simplify the test setup, several assumptions are routinely made: that the source (the talker of interest, labeled the ‘target’) is in front of the listener (or in the frontal hemifield), that the target talker does not change, that the SNR does not change, that the source is static, and that the noise environment remains stable. To isolate the effect of the algorithm of interest, measurements are typically made with other algorithms turned off.

These test scenarios and device settings are far from realistic and do not reflect how hearing aid users actually experience hearing in noise with their devices. For instance, compression has been shown to interact with noise reduction, so it should be considered in the evaluation of noise reduction algorithms (Brons et al., 2015). User settings for listening in noise may also differ depending on whether sound quality or speech intelligibility is prioritized, given the trade-off between noise reduction and speech distortion: settings that are preferable in terms of sound quality may be sub-optimal for speech understanding. Furthermore, hearing-impaired users may understand the term “speech distortion” differently than those with normal hearing (Huber et al., 2018). In fact, it is known that single-microphone noise reduction does not improve intelligibility for speech in noise, though it may provide benefit for listening effort and comfort (Brons, 2013). Even so, users prefer noise reduction in hearing aids to be on, despite the lack of evidence that it increases speech intelligibility (Luts et al., 2010; Magnusson et al., 2013; Desjardins and Doherty, 2014; Wu et al., 2019).

A machine learning approach

Recently, researchers have been exploring the use of machine learning to improve noise reduction performance in hearing aids. Deep neural networks (DNNs) are already employed in a commercially available hearing aid, the Oticon More (Andersen et al., 2021); however, for proprietary reasons, little is known about the network's actual architecture or implementation. Healy et al. (2023) reported improved speech intelligibility for both normal-hearing and hearing-impaired listeners with a novel deep-learning-based noise reduction algorithm, with intelligibility improvements of 46 to 58% for hearing-impaired listeners that generalized to novel talkers.
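
Deep-learning denoisers of this kind are commonly trained to estimate a time-frequency mask that is applied to the noisy spectrogram; the ideal ratio mask (IRM) is one widely used training target. The sketch below (plain NumPy, my own illustrative function names, not the published implementation) shows what the network is being taught to predict:

```python
import numpy as np

def ideal_ratio_mask(speech_power, noise_power):
    """IRM: for each time-frequency unit, the fraction of energy that
    belongs to speech. Values near 1 keep the unit; near 0 suppress it."""
    return speech_power / (speech_power + noise_power + 1e-12)

def apply_mask(noisy_stft, mask):
    # The mask scales magnitudes; the noisy phase is reused for resynthesis.
    return noisy_stft * mask
```

At training time the clean and noise signals are known, so the IRM can be computed exactly; at test time the DNN predicts the mask from the noisy input alone, which is where the hard generalization problem (novel talkers, novel noises) lives.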

A neural network with three hidden layers. Image: Loxaxs, CC0, via Wikimedia Commons

A comparison between current hearing aids and DNN-based enhancement approaches has shown an advantage for the latter. Specifically, objective metrics showed that hearing aids worsened performance relative to a “bypass” condition (with all hearing aid algorithms deactivated except feedback cancellation and linear amplification of 20 dB), owing to the difficulty of estimating the direction of arrival of the target speech. The DNNs (run offline), by contrast, were unaffected by the nonstationarity of the noise and competing talkers and improved objective metrics of intelligibility and speech separation quality (MSTOI and SI-SDR), outperforming current hearing aids (Gusó et al., 2023).
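
Of the two metrics, SI-SDR (scale-invariant signal-to-distortion ratio) has a compact closed form that is worth seeing: the estimate is projected onto the reference, and the metric is the energy ratio of that projection to the residual, in dB. A minimal sketch:

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant SDR in dB: higher means the estimate is closer
    (up to an overall scale factor) to the clean reference signal."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Project the estimate onto the reference to find the "target" part.
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    error = estimate - target
    return 10 * np.log10(np.dot(target, target) / np.dot(error, error))
```

Because of the projection step, rescaling the estimate leaves the score unchanged, which is why the metric is a fairer comparison across systems with different output levels (such as hearing aids applying amplification versus offline DNNs).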

What does the future hold?

The main roadblock to implementing DNNs on hearing devices is computational complexity. Networks large enough to achieve considerable noise reduction are out of reach for current hearing aids, whose small size limits processing power and battery capacity. Consequently, much of the processing is offloaded to more powerful devices such as a smartphone. In the future, with work on DNN model compression and more efficient algorithms, we may see DNN processing run directly on hearing devices (Diehl et al., 2023; Healy et al., 2023). DNN performance may be further improved by combining it with the traditional noise reduction approach of beamforming and by using the user’s preferences to guide a more personalized approach to noise reduction (Diehl et al., 2023). Work toward this end is ongoing (cf. the Clarity Challenge).
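
The “traditional” beamforming mentioned above exploits multiple microphones rather than a single input. The simplest variant, delay-and-sum, is easy to sketch (integer-sample delays only, my own illustrative function name, for intuition rather than a deployable design):

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Align each microphone by its steering delay, then average.
    Sound from the steered direction adds coherently; uncorrelated
    off-axis noise adds incoherently and is attenuated by roughly
    10*log10(n_mics) dB in power."""
    aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, delays_samples)]
    return np.mean(aligned, axis=0)
```

A DNN can then operate on the beamformer's output, letting the spatial front end remove directional interference cheaply before the (more expensive) network handles what remains; this division of labor is one motivation for combining the two approaches.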

Work is also ongoing to use ecologically valid environments for device as well as listener testing, with the use of augmented and virtual reality (Keidser et al., 2020; Korzepa et al., 2018; Mehra et al., 2020). This includes research on decoding listener attention (auditory attention decoding), the use of wearable sensor technology to obtain indices of listener attention, effort, fatigue, stress, and overall cognitive state (cf. Fuglsang et al., 2020; Geirnaert et al., 2021), and open hearing aid platforms (e.g., openMHA) that enable devices to be developed and evaluated in real time with low processing delays (Herzke et al., 2017; Kayser et al., 2022).

References

  1. Brons, I. (2013). Perceptual evaluation of noise reduction in hearing aids. Universiteit van Amsterdam [Host].
  2. Brons, I., Houben, R. and Dreschler, W.A. (2015). Acoustical and Perceptual Comparison of Noise Reduction and Compression in Hearing Aids. Journal of Speech, Language, and Hearing Research 58, no. 4: 1363–76. https://doi.org/10.1044/2015_JSLHR-H-14-0347.
  3. Huber, R., Bisitz, T., Gerkmann, T., Kiessling, J., Meister, H., & Kollmeier, B. (2018). Comparison of single-microphone noise reduction schemes: Can hearing impaired listeners tell the difference? International Journal of Audiology, 57(sup3), S55–S61. https://doi.org/10.1080/14992027.2017.1279758
  4. Luts, H., Eneman, K., Wouters, J., Schulte, M., Vormann, M., Buechler, M., Dillier, N., Houben, R., Dreschler, W. A., Froehlich, M., Puder, H., Grimm, G., Hohmann, V., Leijon, A., Lombard, A., Mauler, D., & Spriet, A. (2010). Multicenter evaluation of signal enhancement algorithms for hearing aids. The Journal of the Acoustical Society of America, 127(3), 1491–1505. https://doi.org/10.1121/1.3299168
  5. Magnusson, L., Claesson, A., Persson, M., & Tengstrand, T. (2013). Speech recognition in noise using bilateral open-fit hearing aids: The limited benefit of directional microphones and noise reduction. International Journal of Audiology, 52, 29–36.
  6. Desjardins, J. L., & Doherty, K. A. (2014). The Effect of Hearing Aid Noise Reduction on Listening Effort in Hearing-Impaired Adults. Ear and Hearing, 35, 600–610.
  7. Wu, Y.-H., Stangl, E., Chipara, O., Hasan, S. S., DeVries, S., & Oleson, J. (2019). Efficacy and Effectiveness of Advanced Hearing Aid Directional and Noise Reduction Technologies for Older Adults With Mild to Moderate Hearing Loss. Ear and Hearing, 40(4), 805–822. https://doi.org/10.1097/AUD.0000000000000672
  8. Andersen, A. H., Santurette, S., Pedersen, M. S., Alickovic, E., Fiedler, L., Jensen, J., & Behrens, T. (2021). Creating Clarity in Noisy Environments by Using Deep Learning in Hearing Aids. Seminars in Hearing, 42(03), 260–281. https://doi.org/10.1055/s-0041-1735134
  9. Healy, E. W., Johnson, E. M., Pandey, A., & Wang, D. (2023). Progress made in the efficacy and viability of deep-learning-based noise reduction. The Journal of the Acoustical Society of America, 153(5), 2751-2751.
  10. Gusó, E., Luberadzka, J., Baig, M., Saraç, U. S., & Serra, X. (2023). An objective evaluation of Hearing Aids and DNN-based speech enhancement in complex acoustic scenes. arXiv preprint arXiv:2307.12888.
  11. Diehl, P. U., Singer, Y., Zilly, H., Schönfeld, U., Meyer-Rachner, P., Berry, M., … & Hofmann, V. M. (2023). Restoring speech intelligibility for hearing aid users with deep learning. Scientific Reports, 13(1), 2719.
  12. Keidser, G., Naylor, G., Brungart, D. S., Caduff, A., Campos, J., Carlile, S., Carpenter, M. G., Grimm, G., Hohmann, V., Holube, I., Launer, S., Lunner, T., Mehra, R., Rapport, F., Slaney, M., & Smeds, K. (2020). The Quest for Ecological Validity in Hearing Science: What It Is, Why It Matters, and How to Advance It. Ear and Hearing, 41, 5S. https://doi.org/10.1097/AUD.0000000000000944
  13. Korzepa, M. J., Johansen, B., Petersen, M. K., Larsen, J., Larsen, J. E., & Pontoppidan, N. H. (n.d.). Learning preferences and soundscapes for augmented hearing. 7.
  14. Mehra, R., Brimijoin, O., Robinson, P., & Lunner, T. (2020). Potential of Augmented Reality Platforms to Improve Individual Hearing Aids and to Support More Ecologically Valid Research. Ear and Hearing, 41, 140S. https://doi.org/10.1097/AUD.0000000000000961
  15. Fuglsang, S. A., Märcher-Rørsted, J., Dau, T., & Hjortkjær, J. (2020). Effects of Sensorineural Hearing Loss on Cortical Synchronization to Competing Speech during Selective Attention. The Journal of Neuroscience, 40(12), 2562–2572. https://doi.org/10.1523/JNEUROSCI.1936-19.2020
  16. Geirnaert, S., Vandecappelle, S., Alickovic, E., de Cheveigné, A., Lalor, E., Meyer, B. T., Miran, S., Francart, T., & Bertrand, A. (2021). EEG-based Auditory Attention Decoding: Towards Neuro-Steered Hearing Devices. arXiv:2008.04569 [Eess]. http://arxiv.org/abs/2008.04569
  17. Herzke, T., Kayser, H., Loshaj, F., Grimm, G., & Hohmann, V. (2017, July). Open signal processing software platform for hearing aid research (openMHA). In Proceedings of the Linux Audio Conference (pp. 35-42).
  18. Kayser, H., Herzke, T., Maanen, P., Zimmermann, M., Grimm, G., & Hohmann, V. (2022). Open community platform for hearing aid algorithm research: open Master Hearing Aid (openMHA). SoftwareX, 17, 100953.

Copyright © 2023 Vidya Krull. All Rights Reserved.

About Vidya Krull
I am a scientist and an audiologist, curious about all things related to hearing. I am also a self-taught artist.
