How Speech in Noise technology is helping the hearing impaired navigate complex sound environments

Speech in noise (SIN) is a vital area of hearing technology, aimed at improving users’ understanding of speech in complex noise situations. In people without hearing loss, the brain automatically builds a map of the sound sources around them, choosing which sounds to listen to and which to filter out. People with hearing loss, however, experience a very flat sound picture lacking these dimensions, making it hard for them to distinguish between sounds and focus on what they want to hear.

The function of hearing technology is to help people focus on what they want to listen to, and also to manage noise levels – not to eliminate background noise altogether, as this would result in an unnatural listening experience, but to give a nuanced listening experience based on each individual’s context.

BIHIMA spoke to three representatives from its member manufacturers about their approach to this important area of research and their expectations for the future.


Thomas Behrens, Director of Oticon’s Centre for Applied Audiology Research

“Speech understanding is a cognitive process, so the brain needs access to speech sounds that are as clear as possible in order to make sense of the sound effectively. This also means we need to support selective attention, the brain’s natural noise suppression system, which can provide up to 20 dB of suppression of competing information. Compare this with hearing aids, which typically offer up to 5 dB of noise suppression. The most recent insight into selective attention is that it works best when provided with the full sound picture. So hearing aids have to ensure that the sound coming out of the device is both as clear as possible (fast and effective noise reduction) and as audible as possible.

“We have also recently learned that most traditional feedback management systems cut up to 10 dB of gain in dynamic situations (objects or reflective surfaces close to the hearing aid), which can occur for up to 50% of a typical day for many people. So it’s important that the hearing aid fitting is verified to provide the prescribed amplification, in static as well as dynamic situations. Hearing aids should further be documented to provide speech in noise benefits in realistic listening environments, not only in terms of the direct improvement in speech understanding, but also in terms of the cognitive benefit – for instance, in reducing listening effort and improving recall.

“Oticon remains committed to delivering such improvements, and we believe that improved speech in noise ability remains the most important benefit for the user and one of the perpetual challenges in hearing aid design. We have ongoing research into artificial intelligence (AI) signal processing and deep learning algorithms, and have research prototypes that can make hearing aids even more beneficial in difficult listening scenarios. As soon as chip designs in hearing aids evolve to allow the use of such deep learning algorithms, we want to provide these new benefits to our users.”


Oliver Townend, Senior Audiology Expert at Widex

“The challenge of increasing speech intelligibility in noisy environments is a classic one, and it remains the single most important need to solve for people with hearing loss. There are important aspects to consider regarding speech and noise in modern hearing aids. Firstly, the listening environment and the actual speech and noise levels must be identified and tracked efficiently, so speech is always clearly detected and amplified while noise is efficiently attenuated. A trademark of Widex is to apply speech enhancement and noise reduction guided not only by the characteristics of the environment but also by the individual hearing loss. The underlying hypothesis is that the audibility of the speech signal is always key, and nothing is more important than that. Therefore, the Widex platform anchors the real-time noise reduction processing to the in-situ thresholds measured during the initial fitting of the hearing aid.

“An even more important factor for Widex is that the sound must always be natural and of the highest quality. This, we believe, provides the best premise for speech intelligibility and effortless listening. Therefore, the integrated signal processing on the Widex platform applies slow-acting, fast-acting and ultra-fast-acting algorithms in a balanced and controlled manner, so sound quality remains intact while speech over noise is in focus.

“A last important aspect of the best speech in noise processing is the individual preferences and intents of the user – with AI-powered applications, users can now personalize their listening experience in the moment to achieve a more individualized focus on the speech and the sounds they want.

“Another important insight from recent research is that users rarely find themselves in negative speech-to-noise ratios, so technologies must be optimized to perform best in these more relevant and realistic listening conditions. This insight calls for a renewed focus on the realistic listening needs of people with hearing loss, and underlines that we need to innovate not only solutions but also outcome measures that are more representative of real life.

“Through intelligent user engagement tools, we will be able to transfer better solutions between users who experience similar challenges in well-defined listening environments. We have data showing that the preferred sound setting can vary quite substantially between users, depending on the environment and the user’s momentary intent in that environment. The future holds more of these intelligent AI systems, which combine insights about the user, the situation and the momentary listening intent for better and more efficient hearing solutions.”
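For readers unfamiliar with the term, a speech-to-noise (signal-to-noise) ratio compares the power of speech to the power of competing noise, expressed in decibels; a positive ratio means the speech is louder than the noise, a negative ratio means it is quieter. A minimal sketch of the arithmetic (illustrative only; the function name and values here are hypothetical, not any manufacturer’s implementation):

```python
import math

def snr_db(speech_power: float, noise_power: float) -> float:
    """Speech-to-noise ratio in decibels: 10 * log10(P_speech / P_noise)."""
    return 10 * math.log10(speech_power / noise_power)

# Speech twice as powerful as the noise: a positive SNR of about +3 dB
print(round(snr_db(2.0, 1.0), 1))   # 3.0
# Noise twice as powerful as the speech: a negative SNR of about -3 dB
print(round(snr_db(1.0, 2.0), 1))   # -3.0
```

The point made above is that everyday listening mostly happens at positive ratios like the first case, so benefit should be demonstrated there rather than only in artificially noisy negative-SNR test conditions.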


Erik Høydal, Senior Audiology Expert at Signia

“With speech in noise, we believe it is important to understand the situation from the perspective of the individual user. In doing this, we need to respect the underlying principles of good communication. This, of course, implies understanding speech, but it also implies that the listener feels confident, present and aware of the environment and other speakers, and is thus able to engage in communication.

“Technically, this has resulted in us applying powerful signal processing tailored specifically for situations where the hearing loss is preventing the listener from hearing the speech details. This means that the hearing aid must be able to analyse the situation very precisely in order to predict the real need in every situation. It needs to be able to answer questions such as: what kind of noise surrounds the wearer? Are they listening or speaking? Is it a loud, crowded place, or a quiet dinner in a cosy restaurant?

“One of our latest platforms analyses the particular situation with a network of multiple sensors: the data gathered results in a holistic view of the immediate situation the listener is immersed in. This diverse data ultimately leads to greater granularity in the understanding of the situation, and therefore to improved listening benefit.

“A significant component in speech in noise management is the ability to analyse and process the wearer’s own voice intelligently. The perception of one’s own voice is a well-known source of irritation for many hearing aid wearers. The Signia platform offers a dedicated processing scheme for own voice; this system operates independently of the other processes and thereby preserves and protects the perception of own voice without disturbing the user.

“Also, the hearing aid industry has developed a new assessment tool called ecological momentary assessment. This has enabled us to get closer to the real-life experiences and performance of our users. It is now possible to get more granular information about the listening and hearing experiences of our users, their emotional state in a situation, the importance of the situation, and the actual outcome in the moment. This understanding of what happens in real life, where speech is hard to hear and noise is disturbing, represents a powerful tool for improving the hearing aid solutions of the future. In the not-so-distant future, solutions will no longer be designed for average people hearing in average situations; we will develop hearing aids that can deliver sound to the individual based on that particular person’s needs and wishes.”



BIHIMA represents the hearing instrument manufacturers of Britain and Ireland, working in partnership with other professional, trade, regulatory and consumer organisations within the health care and charitable sectors. We raise consumer awareness about the latest hearing technology, and aim to influence government and policy makers to improve the lives of people with hearing difficulties.


This article was featured in Audio Infos Magazine, in the September 2020 issue.