Manufacturers from BIHIMA outline how Artificial Intelligence (AI) is shaping the latest hearing technology

Artificial Intelligence (AI), often in the form of machine learning, where a system learns from data rather than being explicitly programmed, has made its way into many areas of our lives, from recommending which TV series to watch next to finding the fastest route on a map. It is now also an important technology in the design of hearing instruments, although the industry is still only scratching the surface of what can be achieved. Already, however, AI has delivered leaps forward in areas such as signal processing and fall detection.

Oliver Townend, Senior Audiology Expert at Widex, explains how the use of AI has evolved in his company’s technology, starting with the desire to ensure the experience of wearers is as good in the real world as it is in clinic. “We knew that many hearing aid users were quite happy with the first fit of their hearing instruments during their first encounter with the hearing care professional (HCP). But we also knew that some users experienced a lack of comfort, intelligibility and sound quality once they were out in real life, hearing sounds in a new way for the first time, because listening to sounds in the clinic and in real life are two very different things. This can lead to a series of fine-tuning visits to the hearing care professional, often with good results, but sometimes a good solution remains elusive.

“It was these instances of lack of success that inspired us to develop a consumer application based on AI, empowering the consumer to make small adjustments, in the moment, to sound that did not seem quite right. This entailed developing an intuitive user interface embedded in the hearing aid app that invites the user to engage and fine-tune on their own.”

The current picture of AI

“The Widex portfolio is now fueled with AI,” Oliver continues. “We have used it to build what we call a preference sensor inside the hearing aid. The idea is to run an ultrafast AI preference optimizer that can optimize and personalize the sound for the individual in the moment. In real life, what an individual wants to hear varies depending on the situation: you might want to focus on the conversation around you, or lower the ambient noise level so you can concentrate on the task at hand. In short, how, who or what an individual wants to hear is defined as their listening intention. The intent is incredibly important, and cannot be deduced from the situation alone. The true challenge is uncovering the intent in a situation and then setting up the hearing aids to fulfill it. This is where AI comes into play.”
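The in-the-moment preference optimization Oliver describes can be illustrated with a deliberately simplified sketch. This is not Widex’s actual algorithm; it only shows the general idea of narrowing in on a preferred setting purely from pairwise “which sounds better?” answers, assuming the wearer’s preference is single-peaked over one hypothetical parameter (here, a gain trim in dB):

```python
# Simplified illustration (not Widex's optimizer): home in on a preferred
# setting using only pairwise "A or B?" preference answers.

def preferred_setting(prefers_a, low, high, rounds=30):
    """Ternary-style search over one parameter, driven by comparisons.

    prefers_a(a, b) returns True if setting `a` sounds better to the
    wearer than setting `b`. Assumes preference is single-peaked.
    """
    for _ in range(rounds):
        a = low + (high - low) / 3
        b = high - (high - low) / 3
        if prefers_a(a, b):
            high = b  # the preference peak cannot lie above b
        else:
            low = a   # the preference peak cannot lie below a
    return (low + high) / 2

# Hypothetical wearer whose ideal gain trim is +3 dB: they always prefer
# the candidate closer to +3.
def simulated_wearer(a, b):
    return abs(a - 3.0) <= abs(b - 3.0)

best = preferred_setting(simulated_wearer, -10.0, 10.0)
```

Each comparison discards a third of the remaining range, so a handful of A/B answers pins down a good setting quickly, which is why this style of interaction suits a consumer app.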

Erik Harry Høydal, Senior Audiology Expert at Signia, also sees the development of AI as a journey of understanding on which the industry still has a long way to go. “Our first steps were ‘intelligent’ volume controls learning from user interactions back in 2006. The approach was quickly extended to level-dependent and frequency-specific learning, which increased the precision of individual fittings. More recently, we were able to train our hearing aids to recognize the wearer’s voice, enabling the device to reduce gain only for the individual’s own voice.”
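The learning volume controls Erik describes can be sketched in a simplified, hypothetical form: a device that blends each manual volume change into a learned preference for the current input-level band. This illustrates the principle only, not Signia’s implementation:

```python
# Hypothetical sketch (not Signia's algorithm): a volume control that
# learns a wearer's preferred gain offset per input-level band from
# their manual adjustments, via an exponential moving average.

class LearningVolumeControl:
    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        # Learned gain offsets (dB), keyed by input-level band.
        self.offsets = {"quiet": 0.0, "moderate": 0.0, "loud": 0.0}

    @staticmethod
    def band(input_level_db):
        if input_level_db < 50:
            return "quiet"
        if input_level_db < 75:
            return "moderate"
        return "loud"

    def record_adjustment(self, input_level_db, user_offset_db):
        """Blend a manual volume change into the learned preference."""
        b = self.band(input_level_db)
        lr = self.learning_rate
        self.offsets[b] = (1 - lr) * self.offsets[b] + lr * user_offset_db

    def suggested_offset(self, input_level_db):
        return self.offsets[self.band(input_level_db)]
```

After the wearer repeatedly nudges the volume up by 4 dB in loud environments, the device starts applying roughly that offset on its own in loud scenes, while leaving quiet scenes untouched.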

For Unitron, explains Leonard Cornelisse, Hearing Scientist in its Hearing System Engineering division, the focus has been on high-performance classification, facilitated by machine learning algorithms. “The largest impact of artificial intelligence in hearing care has been creating a more seamless experience for the wearer (through automation of programme changes) and improved sound performance (we’re able to recreate a more natural listening experience through intelligent signal processing, rather than just make everything louder).”
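As a toy illustration of acoustic classification (not Unitron’s implementation), a nearest-centroid classifier can assign a sound scene from a couple of hypothetical acoustic features. Real devices learn far richer classes from large labelled sound databases; here, made-up feature vectors (overall level in dB, speech-band modulation depth) stand in for the acoustic analysis:

```python
# Toy sketch of acoustic scene classification via nearest centroids.
# Hypothetical training examples: ((level_db, modulation), scene label).
TRAINING_DATA = [
    ((45.0, 0.80), "quiet conversation"),
    ((50.0, 0.70), "quiet conversation"),
    ((78.0, 0.60), "speech in noise"),
    ((82.0, 0.50), "speech in noise"),
    ((85.0, 0.10), "noise only"),
    ((90.0, 0.15), "noise only"),
]

def train_centroids(data):
    """Average the feature vectors of each labelled scene."""
    sums = {}
    for features, label in data:
        total, count = sums.setdefault(label, ([0.0] * len(features), 0))
        sums[label] = ([t + f for t, f in zip(total, features)], count + 1)
    return {label: [t / count for t in total]
            for label, (total, count) in sums.items()}

def classify(features, centroids):
    """Return the scene whose centroid is closest to the features."""
    def dist2(label):
        return sum((f - x) ** 2 for f, x in zip(features, centroids[label]))
    return min(centroids, key=dist2)

centroids = train_centroids(TRAINING_DATA)
print(classify((80.0, 0.55), centroids))  # prints "speech in noise"
```

Once the scene is identified, the device can switch programmes automatically, which is the “seamless experience” Cornelisse refers to.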

As well as acoustic classifier performance, Dr Dave Fabry, Chief Innovation Officer at Starkey, explains that AI has been important to Starkey in optimising fall detection and balance training, real-time translation, and body and brain fitness. “Furthermore, we are working relentlessly to continue the evolution of our product as the world’s first hearing aid to use embedded sensors and AI to provide industry-leading sound quality and speech intelligibility, provide a gateway to health and wellness, and augment intelligence by providing a gateway to information.”

The challenges in design

Despite high levels of enthusiasm, there are inevitably a number of challenges involved with AI. Leonard Cornelisse from Unitron explains how the technology requires a large amount of “resources and time in order to acquire and validate the data on a subject. It is critical that the data is representative and that it can be used confidently.

“Also, one of the challenges we face right now is that the signal processing features, what we call sound cleaning, are well developed,” he continues. “While there are always incremental improvements that can be made, the real opportunity is to make sure they are applied to the right sounds at the right times. The more seamless and transparent the hearing solution and the experience of wearing it becomes, the easier it becomes for people with hearing loss to accept and benefit from it.”

Another hurdle to overcome is located in the physical design of hearing technology and the clash of priorities between AI and the desire for greater miniaturisation. “There is the challenge of using AI within the hearing aid itself because we want lightweight and discreet hearing aids, but technically it takes a lot of battery power to run the powerful algorithms involved in AI on board the device,” explains Kim Tilgaard, Vice President of Discovery, Audiology and Embedded Solutions at Demant. “The other challenge is that for AI to be effective the device needs training, it needs to learn, and this requires input from the user – but our philosophy is not to expect too much from the user.”

Erik Harry Høydal from Signia agrees that “to fully leverage such technologies, one has to overcome the limitations of the physical hardware. You can currently either do this by massively increasing size or energy consumption, or you can place computation outside the physical hardware. For us the natural step now is to facilitate the latter, until miniaturization has reached a certain level. Another important aspect is to simplify this technology in such a way that everyone can use it, and to keep the individual effort required to an absolute minimum. People do not love having to rate everything they do, so you need to find ways of taking objective measures or read-outs, or of measuring outcomes indirectly, to gain knowledge.”

An inexhaustible future

All manufacturers agree that, despite the challenges, there is vast potential in this area. “The holy grail of goals is always, and especially with artificial intelligence, to better understand (and perhaps even predict) the listener’s intention,” enthuses Leonard Cornelisse from Unitron. “Currently, we use classification of the acoustic environment to discern intent, with the speech source as the primary target. In the future, additional sensors and other detection technology, including the application of artificial intelligence, could be used to more directly infer the listener’s intention from cues such as head movement, eye movement, pauses in speech or conversation, and so on. Currently we collect data about when, where and how an instrument is used, as well as the quality of the listening experience for the wearer. With enough representative and validated data, this could be leveraged to help predict which technology levels would work best across different experiences.”

“For example, you could infer the potential added value of other solutions, as well as fitting or features, based on knowledge from the data set. At the end of the day, the goal is for people with hearing loss to wake up in the morning and want to put their hearing instrument on right away – not so much because they need it, but because it’s become a part of them. Innovating with artificial intelligence should help take us in that direction.”

Oliver Townend from Widex predicts that “AI will in the future become an even more broadly applied technology in hearing aids, because it is a fast track to better, more personalized sound tailored to the preferences of the consumer. We know that sound quality is one of the most important drivers of satisfaction with hearing aids; we also know that different qualities of sound are sought depending on intent and environment. AI will therefore be an integral part of solutions for hearing care professionals in the future, providing more insight and control over more parameters in an easy fashion. AI will also continue to be a key component in making simpler and more tailored sound experiences for the individual consumer.”

A less obvious benefit of AI is how closely the complex algorithms it uses resemble our own brain function, being less linear than standard algorithms. Signia’s Erik Harry Høydal is excited by the possibility of what he calls “neural networks.” “This is a technology inspired by how our brain works, with neurons creating new pathways to find new solutions or learn new things. For us, this means we can start questioning established audiological ‘facts’ about how hearing aids should operate in the real world, specifically by measuring which small adjustments actually improve listening in a given situation. By having a technology that knows the current ‘laws’ but does not necessarily follow them, we can gain completely new knowledge of how hearing compensation works in the real world.
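The building block of the neural networks Erik mentions is the artificial neuron: a weighted sum of inputs passed through a nonlinearity, with the connection weights adjusted to reduce prediction error. A minimal, self-contained sketch on toy, made-up data (hearing aid networks stack many such neurons, but the learning principle is the same):

```python
# Minimal sketch of a single artificial neuron trained by gradient
# descent on toy, hypothetical data: did a given adjustment improve
# listening (1) or not (0), as a function of two input features?
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

DATA = [
    ((0.0, 0.0), 0),
    ((0.0, 1.0), 1),
    ((1.0, 0.0), 1),
    ((1.0, 1.0), 1),
]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.5
for _ in range(2000):
    for (x1, x2), target in DATA:
        # Forward pass: weighted sum through the nonlinearity.
        pred = sigmoid(weights[0] * x1 + weights[1] * x2 + bias)
        # Backward pass: nudge weights to shrink the error.
        err = pred - target
        weights[0] -= lr * err * x1
        weights[1] -= lr * err * x2
        bias -= lr * err

def predict(x1, x2):
    return sigmoid(weights[0] * x1 + weights[1] * x2 + bias)
```

After training, the neuron’s outputs sit close to the labels it was shown; stacking layers of such neurons is what lets larger networks discover the non-obvious patterns Erik describes.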

“The fun part starts when we can leverage all our experience and knowledge in the industry, feed that into the latest AI technologies and see how far that takes us. Everyone struggles in certain listening environments, regardless of hearing ability. Normal hearing has its limitations; in the future, you might be the odd one out not wearing a listening device to a bar with your friends.”


BIHIMA represents the hearing instrument manufacturers of Britain and Ireland, working in partnership with other professional, trade, regulatory and consumer organisations within the health care and charitable sectors. We raise consumer awareness about the latest hearing technology, and aim to influence government and policy makers to improve the lives of people with hearing difficulties.


This article was published in Audio Infos, May 2020.