How has artificial intelligence use in hearing technology evolved?

BIHIMA interviewed Dr Dave Fabry, Chief Innovation Officer at Starkey, to discuss how artificial intelligence (AI) in hearing technology has evolved, how it is used today, and where it is taking us in the future.

Dr Dave Fabry, Starkey

A short history of AI in hearing instruments

AI first appeared in hearing instruments around 15 years ago. Until then, to equip patients with hearing instruments that could work in both quiet and noisy listening environments, we gave them multiple manual programmes that the user could switch between on the instrument to engage directional microphones, for example. AI was incorporated after the transition from analogue to digital hearing instruments; the very first digital hearing instruments were introduced in the mid-90s.

In the early to mid-2000s we started to see hearing instruments that used machine learning: they were trained to perform acoustic environmental classification (AEC), so that when a patient moved from a quiet to a noisy listening environment, the hearing instrument would automatically engage the directionality and noise management appropriate to that specific environment.
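To make this concrete, here is a minimal sketch, in Python, of the kind of pipeline described above: short-term acoustic features are extracted from the microphone signal, a classifier labels the listening environment, and the instrument's settings change accordingly. This is not Starkey's implementation; the feature set, class names, thresholds, and setting names are all illustrative assumptions, with a simple rule standing in for a trained model.

```python
import numpy as np

# Illustrative settings per environment class; real instruments adjust
# many more parameters, tuned during fitting. These names are assumptions.
PROGRAMS = {
    "quiet": {"directional_mic": False, "noise_reduction_db": 0},
    "speech_in_noise": {"directional_mic": True, "noise_reduction_db": 6},
    "noise": {"directional_mic": True, "noise_reduction_db": 10},
}

def extract_features(frame: np.ndarray, sample_rate: int) -> dict:
    """Short-term features of the kind used for environment classification."""
    # Overall level of the frame, in dB relative to full scale.
    rms_db = 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)
    # Spectral centroid: a crude proxy for the spectral balance of the scene.
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid_hz = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return {"rms_db": rms_db, "centroid_hz": centroid_hz}

def classify_environment(features: dict) -> str:
    """Stand-in for a trained classifier: simple threshold rules (assumed values)."""
    if features["rms_db"] < -40:            # low level: quiet room
        return "quiet"
    if features["centroid_hz"] < 1500:      # energy skewed low: diffuse noise
        return "noise"
    return "speech_in_noise"

def update_settings(frame: np.ndarray, sample_rate: int = 16_000) -> dict:
    """Classify one audio frame and return the matching programme settings."""
    env = classify_environment(extract_features(frame, sample_rate))
    return PROGRAMS[env]
```

In a real instrument the threshold rules would be replaced by a model trained on labelled recordings, and the class decision would be smoothed over several seconds of audio so the settings do not flip on every frame.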

I believe a seminal event for all of us in the industry came in 2014, when the first made-for-iPhone hearing instruments were developed. That was really the beginning of hearing instruments transitioning from single-purpose devices into multi-purpose, multi-function ones that amplify speech and other sounds to improve audibility, reduce background noise, and allow the user to adjust the volume and control basic hearing instrument functions from an application.

How can the use of AI in hearing instruments affect user experience?

The hearing industry is on a journey, using AI to monitor and improve the overall health and wellness of patients. Sensors incorporated into specific hearing instruments can monitor social engagement and physical activity, and detect falls.
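As a rough illustration of one of these features, here is a minimal Python sketch of fall detection from an inertial sensor, using the commonly described pattern of a near-free-fall dip in acceleration followed by an impact spike. This is not Starkey's algorithm; the thresholds, window length, and function names are assumptions for illustration only.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def detect_fall(accel: np.ndarray, sample_rate: int,
                free_fall_g: float = 0.4, impact_g: float = 2.5,
                window_s: float = 1.0) -> bool:
    """Very simplified fall heuristic on 3-axis accelerometer samples (m/s^2).

    Looks for a near-free-fall dip in acceleration magnitude followed,
    within `window_s` seconds, by an impact spike. All thresholds here
    are illustrative assumptions, not product values.
    """
    # Magnitude of each (x, y, z) sample, expressed in units of g.
    magnitude = np.linalg.norm(accel, axis=1) / GRAVITY
    window = int(window_s * sample_rate)
    # Indices where the wearer is close to free fall.
    dips = np.flatnonzero(magnitude < free_fall_g)
    for i in dips:
        # An impact spike shortly after a dip suggests a fall.
        if np.any(magnitude[i : i + window] > impact_g):
            return True
    return False
```

Production systems layer posture estimation, inactivity checks, and user confirmation on top of a trigger like this to keep false alarms low.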

The use of AI in hearing instruments is not only improving audibility, sound quality, and listening experience, but is being used to support overall health and give peace of mind to the user.

What does the future look like: how do you see the use of AI in hearing technology evolving?

We’re working on applications that offer more computing power than we can currently fit on the ear in hearing instrument circuits.

We continue to see that automation and AI, in combination with human intelligence, provide the best listening experience. Long term, we’re trying to provide as effortless a solution as possible. The future is to put as much as possible on board the hearing instrument, while always keeping audibility and sound quality as the number one priority.

The one thing that can’t be commoditised is caring, and all these developments will help clinicians care for their patients in a more holistic way.

Read the full interview at Audiology World News.

This technology interview series is a regular feature in Audio Infos Magazine and can be found online at Audiology World News. In each issue of the magazine, BIHIMA interviews one of its members on a pressing technology topic affecting the hearing instrument industry today.