How hearing instruments are using ‘deep neural networks’

BIHIMA interviews Thomas Behrens from Oticon

 

Read the first in BIHIMA’s 2021 interview series for the UK audiology magazine Audio Infos. We interviewed Thomas Behrens, Senior Director of Oticon’s Centre for Applied Audiology Research, on the subject of deep neural networks and how they can improve the sound experience of hearing devices. We explore some of the advances in AI technology that patients are benefiting from in their hearing instruments.

“We know so much more about the auditory centre in the brain than before, and this has influenced our technology dramatically. We now know that the brain orients first and creates a full ‘sound scene’; it then identifies what it wants to attend to, so it can focus in. We have optimised our technology to reflect these insights.

“The hearing aid learns in the same way we learn as humans: when we sleep, our brain learns from all the experiences heard that day. Similarly, the deep learning algorithm integrates what it has learned into the deep neural network and then assimilates it. In this way we enable users to hear so much more than we could in the past.”
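The process Behrens describes can be pictured, very loosely, as a network trained offline on recorded sound scenes and then embedded in the device, where it scores the incoming sound. The sketch below is purely illustrative and is not Oticon’s algorithm: the scene names, feature dimensions, and randomly initialised weights are all invented for the example.

```python
# Illustrative sketch only (hypothetical, not Oticon's implementation):
# a tiny feed-forward neural network mapping a vector of acoustic
# features to probabilities over a few invented sound-scene classes.
import math
import random

random.seed(0)  # fixed seed so the invented weights are reproducible

def relu(x):
    return [max(0.0, v) for v in x]

def softmax(x):
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    s = sum(exps)
    return [e / s for e in exps]

def dense(inputs, weights, biases):
    # One fully connected layer: out_j = b_j + sum_i (input_i * w_ji)
    return [b + sum(i * w for i, w in zip(inputs, row))
            for row, b in zip(weights, biases)]

# Hypothetical sizes: 4 acoustic features -> 6 hidden units -> 3 scenes
SCENES = ["speech", "music", "traffic noise"]
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(6)]
b1 = [0.0] * 6
w2 = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(3)]
b2 = [0.0] * 3

def classify_scene(features):
    hidden = relu(dense(features, w1, b1))
    return softmax(dense(hidden, w2, b2))

# An invented feature vector standing in for a short audio snapshot
probs = classify_scene([0.8, 0.1, 0.3, 0.5])
```

In a real hearing instrument, the weights would come from training on large libraries of sound scenes rather than random initialisation, and the output would steer how the device balances and emphasises the scene for the listener.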

Thomas Behrens, Oticon

“The hearing aids we are now using give more contrast and register more detail, so users can hear all the smaller sounds that make up a full sound scene.” Thomas Behrens, Oticon

 

Read the full interview online at Audiology World News.

This 2021 technology interview series is a regular feature in Audio Infos magazine and can be found online at Audiology World News. In each issue, BIHIMA interviews one of its members on a pressing technology topic affecting the hearing instrument industry today.