What is a Hearing Aid Deep Neural Network?

Hearing aids have come a long way in addressing the challenges faced by those with hearing loss. We've seen remarkable progress in technology, especially when it comes to tackling the primary concern of understanding speech in noisy environments.

The advent of deep neural networks marks a significant leap forward in hearing aid capabilities. This artificial intelligence approach allows for unprecedented separation of speech from background noise, enhancing the listening experience for users. As manufacturers continue to invest heavily in research and development, we're witnessing a new era of hearing assistance that promises to revolutionize how people with hearing loss navigate their auditory world.

The Challenge of Hearing in Noisy Environments

Hearing in background noise presents a significant hurdle for individuals with hearing loss. This issue consistently ranks as the top concern among those struggling with auditory impairments. While hearing aid manufacturers have made strides in developing various features, their primary focus remains on tackling this crucial problem.

Leading companies in the industry invest substantial resources into research and development, aiming to enhance speech understanding in noisy situations. Since the mid-1990s, digital signal processing has driven steady improvements in hearing aid performance. Satisfaction rates among users have risen from 54% in 1996 to 83% in 2022, reflecting these advancements.

Several factors contribute to this increased satisfaction, including improved audiological care practices and technological innovations. Noise reduction, directional microphones, and remote microphone technology have all played a role in enhancing the user experience. Despite these advancements, progress in hearing aid performance in noisy environments had begun to plateau, with only minor improvements between generations.

The introduction of artificial intelligence, specifically deep neural networks (DNNs), in January 2021 marked a turning point in addressing this challenge. DNNs represent a significant leap forward in the ability to separate speech from background noise, offering new hope for those struggling in noisy environments.

Traditional digital signal processing relied on human-designed algorithms to differentiate between speech and noise. Skilled engineers created these systems, but their ability to identify and define the countless acoustic differences between various sounds was limited. DNNs, however, can be trained on millions of sound samples, learning to distinguish speech from noise with far greater accuracy and adaptability.
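
To make that contrast concrete, here is a minimal sketch of the mask-based approach commonly used in DNN speech enhancement: a small network looks at each frame of a noisy spectrogram and estimates, for every frequency bin, how much of the energy is speech. The architecture, sizes, and names below are illustrative assumptions, not any manufacturer's actual design.

```python
# Illustrative sketch of mask-based DNN speech enhancement (PyTorch).
# The tiny architecture and all names here are assumptions for
# demonstration; real hearing aid DNNs are proprietary.
import torch
import torch.nn as nn

N_FFT, HOP = 512, 128
N_BINS = N_FFT // 2 + 1  # 257 frequency bins per spectral frame

# Small network: noisy spectral frame in, per-bin speech mask (0..1) out.
mask_net = nn.Sequential(
    nn.Linear(N_BINS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_BINS), nn.Sigmoid(),
)

def enhance(noisy: torch.Tensor) -> torch.Tensor:
    """Suppress noise in a mono waveform by masking its spectrogram."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(noisy, N_FFT, HOP, window=window, return_complex=True)
    mag = spec.abs().T                   # (frames, bins)
    with torch.no_grad():
        mask = mask_net(mag)             # learned speech-vs-noise decision
    enhanced = (mag * mask).T * torch.exp(1j * spec.angle())
    return torch.istft(enhanced, N_FFT, HOP, window=window)

# One second of stand-in audio at 16 kHz; real input would be microphone audio.
print(enhance(torch.randn(16000)).shape)
```

A production system would run a far larger, carefully trained network under tight latency and power budgets; the point here is only the shape of the idea: spectrogram in, learned mask out.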

Advancements in Hearing Aid Technology and User Satisfaction

Hearing aid technology has come a long way since the mid-1990s when digital signal processing revolutionized the industry. We've seen remarkable progress in user satisfaction rates, climbing from 54% in 1996 to an impressive 83% by 2022. This significant improvement can be attributed to several factors, including enhanced audiologic care practices and technological innovations.

Key advancements in hearing aid technology have included:

  • Noise reduction systems
  • Directional microphones
  • Remote microphone technology

These features have greatly improved the listening experience for many users. However, despite these developments, progress in background noise performance began to plateau in recent years.

The game-changer arrived in January 2021 with the introduction of deep neural networks (DNNs) in hearing aids. This form of artificial intelligence has transformed how hearing aids process sound in noisy environments.

DNNs work by mimicking the human brain's neural connections. They are trained on millions of sound samples, learning to distinguish between speech and noise with incredible accuracy. This approach far surpasses traditional methods that relied on human-engineered algorithms to define acoustic differences.

The Rise of Advanced Neural Networks in Hearing Technology

Defining Neural Networks and Their Impact on Hearing Aids

Neural networks are revolutionizing hearing aid technology. These sophisticated algorithms mimic the human brain's structure, using interconnected layers to process and interpret complex sound information. By training on vast datasets of audio samples, neural networks can distinguish speech from background noise with unprecedented accuracy. This capability addresses one of the most significant challenges faced by hearing aid users: understanding conversations in noisy environments.
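
As a rough illustration of what "training on vast datasets" means in practice, the toy loop below nudges a small network's weights until its outputs match known targets. The random tensors are stand-ins chosen for this sketch; a real system would use millions of labeled sound samples.

```python
# Toy training loop: how a network "learns" speech-vs-noise decisions.
# Random tensors stand in for real paired training data.
import torch
import torch.nn as nn

N_BINS = 257  # matches a 512-point FFT, as in the earlier sketch
net = nn.Sequential(
    nn.Linear(N_BINS, 256), nn.ReLU(),
    nn.Linear(256, N_BINS), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1_000):
    # In practice: millions of noisy frames with known ideal masks,
    # computed from recordings where speech and noise were mixed on purpose.
    noisy_frames = torch.rand(64, N_BINS)   # stand-in noisy spectra
    ideal_masks = torch.rand(64, N_BINS)    # stand-in supervision targets
    loss = loss_fn(net(noisy_frames), ideal_masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```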

Advancements Over Previous Sound Separation Methods

Traditional sound processing in hearing aids relied on human-designed algorithms to differentiate speech from noise. While effective to a degree, these methods had limitations in complex acoustic environments. Neural networks surpass these older techniques by identifying subtle patterns and characteristics in sound that human engineers might overlook. This results in more precise and adaptable noise reduction, significantly improving speech clarity for users.

Neural Networks Beyond Hearing Aids

The application of neural networks extends far beyond hearing technology. In postal services, these systems accurately decipher handwritten addresses, demonstrating their versatility in pattern recognition tasks. This ability to learn and adapt to varied inputs makes neural networks invaluable across numerous fields, from image recognition to language processing.
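
In code, a handwritten-digit reader has the same shape as the speech examples above: pixels in, a learned decision out. The sketch below is a generic illustration of that idea, not the Postal Service's actual system.

```python
# Generic digit-recognition sketch, in the spirit of postal address
# reading. Architecture and input are illustrative assumptions.
import torch
import torch.nn as nn

digit_net = nn.Sequential(
    nn.Flatten(),                    # 28x28 grayscale scan -> 784 numbers
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 10),              # one score per digit, 0 through 9
)

scan = torch.rand(1, 1, 28, 28)      # stand-in for one scanned digit
print(digit_net(scan).argmax(dim=1).item())  # the network's best guess
```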

Deep Neural Networks Revolutionize Hearing Aid Performance

Overcoming Performance Plateaus

Hearing aid technology has seen significant advancements since the mid-1990s, with digital signal processing leading to steady improvements. User satisfaction rates climbed from 54% in 1996 to 83% by 2022. This progress can be attributed to better audiological care practices and innovations like noise reduction and directional microphones. Yet, despite these gains, hearing aid effectiveness in noisy environments began to level off, with only minor enhancements between product generations.

Breakthrough in Speech-Noise Separation

Deep neural networks (DNNs) have transformed hearing aid capabilities, particularly in distinguishing speech from background noise. Traditional digital signal processing relied on human-designed algorithms to differentiate speech from noise, which had limitations. DNNs, however, mimic the human brain's neural connections and can be trained on millions of sound samples. This allows them to identify subtle distinctions between speech and noise far more effectively than conventional methods.

Innovative Hearing Aid Technologies: Real-World Applications

Oticon's Neural Network Enhancements

Oticon has taken significant strides in improving their deep neural network technology. They've upgraded their original system to version 2.0, demonstrating a commitment to continuous improvement. This latest iteration builds upon the foundation of their initial deep learning algorithm, processing sound samples more effectively to distinguish speech from background noise.

The enhanced neural network allows for better adaptation to various acoustic environments, potentially offering users clearer communication in challenging listening situations. By refining their AI-driven approach, Oticon aims to provide more natural and comfortable hearing experiences.

Phonak's Revolutionary Infinio Sphere

Phonak introduced the Infinio Sphere hearing aid in August 2024, marking a significant leap in hearing technology. This device utilizes a dual-chip system:

  1. ERA chip: Handles general sound processing
  2. DEEPSONIC chip: Dedicated to speech-noise separation

The DEEPSONIC chip, powered by a specialized deep neural network, focuses exclusively on isolating speech from background noise. This targeted approach allows for remarkable clarity in noisy environments.

Phonak has released audio samples demonstrating the Infinio Sphere's capabilities:

  • Before processing: muffled speech with prominent background noise
  • After processing: clear speech with significantly reduced noise

Starkey's Edge AI Integration

Starkey launched their Edge AI hearing aid in October 2024, incorporating deep neural network technology into their core processing. Unlike their previous models that required manual activation of AI features, the Edge AI operates continuously.

Key features of the Edge AI include:

  • Constant AI-driven sound processing
  • Automatic adaptation to different acoustic environments
  • Improved speech clarity in noisy settings

The Crucial Role of Expert Hearing Aid Fitting

Professional hearing aid fitting is essential for optimal performance. We've seen remarkable advancements in hearing aid technology, particularly with deep neural networks revolutionizing speech recognition in noisy environments. These sophisticated AI systems can separate speech from background noise more effectively than ever before.

Major manufacturers like Oticon, Phonak, and Starkey have invested heavily in this technology. Phonak's Infinio Sphere hearing aid, released in August 2024, uses two chips: one for general sound processing and another dedicated to speech-noise separation. Starkey's Edge AI hearing aid, launched in October 2024, employs a deep neural network continuously.

While these innovations are impressive, their effectiveness relies heavily on proper fitting and programming by a qualified hearing care professional. Even the most advanced AI can't compensate for improper setup. Expert fitting ensures the device is tailored to your specific hearing needs and lifestyle.

A professional fitting typically involves:

  1. Comprehensive hearing assessment
  2. Custom programming of the hearing aid
  3. Real-ear measurements for accurate sound delivery
  4. Counseling on usage and maintenance

We can't overstate the importance of working with a skilled audiologist. They'll fine-tune your hearing aids to maximize the benefits of these cutting-edge technologies, helping you hear clearly in challenging environments.

Video transcript

This is the future of hearing aid technology and it's happening right now.

Hey guys, Cliff Olson, Doctor of Audiology and founder of Applied Hearing Solutions in Phoenix, Arizona. And in this video I'm talking about Deep Neural Networks and how this form of Artificial Intelligence is completely changing how well hearing aids perform in background noise.

Ask anyone with hearing loss, and I guarantee they'll tell you that the number one challenge they face due to their hearing loss is hearing in background noise. And while many people feel like hearing aid manufacturers spend way too much time developing cool features inside of hearing aids like LE Audio, Bluetooth tap controls, and remote programming, I can tell you that these manufacturers are spending way more time trying to solve this problem of hearing in background noise.

Virtually all of the technological innovation in the hearing aid industry is being led by a few manufacturers, which includes Sonova, Demant, Starkey, GN, and WS Audiology. Combined, these hearing aid manufacturers spend hundreds of millions of dollars every single year on research and development to solve this top priority that you have as an individual with hearing loss, which is making sure that you can go into a noisy situation and actually understand the people talking to you.

To provide you with some historical context here, dating back to the mid-1990s when digital signal processing actually took off, hearing aid performance has been steadily improving ever since. In fact, according to MarkeTrak data, since the beginning of this digital hearing aid revolution in 1996, hearing aid satisfaction rates have been increasing dramatically, from 54% to 83% satisfaction by 2022, which is nearly a 30-percentage-point increase.

A big reason for this increase in hearing aid satisfaction is probably a combination of several things, including best-practice audiologic care, which includes test box measures, real ear measurements, and validated outcome measures. But it also has to do with technological innovation like noise reduction, directional microphones, and remote microphone technology.

However, even with these improvements in digital hearing aid technology over the past several decades, hearing aid performance in background noise has started to stagnate, leading to only incremental improvements from generation to generation of new technology. That is, until artificial intelligence in the form of Deep Neural Networks started hitting the scene in January of 2021.

But before I tell you exactly how Deep Neural Networks completely changed the game for hearing aid users in background noise, do me a huge favor and click the like button. It really helps out the channel, and please subscribe to the channel with notifications turned on if you haven't done so already. That being said, it's greatly appreciated and go ahead and leave me a comment in the comment section if this is the first time that you're hearing about Deep Neural Networks inside of hearing aids.

Okay, let's go ahead and talk about a hearing aid's biggest limitation when it comes to separating speech from background noise. When you go to someplace noisy, like a noisy restaurant, all of this speech is mixed in with all of the background noise, and someone with normal hearing can typically separate out the speech from the background noise thanks to their healthy auditory system.

However, if you have damage in your auditory system or if you have cognitive struggles, you will not be able to separate speech from background noise as efficiently as someone with normal hearing. Up until now, hearing aid digital signal processing relied on an audio engineer's ability to define the differences between speech information and noise information inside of a man-made computer algorithm. This means that your hearing aid's ability to identify speech, identify noise, and then separate the two so you only get the speech is heavily dependent on a human engineer's ability to define the differences between them acoustically.

And while hearing aid engineers are extremely smart (they're way smarter than me, we'll just put that out there), their ability to identify these different characteristics is limited. I mean, there may be millions or billions of differences between these sounds in different environments, and it's just unrealistic to expect that a human engineer would be able to identify all of those different characteristics. Enter Deep Neural Networks, or DNNs for short.

Think of a Deep Neural Network as an artificial human brain with millions of artificial neural connections that can be trained with data and is able to perform deep learning. Essentially, engineers create this Deep Neural Network machine learning computer algorithm and feed millions of different sound samples into it. As these sounds pass through the network's neuron layers, the algorithm learns the different characteristics of these sounds, which helps it identify what makes speech, speech, and noise, noise.

The more sound information you feed into one of these deep neural networks, the smarter it gets and the better it gets at separating speech from background noise.

For example, the United States Postal Service uses a trained Deep Neural Network to identify handwritten addresses on letters to make sure that those letters get sent to the right places. Just imagine the variability of handwriting between different people. Some people write really sloppy, like myself, and some people write very nicely, like pretty much everyone else on my staff (who remind me just how bad my handwriting is). Not to mention, two different postal workers could read the same address differently, and that letter could end up in two completely different places depending on who reads it.

However, a Deep Neural Network trained with millions of different handwriting samples can identify the proper address to send that letter to more accurately than a human could. Even if a computer algorithm were developed by a human engineer to identify these different characteristics of handwriting, there is no way they would be able to identify as many as a Deep Neural Network could.

Of course, the same is true when it comes to hearing aid sound processing algorithms, and the best news is that hearing aid Deep Neural Networks continue to get even better. Not only has major hearing aid manufacturer Oticon already upgraded their original Deep Neural Network to version 2.0, but other hearing aid manufacturers are starting to get in on the fun, including Phonak and Starkey.

Phonak took their Deep Neural Network to an entirely different level with the release of their Infinio Sphere hearing aid in August of 2024. The Infinio Sphere hearing aid actually uses two different chips: the Deep Neural Network-trained DEEPSONIC chip in addition to their ERA chip. The ERA chip handles the sound processing of most situations, while the DEEPSONIC chip is 100% dedicated to separating speech from background noise. The result is a hearing aid that is pretty much designed to do one thing, separate speech from noise, and it does a very good job of it.

[Sound Sample Demonstration]

Phonak actually has a sound sample of their DEEPSONIC chip in action. Go ahead and check it out.

"This was like play song. And I was like that. So behind the beat, 'cause when I press it down, all of a sudden the pitch shift stuff was being behind by like 10 milliseconds. I mean, it's enough to make the feel off, you know, it's"

Okay. Clearly that was very impressive. Just keep in mind that the performance of this Deep Neural Network is heavily dependent on how well your hearing care professional fits and programs your hearing aid for you.

I'll go ahead and elaborate on that a little bit more here in just a minute, but before I do that, I wanna talk about Starkey because they just released their Edge AI hearing aid in October of 2024. Starkey has previously used Artificial Intelligence inside of their hearing aid technology, but you had to activate a feature called Edge Mode for it to work its best in a background noise situation.

Now with this new hearing aid released by Starkey, they are using a deep neural network all of the time to improve your ability to separate speech from background noise without always having to activate edge mode. I've got an audio sample for that as well.

[Sound Sample Demonstration]

"Our goal at Starkey is the fear from hearing aid users that they'll miss out on one of life's most important moments due to running out of battery life in their hearing aids. And considering that we wanted to go and look at what's the longest day that a hearing aid user might expect to wear their devices."

Now, Starkey's Edge AI hearing aids still have Edge Mode+ that you can activate for additional benefit in separating speech from background noise. Of course, the same caveats exist with a Starkey Deep Neural Network as they do with a Phonak or Oticon Deep Neural Network, which is that their performance heavily depends on how well your hearing care professional fits and programs these devices for you.

And ultimately, this comes down to whether or not your hearing care professional follows audiologic best practices. Best practices are a series of considerations and procedures that must be completed by your hearing care professional in order for you to receive the maximum amount of benefit from your hearing aids.

And the best way to find a hearing care professional who follows these best practices is to go to my website, HearingUp.com, and look for a HearingUp network member in your area. All HearingUp members have been vetted and are committed to following comprehensive audiologic best practices, including test box measures, real ear measurements, and validated outcome measures to ensure you hear your best no matter what Deep Neural Network your hearing aids use.

So if hearing your absolute best is what's most important to you, then make sure you go to a hearing up network member in your area.

Deep Neural Networks have significantly improved the performance of hearing aids in background noise, and it's not even a debate at this point. And they're even able to do this without having to rely heavily on the directional microphones that we previously had to use inside of hearing aids to improve signal-to-noise ratio.

Each of these Deep Neural Networks is getting you between 11 and 13 decibels of advertised signal-to-noise ratio improvement, which is significantly more than what you can get with a hearing aid that does not use a Deep Neural Network.
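
For context, decibels are logarithmic, so those advertised figures translate into large raw power ratios. A quick check of the arithmetic (the 11 and 13 dB values are the advertised numbers quoted above):

```python
# Decibels to raw power ratio: ratio = 10 ** (dB / 10).
for db in (11, 13):
    print(f"{db} dB -> speech-to-noise power ratio improved ~{10 ** (db / 10):.0f}x")
# 11 dB -> ~13x, 13 dB -> ~20x
```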

In fact, every single patient that I have seen inside of my clinic who has upgraded to a hearing aid that uses a Deep Neural Network has seen a significant improvement in performance when it comes to background noise situations. And the one thing that people don't talk about is that these Deep Neural Networks are actually making speech more clear in quiet situations as well.

It's getting to the point where eventually every hearing aid manufacturer is going to have to have a hearing aid that has been trained by a Deep Neural Network if they wanna stay relevant in the global hearing aid market. Overall, Deep Neural Networks are completely changing the way that hearing aids process sound, and it's mind-blowing to see how far hearing aid technology has come even since I entered the industry back in 2012.

And now that you know what a Deep Neural Network is, how it works and what it's capable of, let me know if you're just as impressed as I am or if you think it's just a bunch of hype down in the comment section.
