Using Deep Learning to Build a Better Hearing Aid


By Lionel Gibbons | January 26, 2017 | deep learning




The world is a noisy place. On a busy city sidewalk, a pedestrian might hear honking car horns, wailing distant sirens, the tinny tinkle of an ice cream truck, the bass thud of music from an upstairs apartment, and the barking of dogs. Now imagine that this pedestrian is hard of hearing and trying to carry on a conversation with the person walking next to her. Conventional hearing aids would amplify all of those sounds indiscriminately, producing garbled noise that drowns out the conversation. But thanks to deep learning, a better method is under development.

Scientists at the Center for Cognitive and Brain Sciences at Ohio State University have developed a better hearing aid by building on the work of early machine learning pioneers. They began by characterizing "speech" versus "noise" in terms of acoustic properties such as frequency and intensity, then fed samples of noisy speech to a deep neural network. Whenever the system misclassified speech as noise or vice versa, the scientists adjusted the network's parameters so it could learn from its mistakes. When it was ready, they tested the audio filtering by asking human subjects to pick out a sentence from background noise, using two backgrounds: a steady hum similar to a running refrigerator, and a babble of voices designed to mimic a cocktail party. In both cases, testing showed excellent results, and a better hearing aid was born.
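The learning loop described above (predict, compare with the label, adjust the parameters, repeat) can be sketched in a few lines. Everything below is a hypothetical illustration, not the OSU team's actual system: the features, the synthetic data, and the single logistic unit standing in for their deep network are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-unit features (imagine energy and spectral flatness
# for each time-frequency unit): speech-dominant units cluster apart
# from noise-dominant units. Real systems extract richer features.
speech = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(500, 2))
noise = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(500, 2))
X = np.vstack([speech, noise])
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = speech, 0 = noise

# A single logistic unit trained by gradient descent stands in for the
# deep network. Each pass: predict P(speech), measure the error against
# the true label, and nudge the parameters to reduce that error.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(speech)
    grad_w = X.T @ (p - y) / len(y)           # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# Apply the learned classifier as a binary mask: keep only the
# time-frequency units classified as speech, suppress the rest.
mask = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(mask == y.astype(bool))
```

On these cleanly separated synthetic clusters the classifier recovers the labels almost perfectly; the hard part in practice is that real speech and noise overlap heavily in feature space, which is why a deep network with many layers of learned features is needed.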

Every day, scientists are harnessing the power of deep learning to make technological advances. Bright provides everything you need to get your deep learning environment up and running, and to manage it. We'll give you a choice of frameworks and machine learning libraries, along with the modules to support them. Get in touch to find out more.