Trying to find sound solutions

By 2050, around 2.5 billion people worldwide are expected to be affected by hearing loss. This projection is driven mainly by expected population growth and ageing. Hearing loss can make it difficult to follow conversations and can lead to social isolation. In this blog, we explore what’s really important in understanding spoken language, and speculate on sound solutions of the future!

Hearing-loss interventions
Efforts to reduce the burden of hearing loss are gaining momentum, but finding solutions that are accessible in areas with limited healthcare remains a challenge. Fortunately, there are some promising preventive measures, such as vaccines for measles and rubella, and treatments for ear infections.

In addition, embracing sign language and lip-reading can offer meaningful ways to communicate without sound. This could involve integration into the local Deaf community and may be a beautiful solution in itself, one that doesn’t require technologies like hearing aids and cochlear implants. Here, however, we focus on those technologies and explore their potential and limitations in restoring access to sound.

What are the differences between hearing aids and cochlear implants?
Hearing aids are small devices worn in or behind the ear. They are typically used for people with mild to moderate hearing loss and work by making sounds louder. On the other hand, cochlear implants are devices that directly stimulate the auditory nerve in the spiral-shaped inner ear, known as the cochlea. Inside the cochlea, tiny hair cells are busy converting sound vibrations into electrical signals for the brain. When these hair cells are damaged, often due to noise exposure or ageing, the brain can no longer receive crucial sound information, resulting in hearing loss. Enter cochlear implants—used for those with severe to profound hearing loss who find hearing aids unhelpful. These devices bypass the damaged hair cells entirely, allowing individuals to reconnect with the world of sound!

In other words, hearing aids amplify sounds, while cochlear implants directly stimulate the auditory nerve to provide a sense of sound. This requires cochlear implants to be surgically implanted. Generally, hearing aids are more common, because they are suitable for a broader range of individuals and do not require surgery.

Figure 2. Hearing aid

Figure 3. Cochlear implant

The evolution of hearing aids: from trumpet to digital
The earliest models of hearing aids date back to the 17th century and consisted of simple devices like ear trumpets and speaking tubes. The idea behind these devices is that a funnel shape makes it easier to catch sound waves, much like cupping your hands around your ears.

Figure 4. Ear trumpet

We’ve come a long way since the early days of hearing aids. Modern hearing aids are electronic: instead of just funnelling sound to the ear, a microphone picks up the sound, an amplifier increases its volume, and a speaker delivers the amplified sound to the ear. Another big change is that today’s hearing aids are digital, not analog. Instead of amplifying all sounds equally (like turning up a radio), digital hearing aids convert sounds into numbers, allowing for more advanced processing (like enhancing the specific sounds you want to hear more clearly). Modern hearing aids also have longer battery life, more comfortable and portable designs, and Bluetooth connectivity, which lets you take phone calls, listen to music, or watch TV directly through your hearing aids.
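
To make that difference concrete, here is a minimal sketch in Python (using NumPy) of uniform “turn everything up” amplification versus per-band digital amplification. The band boundaries and gain values are invented for illustration and are not taken from any real hearing aid.

```python
import numpy as np

def amplify_uniform(signal, gain_db=20.0):
    """Analog-style processing: one gain applied to the whole signal."""
    return signal * 10 ** (gain_db / 20)

def amplify_per_band(signal, sample_rate, band_gains_db):
    """Digital-style processing: convert the signal to numbers, split it into
    frequency bands, and give each band its own gain."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    for (low, high), gain_db in band_gains_db.items():
        mask = (freqs >= low) & (freqs < high)
        spectrum[mask] *= 10 ** (gain_db / 20)
    return np.fft.irfft(spectrum, n=len(signal))

# Illustrative band boundaries and gains: boost speech frequencies the most.
band_gains_db = {(0, 500): 5.0, (500, 4000): 25.0, (4000, 8000): 15.0}

sample_rate = 16000  # samples per second
t = np.arange(sample_rate) / sample_rate  # one second of audio
# A toy mixture: low-frequency rumble plus a quieter speech-range tone.
mixed = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)

louder_everything = amplify_uniform(mixed)  # rumble gets boosted too
clearer_speech = amplify_per_band(mixed, sample_rate, band_gains_db)
```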

While hearing aids can improve the hearing experience, they may not always provide the additional support that our brain needs to make sense of speech in difficult conditions (like a noisy environment). So, how exactly does the brain interpret speech? Let’s dive into that next!

Can our brain also turn up the volume?

Our brain is an extraordinarily complex decoding machine. In a nutshell, when we listen to speech, the inner ear converts sound waves into electrical signals that travel along the auditory nerve to the brain. These signals are then analysed in the auditory cortex, where the brain distinguishes words and sentences from background noise. To comprehend the meaning of what we hear, the brain uses context (information coming through the senses from the environment) and world knowledge (interpretation in the light of previous experiences). When listening to speech, we are also actively predicting what will be said next. For example, if someone says “It’s so warm outside, I would like to eat an…”, the listener may predict that the speaker’s next word will be “ice cream”. This allows listeners to follow conversations more smoothly and to respond quickly (see also https://www.mpi-talkling.mpi.nl/?p=2122&lang=en).
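
As a toy illustration of this kind of next-word prediction, the sketch below (in Python, with made-up phrase counts rather than real language data) simply picks the most frequent continuation of the last two words heard.

```python
from collections import Counter

# Hypothetical continuation counts; real listeners draw on far richer context
# and world knowledge than a simple lookup table like this.
continuations = {
    ("eat", "an"): Counter({"ice cream": 8, "apple": 5, "omelette": 1}),
    ("warm", "outside"): Counter({"today": 4, "again": 2}),
}

def predict_next(last_two_words):
    """Return the most frequent continuation of the last two words heard."""
    options = continuations.get(tuple(last_two_words))
    if not options:
        return None  # no prediction available for this context
    return options.most_common(1)[0][0]

heard = "it's so warm outside I would like to eat an".split()
print(predict_next(heard[-2:]))  # -> ice cream
```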

People with a hearing impairment receive distorted auditory information, which means they have to apply extra effort to understand what is being said, especially in noisy environments. Something that may help is to rely more on these predictions to get ahead of the game. Now the question is: could future hearing aids anticipate speech like humans do, in order to further assist hearing-impaired users?

Hearing aids of the future

Artificial intelligence (AI) has already revolutionised hearing technology and promises to continue making significant improvements in the coming years. To a certain extent, AI hearing aids are designed to work much like our brains, analysing sound patterns and context to predict and enhance speech. They use sophisticated algorithms to adapt in real time, focusing on important sounds and reducing background noise, just as our brains do when helping us understand conversations. Unlike traditional digital hearing aids, which have preset modes for different environments (like ‘home’ or ‘restaurant’ modes), AI hearing aids can adapt more flexibly to specific situations. For example, they can recognise when someone is speaking through a mask and adjust the sound accordingly, or focus on the voices you hear most often, such as those of a romantic partner, close friends, or family members.
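
To give a flavour of how such adaptation could work, here is a heavily simplified, rule-based sketch in Python: it guesses the sound environment from two hand-picked features and looks up matching settings. The thresholds, categories, and setting names are invented for illustration; real AI hearing aids learn this kind of adaptation from data rather than from fixed rules like these.

```python
import numpy as np

def classify_environment(audio, sample_rate):
    """Rough environment guess from two simple features: overall loudness
    and the share of energy in typical speech frequencies (300-3400 Hz)."""
    loudness = np.sqrt(np.mean(audio ** 2))
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    speech_share = spectrum[(freqs > 300) & (freqs < 3400)].sum() / (spectrum.sum() + 1e-12)

    if loudness > 0.1 and speech_share < 0.4:
        return "noisy, little speech"
    if loudness > 0.1:
        return "noisy conversation"
    return "quiet conversation"

# Invented setting names; a real device would adapt many more parameters.
settings = {
    "noisy, little speech": {"noise_reduction": "strong", "microphone": "directional"},
    "noisy conversation": {"noise_reduction": "medium", "microphone": "directional"},
    "quiet conversation": {"noise_reduction": "off", "microphone": "omnidirectional"},
}

sample_rate = 16000
audio = 0.2 * np.random.randn(sample_rate)  # one second of synthetic noise
environment = classify_environment(audio, sample_rate)
print(environment, settings[environment])
```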

Looking ahead, future hearing aids could incorporate even more advanced technologies like neural interfaces and augmented reality. Imagine a non-invasive brain-computer interface that synchronises with your brain’s natural auditory processing, enhancing real-time adaptation to different environments through a continuous feedback loop. Augmented reality could add another layer of assistance by creating 3D auditory maps for better spatial awareness and providing visual cues like subtitles through special glasses. Advanced AI algorithms would be able to predict and anticipate speech patterns and environmental sounds, using sensors to understand the context and adjust settings automatically. Machine learning would develop personalised sound profiles based on your daily routines and preferences.

There are also potential drawbacks to consider. Errors in AI predictions, such as misinterpreting speech or environmental sounds, could lead to confusion. While our brains have evolved over millions of years to handle complex situations, AI relies on its programming and data, which means it can have difficulty managing unexpected situations. Technical glitches could also be challenging for a user who depends on the device. In any case, as AI and cutting-edge technologies continue to evolve, sound solutions of the future look incredibly promising. These technologies can complement cultural and community-based strategies, like sign language or lip-reading, to provide diverse options tailored to individual needs.

This blog post was inspired by the Cabaret of Dangerous Ideas show ‘Do you know what comes next?’ by Muzna Shehzad and Naomi Nota at the Edinburgh Fringe Festival on August 7th at 1:40PM in The Stand Comedy Club. To read more about their show, see also the following blog post.

For more information on the research project “Predicting language under difficult conditions: Effects of cognitive load, noise, and hearing impairment,” see also the Economic and Social Research Council Grant proposal (Reference ES/X001148/1).

References