
University Library, University of Illinois at Urbana-Champaign

#FromMarginToCenter: Inclusive Design: Hearing and Speech

The idea for this project came from Swetha Machanavajhala, a Microsoft developer who has hearing loss. After a neighbor complained about the loud sound coming from Machanavajhala's carbon monoxide detector, she and her team designed an app to connect people who are deaf or hard of hearing with the world of sound. Building on users' interest in the emotional information that sound carries, the app visualizes the intensity and direction of nearby sounds and conveys the ambience of the environment around them. It also provides notifications and speech-to-text.
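The entry does not say how the app estimates where a sound is coming from, but one common approach with two microphones is to measure the tiny arrival-time difference between the channels. A minimal, illustrative sketch of that idea (the microphone spacing, function names, and example signal are assumptions, not the app's actual code):

```python
# Illustrative sketch: estimate a sound's direction from the time delay between two microphones.
import numpy as np

SPEED_OF_SOUND = 343.0   # metres per second
MIC_SPACING = 0.15       # assumed distance between the two microphones, in metres

def estimate_bearing(left: np.ndarray, right: np.ndarray, sample_rate: int) -> float:
    """Bearing of the dominant sound in degrees: 0 is straight ahead, negative is to the left."""
    # Cross-correlate the channels to find the lag (in samples) at which they best align.
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    delay = lag / sample_rate                                   # arrival-time difference in seconds
    ratio = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))

# Example: the same noise reaches the right microphone 3 samples later, so the
# source sits somewhat to the left of centre and the bearing comes out negative.
fs = 16000
left = np.random.default_rng(0).standard_normal(fs)
right = np.roll(left, 3)
print(f"estimated bearing: {estimate_bearing(left, right, fs):.1f} degrees")
```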


Live Transcribe is deliberately simple: the main screen is little more than a running transcript of the speech around you. Users who prefer not to speak can open the built-in keyboard and type their side of the conversation for the other person to read. Environmental sound recognition also flags sounds such as knocking or running water; in conversations with the deaf community, the developers learned that users sometimes leave a tap running or miss a knock at the door, and this feature addresses exactly that. The advantage of Live Transcribe is its simplicity. Voice-recognition products are not uncommon, but none are designed specifically for deaf people, open instantly, transcribe without distraction, and remain extremely easy to operate.
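As a rough illustration of the listen-and-caption loop (not Google's actual implementation), the third-party SpeechRecognition package for Python can capture microphone audio and print each recognized phrase:

```python
# Illustrative sketch only: continuously listen on the microphone and print each
# recognized phrase, much like a running caption. Requires the SpeechRecognition
# and PyAudio packages and an internet connection for the cloud recognizer.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)                  # calibrate to the background noise level
    print("Listening... press Ctrl+C to stop")
    while True:
        audio = recognizer.listen(mic, phrase_time_limit=5)   # grab up to 5 seconds of speech
        try:
            print(recognizer.recognize_google(audio))          # show the transcript immediately
        except sr.UnknownValueError:
            pass                                               # nothing intelligible; keep listening
```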



Syrinx is a smart wearable device that helps people who have lost their vocal cords to cancer recover the voice they once had. Approximately 300,000 people worldwide lose their voices every year to causes such as laryngeal cancer. One way to regain a voice is a device called an electrolarynx (EL), but a conventional EL occupies one hand while the user talks, produces only a monotonous, robot-like sound, and its cylindrical design has not changed in more than 20 years. To solve these problems, the team built Syrinx, a new type of hands-free EL. A neck-brace design frees both hands, and to produce a more natural, user-like voice the team worked on the device's vibration patterns, which depend heavily on the user's own voice; they used voice-processing tools to generate the vibration patterns from recordings of that voice.
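The team's description suggests the vibration patterns are derived from recordings of the user's own voice. A minimal sketch of that kind of analysis, using the librosa audio library (the filename and frame sizes are assumptions, and this is not the Syrinx code):

```python
# Illustrative sketch: derive a per-frame pitch and loudness contour from a recording
# of the user's voice; an actuator driver could replay this as a vibration pattern.
import librosa
import numpy as np

y, sr = librosa.load("my_voice.wav", sr=16000, mono=True)   # hypothetical recording of the user

# Fundamental frequency (pitch) per frame, limited to a typical speaking range.
f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr, frame_length=1024)

# Loudness (RMS energy) per frame, using the same hop length pyin uses (frame_length // 4).
rms = librosa.feature.rms(y=y, frame_length=1024, hop_length=256)[0]

# The "vibration pattern": for each frame, the frequency the actuator should buzz at
# and how strongly, with unvoiced frames silenced.
pattern = [(float(f) if v else 0.0, float(a))
           for f, v, a in zip(np.nan_to_num(f0), voiced, rms)]
print(pattern[:5])
```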

James Dyson Award World Finalist (Top 20)


This project is an interactive installation that reacts in real time to users' hands, letting them write a sentence in sound and light through their movements. Its approach is awareness-raising: the goal is not to teach sign language, but to let people feel its articulation and its challenges through accessible representations, for deaf people as well as for people who do not know sign language.


Sign-IO is an assistive wearable technology that translates sign language into speech. It comprises a pair of gloves that capture sign-language gestures and a companion mobile application paired to the gloves via Bluetooth. The mobile app vocalizes the signed gestures in real time, enabling seamless communication between sign-language users and non-signers. Sign language is a form of communication used predominantly by deaf people and those with hearing impairments, but communicating with people who cannot sign can be genuinely difficult. Kenyan inventor Roy Allela set out to solve this problem by creating the Sign-IO glove. Inspired by his deaf niece, Allela built a glove that translates sign language into audible speech: integrated sensors read the hand movements of the person signing, and the glove transmits this information via Bluetooth to a smartphone.
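The description amounts to a three-stage pipeline: read the flex sensors, match the hand pose to a known sign, and speak the result. A minimal sketch of that pipeline (the sensor values, reference poses, and threshold are invented for illustration; the real gloves stream readings over Bluetooth, and Allela's recognition method is not published here):

```python
# Illustrative sketch of a glove-to-speech pipeline, not Sign-IO's actual code.
from typing import Optional
import numpy as np
import pyttsx3   # offline text-to-speech engine (pip install pyttsx3)

# One calibrated reading per sign: five flex-sensor values, 0 = finger straight, 1 = fully bent.
REFERENCE_POSES = {
    "hello":     np.array([0.1, 0.1, 0.1, 0.1, 0.1]),
    "thank you": np.array([0.9, 0.2, 0.2, 0.2, 0.8]),
    "yes":       np.array([0.9, 0.9, 0.9, 0.9, 0.9]),
}

def recognize(reading: np.ndarray, max_distance: float = 0.5) -> Optional[str]:
    """Return the word whose calibrated pose is closest to the reading, if it is close enough."""
    word, pose = min(REFERENCE_POSES.items(), key=lambda item: np.linalg.norm(item[1] - reading))
    return word if np.linalg.norm(pose - reading) <= max_distance else None

engine = pyttsx3.init()
incoming = np.array([0.85, 0.88, 0.95, 0.90, 0.92])   # stand-in for one frame streamed from the glove
word = recognize(incoming)
if word is not None:
    engine.say(word)        # vocalize the recognized sign
    engine.runAndWait()
```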


This AI-powered robot comes with an integrated smart-home system for seamless, reliable use throughout the day. One component is the hearing clock, which wakes the user with vibrations while the Hearingbot smart-home system raises the curtains. Another feature is gesture recognition, which eases communication for those who rely on sign language: the robot recognizes signs and responds through speakers as well as subtitles. It interprets sign language through a motion sensor and displays the interpretation with a projector, and it can be paired with other products; for example, Hearingbot can manage the cooking status and schedule of a dish while a hearing-impaired person prepares the dish independently.


Designed to help people with hearing impairments speak clearly (while also keeping their vocal muscles from atrophying through lack of use), Commu is a two-part device that captures vocal-cord vibrations and translates them into speech, guiding the user through enunciation and pronunciation. One half of the Commu sits on the throat, where a vibration sensor captures the nuances of the waves and translates them into text. The other half docks the user's phone, which displays the speech waveforms while the phone's app uses AI to judge whether the spoken sentences were clear. This gradual practice helps users retain the power of speech even though they cannot hear speech on a day-to-day basis.
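The feedback step, judging how clearly a sentence came out, can be pictured as comparing what a recognizer understood against what the user meant to say. A minimal sketch of that comparison (the scoring rule and threshold are assumptions, not Commu's algorithm, and the recognizer itself is omitted):

```python
# Illustrative sketch: score how closely the recognized text matches the intended sentence.
from difflib import SequenceMatcher

def clarity_score(intended: str, recognized: str) -> float:
    """Return a 0..1 similarity between the intended sentence and what was understood."""
    return SequenceMatcher(None, intended.lower(), recognized.lower()).ratio()

intended = "good morning, how are you"
recognized = "good mourning how our you"          # stand-in output from a speech recognizer
score = clarity_score(intended, recognized)
print(f"clarity {score:.0%}: " + ("clear enough" if score >= 0.8 else "try that sentence again"))
```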


The Feeling Mouse, designed specifically for people with hearing impairments, appeals to the user's tactile senses to emphasize the all-important “click” that is crucial to operating the device. When the user presses down on the mouse, raised bumps protrude slightly through designated holes beneath the user's fingers to signal the click. It's a simple solution, but incredibly useful for those who would otherwise miss this subtle cue.


The VV-Talker is a device designed to help deaf children learn to speak effectively. If you cannot hear the sounds you make, it is difficult to know whether you are pronouncing words correctly. The device consists of a screen attached to a wand, and sounds are associated with vibrations: as the child speaks, the device provides feedback on accuracy and teaches the child to speak with the correct “vocal vibrations” to achieve the correct modulation.


Acoustic Poetry is an exploration in designing products for deaf culture that focus not only on simple functionality but also offer an emotional connection to an acoustic environment that would otherwise be limited. Through the device, the user broadcasts the environmental sounds that have sparked their curiosity to an interpreter, who responds with a brief verse describing the atmospheric noises. The result is an enriched connection to both everyday experiences and special occasions.


This device aims to give people with less-than-perfect hearing a view of the ambient noise around them. The fashionable part of the Danger Alert Enabler is the wristband, but it also comes with a “micro device” worn on the belt. The two parts work together simply: sound goes in one and comes out the other. As the micro device hears a sound, it interprets it and translates it into a corresponding pictogram on the wristband and, in some cases, a warning vibration.

Outdoors, the device can show you a bird, a sheep, an oncoming thunderstorm, the sound of water, and more. Among everyday ambient sounds, it can indicate a ringing telephone, music, chattering voices, and more. And then there is DANGER!
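In software terms, the belt unit's job is a small dispatch table: classify the sound, then choose a pictogram and decide whether to vibrate. A minimal sketch of that last step (labels, pictogram names, and urgency choices are illustrative; the sound classifier itself is assumed and not shown):

```python
# Illustrative sketch of the alerting logic only, not the product's firmware.
from typing import NamedTuple

class Alert(NamedTuple):
    pictogram: str
    vibrate: bool

SOUND_ALERTS = {
    "birdsong":   Alert("bird", False),
    "rain":       Alert("rainstorm", False),
    "telephone":  Alert("phone", True),
    "fire_alarm": Alert("danger", True),
}

def handle_sound(label: str) -> Alert:
    """Choose what the wristband should display for a classified sound."""
    return SOUND_ALERTS.get(label, Alert("unknown sound", False))

# Example: the belt unit has just classified the ambient sound as a fire alarm.
alert = handle_sound("fire_alarm")
print(f"show '{alert.pictogram}' pictogram" + (", vibrate wristband" if alert.vibrate else ""))
```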


A common misconception about people with hearing impairments is that they cannot experience the joy of music. They may not hear and process sound audibly, but they can certainly feel it; in fact, studies have shown that the sense of touch is heightened, allowing them to perceive music in an entirely different way. SOUNZZZ is a visual, audio, and tactile MP3 player designed for the hearing impaired but universal enough for anyone to enjoy. Sound is translated into a series of vibrations: the user hugs the device to feel the music, and it even plays an equalized light show along with the sound. A very unique device I would love to see on store shelves.
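The translation SOUNZZZ describes, sound into vibration plus an equalizer-style light show, can be sketched as splitting each audio frame into a few frequency bands and mapping energy to output levels. A minimal, illustrative version (the band boundaries and scaling are assumptions, not the product's firmware):

```python
# Illustrative sketch: drive LED brightness from per-band energy and vibration from loudness.
import numpy as np

BANDS_HZ = [(20, 250), (250, 2000), (2000, 8000)]   # bass, mid, treble (an assumed split)

def frame_to_outputs(frame: np.ndarray, sample_rate: int):
    """Return (vibration strength 0..1, one LED level 0..1 per band) for one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    band_energy = [float(spectrum[(freqs >= lo) & (freqs < hi)].sum()) for lo, hi in BANDS_HZ]
    peak = max(band_energy) or 1.0                                    # avoid dividing by zero on silence
    leds = [energy / peak for energy in band_energy]
    vibration = float(min(1.0, 2.0 * np.sqrt(np.mean(frame ** 2))))   # scaled RMS loudness
    return vibration, leds

# Example: a 50 ms frame of a 200 Hz tone lights the bass LED hardest.
fs = 16000
t = np.arange(int(0.05 * fs)) / fs
vibration, leds = frame_to_outputs(0.5 * np.sin(2 * np.pi * 200 * t), fs)
print(f"vibration={vibration:.2f}, led levels={[round(level, 2) for level in leds]}")
```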


Verdadero, Marvin S., and Jennifer C. Dela Cruz. “An Assistive Hand Glove for Hearing and Speech Impaired Persons.” 2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), November 2019, 1–6. doi:10.1109/HNICEM48295.2019.9072695.

Fletcher, Mark D., and Jana Zgheib. “Haptic Sound-Localisation for Use in Cochlear Implant and Hearing-Aid Users.” Scientific Reports 10, no. 1 (August 25, 2020): 14171. doi:10.1038/s41598-020-70379-2.

Abraham, Ajish K., and Manohar N. “Efficacy of an Assistive Device for Museum Access to Persons with Hearing Impairment.” Journal of the All India Institute of Speech & Hearing 34 (January 2015): 117–27. http://search.ebscohost.com.proxy2.library.illinois.edu/login.aspx?direct=true&db=eft&AN=119864592&site=eds-live&scope=site.

Nielsen, Annette Cleveland, Sergi Rotger-Griful, Anne Marie Kanstrup, and Ariane Laplante-Lévesque. “User-Innovated eHealth Solutions for Service Delivery to Older Persons With Hearing Impairment.” American Journal of Audiology 27 (November 2018): 403–16. doi:10.1044/2018_AJA-IMIA3-18-0009.

Mielke, Matthias, and Rainer Bruck. “AUDIS Wear: A Smartwatch Based Assistive Device for Ubiquitous Awareness of Environmental Sounds.” 2016 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), August 2016, 5343–47. doi:10.1109/EMBC.2016.7591934.