NVIDIA AI Podcast: How Deep Learning Can Translate American Sign Language
Deep learning has accelerated machine translation between spoken and written languages. But it’s lagged behind when it comes to sign language.
Now Syed Ahmed, a computer engineering major at the Rochester Institute of Technology, is unleashing its power to translate between sign language and English.
“You want to talk to your deaf or hard of hearing friend, but you don’t know sign language, what would you do in that case?” says Ahmed, a research assistant at the National Technical Institute for the Deaf, in a conversation with Michael Copeland in this week’s episode of the AI Podcast.
Ahmed fed around 1,700 sign language videos into a deep learning algorithm. The model was able to analyze the signers’ physical movements and translate them into written English.
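The article doesn’t describe Ahmed’s actual architecture, but the core idea — mapping a video of movement to a text label — can be sketched as a simple classification pipeline: summarize each video’s frames as a motion feature vector, then assign the nearest known sign. Everything below (the gloss labels, shapes, and the nearest-centroid classifier) is an illustrative assumption, not the system from the podcast.

```python
import numpy as np

# Illustrative sketch only: treat sign recognition as classification of
# per-video motion features. This is NOT Ahmed's model; the labels,
# shapes, and nearest-centroid classifier are assumptions.

GLOSSES = ["HELLO", "THANK-YOU", "FRIEND"]  # hypothetical sign labels

def video_features(frames: np.ndarray) -> np.ndarray:
    """Summarize a video (num_frames, height, width) as one feature vector
    by averaging frame-to-frame differences -- a crude stand-in for the
    motion cues a deep network would learn."""
    motion = np.abs(np.diff(frames, axis=0))   # per-frame movement
    return motion.mean(axis=0).ravel()         # flatten to 1-D features

def train_centroids(videos, labels):
    """Average the feature vectors of each gloss's training videos."""
    feats = np.stack([video_features(v) for v in videos])
    return {g: feats[np.array(labels) == g].mean(axis=0) for g in set(labels)}

def predict(video, centroids):
    """Caption a new video with the gloss of the nearest feature centroid."""
    f = video_features(video)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

# Tiny synthetic demo: random "videos" standing in for real footage.
rng = np.random.default_rng(0)
train_videos = [rng.random((8, 4, 4)) for _ in range(6)]
train_labels = [GLOSSES[i % 3] for i in range(6)]
centroids = train_centroids(train_videos, train_labels)
caption = predict(train_videos[0], centroids)
```

A real system would replace the hand-crafted motion feature with a learned one (e.g. a convolutional or recurrent network trained on the 1,700 videos) and output full captions rather than single glosses.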
“You point your phone at your friend and while they sign, automatic captions appear on your phone,” Ahmed said.
By bridging the gap between a visual language and a written one, the model opens up endless possibilities, according to Ahmed. Its ability to map body language could help predict certain health conditions. And in an augmented-reality setting, a phone wouldn’t even be needed as a medium — translated words could appear right beside the signer’s face.
Testing is still underway. Since the model is only as good as its data set, one challenge the system has faced is interpreting colloquial terms in ASL. The number of videos the algorithm has to sort through also causes the captions to lag.
“It’s very new right now, in the future we will be making more experiments with decision making tasks,” Ahmed said.
Read the entire article here: AI Podcast: How Deep Learning Can Translate American Sign Language
via the fine folks at NVIDIA.