Google has announced that smartphones can now interpret sign language and hand gestures and convert them into speech.
Although the global tech firm has not built an application of its own from these algorithms, it has published them in the hope that developers will use them to create their own applications based on this new technology.
Until now, this type of software had worked only on PCs.
Although some have cautioned that the technology may struggle to fully capture certain gestures and conversations, campaigners from the hearing-impaired community have welcomed the move.
Writing in an AI blog post, Google research engineers Valentin Bazarevsky and Fan Zhang said that the main purpose of the freely published technology was to serve as “the basis for sign language understanding.”
The researchers created the algorithms using MediaPipe, an open-source framework for processing video and other perceptual data.
A spokeswoman for Google told the press: “We’re excited to see what people come up with. For our part, we will continue our research to make the technology more robust and to stabilise tracking, increasing the number of gestures we can reliably detect.”
Some have pointed out that an application recognising such gestures would still be quite inaccurate, because it cannot read the facial expressions or the speed of signing, both of which have a distinct effect on the meaning of what is being said. Google has assured everyone, however, that this is only the first step in what will be a long journey.
Most notably, regional signs that fall outside a standardised sign language would not be covered by such an application.
The cutting-edge technology, which can track two hands at once, has won the support of Jesal Vishnuram, technology manager at Action on Hearing Loss, who nonetheless said it still needs improvement: “From a deaf person’s perspective, it’d be more beneficial for software to be developed which could provide automated translations of text or audio into British Sign Language (BSL) to help everyday conversations and reduce isolation in the hearing world.”
This is not the first technology developed to convert sign language into speech.
Last year, Microsoft, together with the National Technical Institute for the Deaf, used desktop computers in classrooms to support students with hearing disabilities through a presentation translator.
In the past, many students missed important information because they had to constantly divide their attention between what was being written on the board and the human sign language interpreters.