The research team at Microsoft’s labs is set to launch a new Kinect feature that recognizes sign language and translates it into text. Researchers at Microsoft Research Asia have teamed up with the Institute of Computing Technology at the Chinese Academy of Sciences to bring this feature to life.
Although the feature is still in the early stages of development, the software giant has released a demo video of it. This is one of the most remarkable Kinect features so far, as it may allow deaf and hard-of-hearing people to interact with their computers in their native sign language.
The team of researchers at Microsoft has been studying sign language recognition for a long time to develop this feature. After experimenting with special input sensors and a dedicated web camera, the team finally turned to Kinect’s body-tracking abilities to enable sign-language recognition. The Kinect camera tracks the hand movements of the hearing-impaired computer user, and the accompanying Windows software then analyzes those hand and finger movements to identify the most relevant word.
Explaining the new project, Microsoft says, “The words are generated via hand tracking by the Kinect for Windows software and then normalized, and matching scores are computed to identify the most relevant candidates when a signed word is analyzed.”
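To make the quoted pipeline concrete, here is a minimal, hypothetical sketch of what “normalize, then compute matching scores” could look like for a tracked hand trajectory. All function names, the unit-box normalization, and the simple mean-distance score are illustrative assumptions for this article, not Microsoft’s actual implementation.

```python
import math

def normalize(trajectory):
    """Scale a hand trajectory of (x, y) points into a unit bounding box,
    so a sign matches regardless of where or how large it was signed."""
    xs = [p[0] for p in trajectory]
    ys = [p[1] for p in trajectory]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in trajectory]

def matching_score(traj, template):
    """Mean point-to-point distance; lower means a better match.
    Assumes equal-length trajectories for simplicity."""
    dists = [math.dist(a, b) for a, b in zip(traj, template)]
    return sum(dists) / len(dists)

def recognize(trajectory, templates):
    """Return the candidate word whose template has the best (lowest) score."""
    traj = normalize(trajectory)
    scores = {word: matching_score(traj, normalize(t))
              for word, t in templates.items()}
    return min(scores, key=scores.get)

# Toy usage: two made-up sign templates, then a noisy version of one.
templates = {
    "hello": [(0, 0), (1, 1), (2, 0)],
    "thanks": [(0, 2), (1, 0), (2, 2)],
}
print(recognize([(0.1, 0.0), (1.0, 0.9), (1.9, 0.1)], templates))  # hello
```

A production system would compare variable-length trajectories (e.g. with dynamic time warping) and rank several candidates rather than returning a single word, but the normalize-then-score structure is the same idea the quote describes.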
As described in the official blog post, the software currently supports only American Sign Language; it will soon be updated to include sign languages from other parts of the world.