SENSORS TO SIGN LANGUAGE: A NATURAL APPROACH TO EQUITABLE COMMUNICATION
Thomas Fouts, Ali Hindy, Chris Tanner
SPS
Sign Language Recognition (SLR) aims to improve the equity of communication with the hearing impaired. However, SLR typically relies on recorded videos of the signer. We develop a more natural solution by fitting a signer with arm sensors and classifying the sensor signals directly into language. We refer to this task as Sensors-to-Sign-Language (STSL). While existing STSL systems demonstrate effectiveness with small vocabularies of fewer than 100 words, we aim to determine whether STSL can scale to larger, more realistic lexicons. For this purpose, we introduce a new dataset, SignBank, which consists of 6,000 signings spanning 580 distinct words from 15 different signers, and constitutes the largest such dataset. Using a simple but effective model for STSL, we establish a strong baseline performance on SignBank. Notably, despite being trained on only four signings of each word, our model correctly classifies new signings with 95.1% accuracy (out of 580 candidate words). This work enables and motivates further development of lightweight, wearable hardware and real-time modelling for SLR.
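The abstract does not specify the model architecture, but the task shape is clear: map a variable-length, multichannel arm-sensor recording to one of 580 word labels, learning from only a few signings per word. As a purely illustrative sketch (not the authors' method), the following shows a minimal nearest-centroid baseline for this kind of few-shot sequence classification: each recording is linearly resampled to a fixed length, flattened, and matched against per-word centroid templates. All names, the 64-step resampling length, and the synthetic sensor channels are assumptions for illustration.

```python
import numpy as np


def resample(signal, length=64):
    """Linearly resample a (T, C) multichannel sensor recording to (length, C)."""
    t_old = np.linspace(0.0, 1.0, signal.shape[0])
    t_new = np.linspace(0.0, 1.0, length)
    return np.stack(
        [np.interp(t_new, t_old, signal[:, c]) for c in range(signal.shape[1])],
        axis=1,
    )


class NearestCentroidSTSL:
    """Toy STSL classifier (illustrative only, not the paper's model):
    average the flattened, resampled signings of each word into a centroid
    template, then label a new signing by its nearest centroid."""

    def __init__(self, length=64):
        self.length = length
        self.centroids = {}

    def fit(self, recordings, labels):
        grouped = {}
        for rec, lab in zip(recordings, labels):
            grouped.setdefault(lab, []).append(resample(rec, self.length).ravel())
        self.centroids = {lab: np.mean(v, axis=0) for lab, v in grouped.items()}

    def predict(self, rec):
        x = resample(rec, self.length).ravel()
        # Return the word whose centroid is closest in Euclidean distance.
        return min(self.centroids, key=lambda lab: np.linalg.norm(x - self.centroids[lab]))


if __name__ == "__main__":
    # Synthetic stand-in for sensor data: each hypothetical "word" produces a
    # characteristic 3-channel oscillation with noise and varying duration.
    rng = np.random.default_rng(0)

    def fake_signing(word, T):
        t = np.linspace(0.0, 1.0, T)
        freq = {"hello": 2.0, "thanks": 5.0}[word]
        base = np.sin(2 * np.pi * freq * t)
        channels = np.stack([base, 0.5 * base, np.cos(2 * np.pi * freq * t)], axis=1)
        return channels + 0.05 * rng.standard_normal((T, 3))

    # Mirror the paper's setup of four training signings per word.
    recs, labs = [], []
    for word in ("hello", "thanks"):
        for T in (80, 95, 110, 120):
            recs.append(fake_signing(word, T))
            labs.append(word)

    clf = NearestCentroidSTSL()
    clf.fit(recs, labs)
    print(clf.predict(fake_signing("hello", 93)))
```

A real STSL system would replace the centroid matcher with a learned sequence model, but this sketch captures the resample-then-classify pipeline and the few-shot training regime described above.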