TRANSFER LEARNING FOR VIDEOS: FROM ACTION RECOGNITION TO SIGN LANGUAGE RECOGNITION
Noha Sarhan, Simone Frintrop
SPS
In this paper, we propose using Inflated 3D (I3D) Convolutional Neural Networks for large-scale, signer-independent sign language recognition (SLR). Unlike other recent methods, ours relies solely on RGB video data and does not require additional modalities such as depth, which is beneficial for the many applications in which depth data is unavailable. We show that transferring spatiotemporal features from a large-scale action recognition dataset is highly valuable for training an SLR model. Building on an action recognition architecture (Carreira and Zisserman, 2017), we use two-stream I3D ConvNets operating on RGB and optical-flow images. Our method is evaluated on the ChaLearn249 Isolated Gesture Recognition dataset and clearly outperforms other state-of-the-art RGB-based methods.
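The two-stream design described above combines an RGB stream and an optical-flow stream, each producing per-class scores that are merged into a single prediction. As a minimal illustration (not the paper's implementation, whose details are not given in this abstract), the common late-fusion step can be sketched by averaging the softmax probabilities of the two streams; the class counts and logit values below are invented for the example:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_two_stream(rgb_logits, flow_logits):
    # Late fusion: average the per-class probabilities of the RGB and
    # optical-flow streams, then pick the highest-scoring class.
    probs = (softmax(rgb_logits) + softmax(flow_logits)) / 2.0
    return probs, int(np.argmax(probs))

# Toy example with 3 gesture classes: the streams rank the classes
# differently, and fusion settles on the class both score highly.
rgb_logits = np.array([2.0, 1.5, 0.1])   # RGB stream favors class 0
flow_logits = np.array([1.4, 2.2, 0.2])  # flow stream favors class 1
probs, pred = fuse_two_stream(rgb_logits, flow_logits)
```

Averaging probabilities rather than logits keeps each stream's contribution bounded, so one over-confident stream cannot dominate the fused score.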