Key Action And Joint Ctc-Attention Based Sign Language Recognition

Haibo Li, Liqing Gao, Ruize Han, Liang Wan, Wei Feng

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 12:32
04 May 2020

Sign Language Recognition (SLR) translates sign language video into natural language. In practice, sign language videos contain a large number of redundant frames, so the essential frames must be selected. However, unlike common action videos, sign language video is characterized by continuous, dense action sequences, which makes it difficult to capture the key actions corresponding to a meaningful sentence. In this paper, we propose to hierarchically search for key actions with a pyramid BiLSTM. Specifically, we first construct three BiLSTMs to model the temporal relationships within the input video sequence. Then, we associate these BiLSTMs by searching for the salient responses within two groups of fixed-scale sliding windows, thereby capturing the key actions. Additionally, to balance sequence alignment and sequence dependency, we propose to jointly train Connectionist Temporal Classification (CTC) and Long Short-Term Memory (LSTM). Experimental results demonstrate the effectiveness of the proposed method.
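The salient-response search over fixed-scale sliding windows can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes each frame has already been reduced to a scalar saliency response (e.g., the norm of a BiLSTM output), and the window size, stride, and selection rule (per-window argmax) are illustrative choices:

```python
def select_key_frames(responses, window=3, stride=3):
    """Pick the index of the most salient frame inside each
    fixed-scale sliding window over a sequence of per-frame
    saliency responses (hypothetical scalar scores).

    responses: list of floats, one saliency score per frame.
    Returns the selected key-frame indices in temporal order.
    """
    keys = []
    for start in range(0, len(responses) - window + 1, stride):
        segment = responses[start:start + window]
        # Index (in the full sequence) of the window's peak response.
        best = start + max(range(window), key=lambda i: segment[i])
        # Avoid duplicates when overlapping windows share a peak.
        if not keys or keys[-1] != best:
            keys.append(best)
    return keys


# Example: frames 1 and 4 carry the strongest responses.
print(select_key_frames([0.1, 0.9, 0.2, 0.3, 0.8, 0.1]))  # → [1, 4]
```

In the paper's pyramid design, two groups of such fixed-scale windows would be applied across the stacked BiLSTMs, so that coarse and fine temporal scales each contribute candidate key actions.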
