
Deep Hashing For Motion Capture Data Retrieval

Na Lv, Ying Wang, Zhiquan Feng, Jingliang Peng

Length: 00:08:31
10 Jun 2021

In this work, we propose an efficient retrieval method for human motion capture (MoCap) data based on supervised deep hash code learning. Raw MoCap data is represented as three 2D images, which encode the trajectories, velocities and self-similarities of the joints, respectively. These image-based representations are fed into a convolutional neural network (CNN) adapted from the pre-trained VGG16 network. Further, we add a hash layer to fine-tune the CNN and generate the hash codes. By minimizing a loss defined by the classification error and constraints on the hash codes, highly discriminative hash representations of the motion data are generated. As experimentally demonstrated on the public HDM05 data set, our algorithm achieves high accuracy compared with state-of-the-art MoCap data retrieval algorithms. In addition, it achieves high efficiency due to the fast matching of hash codes.
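
The abstract describes the architecture only at a high level. The PyTorch sketch below illustrates one plausible realization of a VGG16 backbone with an added hash layer and a classification head trained with a classification loss plus a quantization constraint; the code length (48 bits), the loss weight, and the names HashVGG and hashing_loss are assumptions introduced here for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class HashVGG(nn.Module):
    """VGG16 backbone with an added hash layer and a classification head.

    Minimal sketch of the kind of network described in the abstract;
    layer sizes and the 48-bit code length are assumptions.
    """
    def __init__(self, num_classes, code_bits=48):
        super().__init__()
        vgg = models.vgg16(pretrained=True)           # pre-trained backbone
        self.features = vgg.features
        self.avgpool = vgg.avgpool
        # keep VGG16's fully connected layers up to the 4096-d feature
        self.fc = nn.Sequential(*list(vgg.classifier.children())[:-1])
        self.hash_layer = nn.Linear(4096, code_bits)  # added hash layer
        self.classifier = nn.Linear(code_bits, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        h = torch.tanh(self.hash_layer(x))            # relaxed binary codes in (-1, 1)
        logits = self.classifier(h)
        return h, logits

def hashing_loss(h, logits, labels, quant_weight=0.1):
    # classification error plus a quantization constraint that pushes
    # the relaxed codes toward binary values (+1 / -1)
    ce = nn.functional.cross_entropy(logits, labels)
    quant = (h.abs() - 1.0).pow(2).mean()
    return ce + quant_weight * quant
```

At retrieval time the relaxed codes would be binarized (e.g., torch.sign(h)) and compared by Hamming distance, which is what makes the matching of hash codes fast.
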

Chairs:
William PUECH

