LEARNING SPATIAL-TEMPORAL EMBEDDINGS FOR SEQUENTIAL POINT CLOUD FRAME INTERPOLATION

Lili Zhao, Zhuoqun Sun, Lancao Ren, Qian Yin, Lei Yang, Meng Guo

Poster 11 Oct 2023

A point cloud sequence is usually acquired at a low frame rate owing to limitations of the sensing equipment. Consequently, the immersive experience in virtual reality applications may be greatly degraded. To tackle this issue, point cloud frame interpolation can be used to increase the frame rate of an acquired sequence by generating new frames between consecutive ones. However, it remains challenging for deep neural networks to synthesize high-fidelity point clouds, especially those with complex geometric details and large motion. In this paper, a novel frame interpolation network is proposed that jointly exploits spatial features and scene flows. The key to the success of our method lies in the proposed spatial-temporal feature propagation module and the temporal-aware feature-to-point mapping module: the former embeds the spatial features and scene flows into a spatial-temporal feature representation (STFR), while the latter generates a much-improved target frame from the STFR. Extensive experimental results demonstrate that our method achieves the best performance in most cases.
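To make the two-module pipeline concrete, below is a minimal PyTorch sketch of the flow the abstract describes: per-point spatial features and scene flow are fused into an STFR, which is then decoded back into the coordinates of an intermediate frame. All module realizations here are illustrative assumptions (pointwise MLPs, a coordinate-offset decoder, and a crude index-correspondence stand-in for the learned scene flow); the paper's actual architecture is not specified on this page.

```python
# Hypothetical sketch of the described pipeline -- not the authors' code.
import torch
import torch.nn as nn


def mlp(dims):
    """Stack of shared pointwise Linear+ReLU layers (no ReLU after the last)."""
    layers = []
    for i in range(len(dims) - 1):
        layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
    return nn.Sequential(*layers[:-1])


class SpatialTemporalFeaturePropagation(nn.Module):
    """Embeds per-point spatial features and scene flow into an STFR.
    Assumed realization: concatenate and fuse with a pointwise MLP."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.spatial_encoder = mlp([3, 64, feat_dim])      # xyz -> spatial feature
        self.fusion = mlp([feat_dim + 3, 128, feat_dim])   # feature + flow -> STFR

    def forward(self, points, flow):
        # points: (B, N, 3) coordinates of a source frame
        # flow:   (B, N, 3) per-point scene flow toward the target time
        spatial = self.spatial_encoder(points)
        return self.fusion(torch.cat([spatial, flow], dim=-1))  # (B, N, feat_dim)


class TemporalAwareFeatureToPointMapping(nn.Module):
    """Decodes the STFR into the interpolated frame. Assumed realization:
    regress a coordinate offset conditioned on the interpolation time t."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.decoder = mlp([feat_dim + 1, 128, 64, 3])

    def forward(self, points, stfr, t):
        # t in (0, 1): temporal position of the frame to synthesize
        t_feat = torch.full_like(points[..., :1], t)
        offset = self.decoder(torch.cat([stfr, t_feat], dim=-1))
        return points + offset  # (B, N, 3) interpolated frame


# Usage: synthesize the midpoint frame between two consecutive frames.
if __name__ == "__main__":
    B, N = 2, 1024
    frame0, frame1 = torch.rand(B, N, 3), torch.rand(B, N, 3)
    flow = frame1 - frame0  # crude stand-in for a learned scene-flow estimate

    propagate = SpatialTemporalFeaturePropagation()
    decode = TemporalAwareFeatureToPointMapping()

    stfr = propagate(frame0, flow)
    mid_frame = decode(frame0, stfr, t=0.5)
    print(mid_frame.shape)  # torch.Size([2, 1024, 3])
```

Conditioning the decoder on the scalar time t is what makes the mapping "temporal-aware" in this sketch: the same STFR can, in principle, be decoded to any intermediate timestamp, which matches the stated goal of inserting new frames between consecutive ones.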
