Manet: Mitral Annulus Point Tracking Network in Cardiac Magnetic Resonance
Jianguo Chen, Xulei Yang, Shuang Leng, Ru-San Tan, Zeng Zeng, Liang Zhong
SPS
In this paper, we propose a deep-learning-based, no-reference point cloud quality assessment method that leverages the self-attention mechanism of transformers to accurately predict the perceptual quality score of each degraded point cloud. Additionally, we introduce visual saliency to reflect the behavior of the human visual system, which is drawn to certain regions more than others during evaluation. To this end, we first render 2D projections (views) of a 3D point cloud from different viewpoints. We then weight the projected images with their corresponding saliency maps. Next, we feed the resulting salient images as a sequential input to the transformer encoder, which evaluates the global visual quality information of each view. Finally, the point cloud quality score is obtained by averaging the scores of all projected views. We evaluate our model on the well-known ICIP2020 and SJTU databases. Experimental results show that our model achieves promising performance compared to state-of-the-art point cloud quality assessment metrics.
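The scoring pipeline described above (saliency-weighted views scored individually, then averaged) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `score_point_cloud` and the pluggable `score_view` regressor (standing in for the transformer-based scorer) are hypothetical.

```python
import numpy as np

def score_point_cloud(views, saliency_maps, score_view):
    """Hypothetical sketch: weight each rendered 2D view by its
    saliency map, score each weighted view, and average the scores.

    views         : list of 2D arrays (rendered projections)
    saliency_maps : list of 2D arrays, same shapes as `views`
    score_view    : callable mapping one weighted view to a quality score
                    (in the paper's pipeline, a transformer encoder)
    """
    scores = []
    for view, sal in zip(views, saliency_maps):
        salient = view * sal          # element-wise saliency weighting
        scores.append(score_view(salient))
    return float(np.mean(scores))     # final score = mean over all views
```

With a trivial stand-in scorer (e.g. the mean pixel value), the function simply averages the per-view results, mirroring the last step of the described method.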