SPS
Inspired by the recent Neural Radiance Field (NeRF) work, Implicit Neural Representation (INR) has received wide attention as a self-supervised deep learning framework for Sparse-View Computed Tomography (SVCT) reconstruction. INR-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn this function by minimizing a loss on the sparse-view (SV) sinogram. Benefiting from the continuous image function provided by INR, high-quality CT images can be reconstructed. However, existing INR-based SVCT methods assume by default that there is no relative motion during CT image acquisition, and therefore suffer severe performance drops on real SVCT imaging with even minor subject motion. In this work, we propose a self-calibrating neural field that recovers an artifact-free image from a rigid-motion-corrupted SV sinogram without using any external training data. Specifically, we introduce a transform matrix for each projection pose in the sinogram to represent the inaccurate projection poses caused by rigid subject motion. We then optimize these transform matrices jointly with the CT image to achieve rigid-motion-corrected CT reconstruction. We conduct numerical experiments on a public CT image dataset. The results indicate that our model significantly outperforms two representative INR-based methods on SVCT reconstruction tasks with four different levels of rigid motion.
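The core idea of the per-pose calibration above can be sketched in a few lines: each projection view gets its own learnable 2D rigid transform (rotation angle plus translation), and measurement coordinates are mapped through that transform before being fed to the INR. The snippet below is a minimal NumPy illustration under assumed conventions; the function names, the number of views, and the parameterization are illustrative, not the paper's actual implementation.

```python
import numpy as np

def rigid_transform(theta, tx, ty):
    """Build a 2D rigid-motion matrix (rotation + translation) in
    homogeneous coordinates from per-view parameters."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0., 0., 1.]])

# Hypothetical setup: one learnable (theta, tx, ty) triple per sparse view,
# initialized near zero (i.e., near the nominal, motion-free poses).
rng = np.random.default_rng(0)
n_views = 60
motion_params = rng.normal(scale=1e-3, size=(n_views, 3))

def correct_coords(coords, view_idx):
    """Map (N, 2) measurement coordinates of one projection view through
    that view's rigid transform; the result would be queried by the MLP."""
    T = rigid_transform(*motion_params[view_idx])
    homo = np.concatenate([coords, np.ones((len(coords), 1))], axis=1)
    return (homo @ T.T)[:, :2]

coords = np.array([[0.1, 0.2], [0.3, -0.4]])
corrected = correct_coords(coords, 0)
print(corrected.shape)  # (2, 2)
```

In the joint optimization the paper describes, gradients from the sinogram loss would flow both into the MLP weights and into `motion_params`, so the poses self-calibrate while the image is reconstructed.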