What is it about?
This paper proposes a sparse-view CT reconstruction method that improves reconstruction performance by learning to select projection views.
Why is it important?
Sparse-View CT (SVCT), which provides low-dose and high-speed CT imaging, plays an important role in medical imaging. However, as the number of projection views decreases, the reconstructed image suffers from severe artifacts.
Perspectives
Recent works utilize deep learning methods to improve the imaging quality of SVCT and achieve promising performance. However, these methods mainly focus on network design and modeling but overlook the importance of choosing projection views. To address this issue, this paper proposes a Projection-view LeArning Network (PLANet), which estimates the importance of different view angles during reconstruction network training and selects the projection views that allow high-quality image restoration. Specifically, we generate synthesized sparse-view sinograms by subsampling projections from full-view sinograms according to a learnable distribution, which is learned jointly with the reconstruction network. The most informative projection views can then be selected for sparse-view acquisition on imaging equipment. Furthermore, the online generation of sparse-view sinograms provides effective data augmentation, improving the stability and performance of the reconstruction network. In short, our method selects the important projection views and learns a high-performance reconstruction network in one unified deep-learning framework. Comprehensive experiments show that the proposed method achieves promising results compared to state-of-the-art methods, and ablation studies further demonstrate the effectiveness and robustness of PLANet. A minimal code sketch of the view-selection idea is given below.
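The summary does not include code, but the core idea of subsampling a full-view sinogram according to a learnable distribution over view angles can be sketched as follows. This is a minimal illustration in PyTorch, assuming a Gumbel-softmax relaxation for differentiable selection; the names `ViewSelector`, `num_views`, and `num_selected` are hypothetical and do not reflect the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewSelector(nn.Module):
    """Hypothetical sketch: learn an importance distribution over projection
    views and subsample a full-view sinogram accordingly (not the paper's code)."""

    def __init__(self, num_views: int, num_selected: int):
        super().__init__()
        # One learnable logit per projection angle; trained jointly with the
        # reconstruction network so gradients reflect view importance.
        self.logits = nn.Parameter(torch.zeros(num_views))
        self.num_selected = num_selected

    def forward(self, full_sinogram: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        # full_sinogram: (batch, num_views, num_detectors)
        # Draw differentiable one-hot samples over view angles, one per
        # selected view, using the Gumbel-softmax relaxation.
        samples = [
            F.gumbel_softmax(self.logits, tau=tau, hard=True)
            for _ in range(self.num_selected)
        ]
        mask = torch.stack(samples).amax(dim=0)  # (num_views,), roughly num_selected ones
        # Zero out non-selected views to synthesize a sparse-view sinogram.
        return full_sinogram * mask.view(1, -1, 1)

# Usage sketch: keep about 60 of 720 views, then feed the sparse sinogram to
# any reconstruction network and train the selector and network jointly.
selector = ViewSelector(num_views=720, num_selected=60)
sparse_sino = selector(torch.randn(2, 720, 512))
```

At deployment time, one would presumably take the top-k view angles ranked by the learned logits and configure the scanner to acquire only those projections, rather than sampling a new mask per batch.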
Liutao Yang
Nanjing University of Aeronautics and Astronautics
Read the Original
This page is a summary of: Learning Projection Views for Sparse-View CT Reconstruction, October 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3503161.3548204.
Contributors
The following have contributed to this page