Computational Imaging in 3D X-ray Microscopy: Reconstruction, Image Segmentation, and Time-Evolved Experiments
Sridhar Niverty, Hamid Torbatisarraf, Viktor Nikitin, Vincent De Andrade, Stanislau Niauzorau, Natalia Kublik, Bruno Azeredo, Aniket Tekawade, Francesco De Carlo, Nikhilesh Chawla
SPS
Length: 00:22:09
Time-dependent x-ray microscopy (XRM) is an excellent technique for developing a fundamental, mechanistic understanding of material behavior. Computational imaging plays a critical role in XRM in a variety of ways. 2D projections are acquired and the resulting datasets are reconstructed using a filtered back projection algorithm. Several imaging artifacts are typically present, such as beam hardening, misalignment of the data, and drift during time-evolved experiments (particularly at high temperatures and/or in nanometer-resolution scans). Minimizing and removing these artifacts is therefore very important, all the more so because image segmentation is then performed to quantify the statistics of the microstructure (often as a function of time). The efficiency and accuracy of image segmentation depend directly on the quality of the initial reconstructed data. Thus, there is a need to develop efficient, robust algorithms that can handle the large datasets obtained by 4D, time-dependent x-ray microscopy. In this paper, we describe the challenges associated with computational imaging during x-ray microscopy. The use of Convolutional Neural Network (CNN) architectures based on a deep learning approach, as a means of automating the handling of x-ray microscopy datasets from both lab-scale and synchrotron instruments, is discussed. CNN techniques that robustly process ultra-large volumes of data in relatively short time frames can dramatically accelerate tomographic data analysis, opening up novel avenues for performing 4D characterization experiments.
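For background, the filtered back projection step the abstract refers to can be sketched for the simple 2D parallel-beam case in plain NumPy. This is an illustrative toy, not the authors' pipeline: production synchrotron workflows use optimized packages (e.g. TomoPy), and the function name and ramp-filter choice here are this sketch's own assumptions.

```python
import numpy as np

def filtered_back_projection(sinogram, thetas):
    """Toy parallel-beam FBP reconstruction of one 2D slice.

    sinogram: (n_angles, n_det) array of line-integral projections
    thetas:   projection angles in radians (one per sinogram row)
    """
    n_angles, n_det = sinogram.shape

    # Step 1: ramp (Ram-Lak) filter each projection in the Fourier domain.
    # This compensates for the 1/|f| blurring of plain back projection.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Step 2: smear (back-project) each filtered projection across the grid.
    mid = (n_det - 1) / 2.0
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for p, theta in zip(filtered, thetas):
        # Detector coordinate hit by each pixel at this view angle.
        t = X * np.cos(theta) + Y * np.sin(theta) + mid
        recon += np.interp(t.ravel(), np.arange(n_det), p).reshape(n_det, n_det)
    return recon * np.pi / n_angles
```

The artifacts listed above (beam hardening, misalignment, drift) enter exactly through this step: any inconsistency between the measured `sinogram` rows and the assumed geometry in `t` is smeared into the reconstructed volume, which is why artifact correction must precede segmentation.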