Multimodal microscopy image alignment using spatial and shape information and a branch-and-bound algorithm
Shuonan Chen (Columbia University); Bovey Y Rao (Columbia University); Stephanie Herrlinger (Columbia University); Attila Losonczy (Columbia University); Liam Paninski (Department of Statistics, Columbia University); Erdem Varol (Columbia University)
Multimodal microscopy experiments that image the same population of cells under different experimental conditions have become a widely used approach in systems and molecular neuroscience. A key obstacle is aligning the different imaging modalities so that complementary information about the observed cell population (e.g., gene expression and calcium signals) can be combined. Traditional image registration methods perform poorly when only a small subset of cells is present in both images, as is common in multimodal experiments. We cast multimodal microscopy alignment as a cell subset matching problem. To solve this non-convex problem, we introduce an efficient and globally optimal branch-and-bound algorithm that finds subsets of point clouds in rotational alignment with each other. In addition, we use complementary information about cell shape and location to compute the matching likelihood of cell pairs across the two imaging modalities, further pruning the optimization search tree. Finally, we use the maximal set of cells in rigid rotational alignment to seed image deformation fields and obtain the final registration. Our framework outperforms state-of-the-art histology alignment approaches in matching quality and is faster than manual alignment, providing a viable way to improve the throughput of multimodal microscopy experiments.
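
Illustrative sketch (not the authors' released code): the core idea of searching for a rigid rotation that brings the largest subset of cell centroids from two modalities into correspondence can be demonstrated with a simple interval branch-and-bound over a 2-D rotation angle. The function names, tolerances, and the rotation-only search below are assumptions for demonstration; the paper's full method additionally scores cell-pair likelihoods from shape and location to prune the search tree and then seeds a deformation field from the matched subset.

import numpy as np
from scipy.spatial import cKDTree

def count_inliers(theta, src, dst, tol=5.0):
    # Number of rotated source centroids within `tol` of some target centroid.
    c, s = np.cos(theta), np.sin(theta)
    rotated = src @ np.array([[c, -s], [s, c]]).T
    d, _ = cKDTree(dst).query(rotated)
    return int(np.sum(d <= tol))

def upper_bound(lo, hi, src, dst, tol=5.0):
    # Optimistic inlier count for any angle in [lo, hi]: rotate by the midpoint
    # and inflate the tolerance by each point's worst-case displacement
    # r * half_width (chord length <= radius * angle), a valid over-estimate.
    mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
    c, s = np.cos(mid), np.sin(mid)
    rotated = src @ np.array([[c, -s], [s, c]]).T
    slack = np.linalg.norm(src, axis=1) * half
    d, _ = cKDTree(dst).query(rotated)
    return int(np.sum(d <= tol + slack))

def bnb_rotation(src, dst, tol=5.0, angle_res=1e-3):
    # Branch-and-bound over the rotation angle in [-pi, pi); returns an
    # (approximately, up to `angle_res`) optimal angle and its inlier count.
    src = src - src.mean(0)
    dst = dst - dst.mean(0)
    best_theta, best_score = 0.0, count_inliers(0.0, src, dst, tol)
    queue = [(-np.pi, np.pi)]
    while queue:
        lo, hi = queue.pop()
        if upper_bound(lo, hi, src, dst, tol) <= best_score:
            continue  # bound: no angle in this interval can beat the incumbent
        mid = 0.5 * (lo + hi)
        score = count_inliers(mid, src, dst, tol)
        if score > best_score:
            best_theta, best_score = mid, score
        if hi - lo > angle_res:
            queue += [(lo, mid), (mid, hi)]  # branch: bisect the interval
    return best_theta, best_score

For example, calling bnb_rotation(cells_modality_A, cells_modality_B) on two arrays of (x, y) centroids returns the rotation that maximizes the number of matched cells; intervals whose optimistic bound cannot beat the incumbent are discarded, which is what makes the search tractable compared with exhaustive matching.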