Undersampled Dynamic Fourier Ptychography via Phaseless PCA
Zhengyu Chen, Seyedehsara Nayer, Namrata Vaswani
Vision Transformer (ViT)-based models are witnessing exponential growth in the medical imaging community. Among their desirable properties, ViTs provide powerful modeling of long-range pixel relationships, in contrast to inherently local convolutional neural networks (CNNs). These emerging models can be categorized as either hybrid, when used in conjunction with CNN layers (CNN-ViT), or purely Transformer-based. In this work, we conduct a comparative quantitative analysis to study the differences between a range of available Transformer-based models using controlled brain tumor segmentation experiments. We also investigate to what extent such models could benefit from modality interaction schemes in a multi-modal setting. Results on the publicly available BraTS2021 dataset show that hybrid pipelines generally tend to outperform purely Transformer-based models. In these experiments, no particular improvement from multi-modal interaction schemes was observed.
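To make the distinction between the two model families concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a purely Transformer-based encoder versus a hybrid CNN-ViT encoder, assuming BraTS-style multi-modal MRI input with four modalities stacked as channels; all layer sizes and names here are assumptions for illustration only.

```python
# Illustrative sketch only: contrasts a purely Transformer-based encoder with a
# hybrid CNN-ViT encoder on multi-modal (4-channel) MRI slices. Architecture
# details are assumptions, not the models evaluated in the paper.
import torch
import torch.nn as nn


class PureViTEncoder(nn.Module):
    """Patch-embed the input directly and apply Transformer layers."""

    def __init__(self, in_ch=4, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                                   # x: (B, 4, H, W)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.encoder(tokens)


class HybridCNNViTEncoder(nn.Module):
    """Extract local features with a small CNN stem, then model long-range
    relationships with Transformer layers (the hybrid CNN-ViT family)."""

    def __init__(self, in_ch=4, dim=256, depth=4, heads=8):
        super().__init__()
        self.cnn_stem = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                                   # x: (B, 4, H, W)
        feats = self.cnn_stem(x)                             # local CNN features
        tokens = feats.flatten(2).transpose(1, 2)            # (B, N, dim)
        return self.encoder(tokens)


if __name__ == "__main__":
    x = torch.randn(1, 4, 128, 128)                          # one multi-modal slice
    print(PureViTEncoder()(x).shape)                         # torch.Size([1, 64, 256])
    print(HybridCNNViTEncoder()(x).shape)                    # torch.Size([1, 1024, 256])
```

In this sketch, the hybrid variant differs only in that a convolutional stem supplies locally aggregated features to the Transformer, which is the general design pattern the abstract refers to as CNN-ViT.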