Savgan: Self-Attention Based Generation Of Tumour On Chip Videos
Sandeep Manandhar, Irina Veith, Maria Carla Parrini, Auguste Genovesio
Generation of videomicroscopy sequences will become increasingly important for training and evaluating dynamic image analysis methods. The latter are crucial to the study of dynamic biological processes such as tumour-immune cell interactions. However, current generative models developed in the context of natural image sequences employ either a single 3D (2D+time) convolutional neural network (CNN) based generator, which fails to capture long-range interactions, or two separate (spatial and temporal) generators, which are unable to faithfully reproduce the morphology of moving objects. Here, we propose a self-attention based generative model for videomicroscopy sequences that aims to account for the full range of interactions within a spatio-temporal volume of 32 frames. To reduce the computational burden of such a strategy, we consider the Nyström approximation of the attention matrix. This approach leads to significant improvements in reproducing the structures and the proper motion of videomicroscopy sequences, as assessed by a range of existing and proposed quantitative metrics.
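To give a concrete sense of the Nyström approximation mentioned above, the sketch below shows how a full softmax attention matrix over a flattened spatio-temporal volume can be replaced by three smaller kernels built from landmark tokens. This is a minimal illustration in PyTorch, not the authors' implementation: the function name, the segment-mean choice of landmarks, and the tensor shapes are assumptions made here for clarity.

import torch
import torch.nn.functional as F

def nystrom_attention(q, k, v, num_landmarks=64):
    """Nystrom-approximated softmax attention (illustrative sketch).

    q, k, v: (batch, seq_len, dim) tensors, where seq_len is the flattened
    spatio-temporal volume (e.g. 32 frames x H x W tokens).
    num_landmarks: number of landmark tokens m << seq_len; assumes seq_len
    is divisible by num_landmarks for simplicity.
    """
    b, n, d = q.shape
    scale = d ** -0.5

    # Landmarks: mean-pool queries/keys over contiguous segments of tokens.
    q_land = q.reshape(b, num_landmarks, n // num_landmarks, d).mean(dim=2)
    k_land = k.reshape(b, num_landmarks, n // num_landmarks, d).mean(dim=2)

    # Three small kernels replacing the full n x n attention matrix.
    kernel1 = F.softmax(q @ k_land.transpose(-1, -2) * scale, dim=-1)       # (b, n, m)
    kernel2 = F.softmax(q_land @ k_land.transpose(-1, -2) * scale, dim=-1)  # (b, m, m)
    kernel3 = F.softmax(q_land @ k.transpose(-1, -2) * scale, dim=-1)       # (b, m, n)

    # Moore-Penrose pseudo-inverse of the landmark kernel.
    kernel2_inv = torch.linalg.pinv(kernel2)

    # Approximate output: softmax(QK^T)V ~ K1 K2^+ (K3 V).
    return kernel1 @ (kernel2_inv @ (kernel3 @ v))

With m landmarks, memory and compute scale roughly with n·m rather than n², which is what makes attention over all tokens of a 32-frame volume tractable.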