Analysis Of The Novel Transformer Module Combination For Scene Text Recognition
Yeon-Gyu Kim, Hyunsu Kim, Minseok Kang, Hyug-Jae Lee, Rokkyu Lee, Gunhan Park
SPS
Various methods for scene text recognition (STR) are proposed every year. These methods have dramatically improved performance in the STR field; however, they have not kept pace with general-purpose research in image recognition, detection, speech recognition, and text analysis. In this paper, we evaluate the performance of several deep learning schemes for the encoder part of the Transformer in STR. First, we replace the baseline feed-forward network (FFN) module of the encoder with a squeeze-and-excitation (SE)-FFN or a cross-stage-partial (CSP)-FFN. Second, we replace the overall encoder architecture with local dense synthesizer attention (LDSA) or a Conformer structure. The Conformer encoder achieves the best test accuracy across our experiments, and SE-FFN and CSP-FFN also show competitive performance when the number of parameters is taken into account. Visualizing the attention maps produced by the different encoder combinations allows for qualitative performance analysis.
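To illustrate the kind of module substitution described above, the following is a minimal NumPy sketch of an SE-gated feed-forward block: a standard Transformer FFN whose output channels are re-scaled by squeeze-and-excitation gates. All shapes, weight names, and the exact placement of the SE branch inside the FFN are assumptions for illustration; the paper's actual SE-FFN design may differ.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_ffn(x, W1, W2, Ws1, Ws2):
    """Hypothetical SE-FFN block (illustrative only).

    x:        (T, d) sequence of token features
    W1, W2:   FFN expansion/projection weights, (d, d_ff) and (d_ff, d)
    Ws1, Ws2: SE bottleneck weights, (d, d_r) and (d_r, d)
    """
    h = relu(x @ W1) @ W2              # standard position-wise feed-forward
    s = h.mean(axis=0)                 # squeeze: average over the sequence axis
    g = sigmoid(relu(s @ Ws1) @ Ws2)   # excitation: per-channel gates in (0, 1)
    return h * g                       # re-scale each feature channel
```

In this sketch the gates depend on a global summary of the whole sequence, so the FFN output is modulated by sequence-level channel statistics rather than being purely position-wise.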