
Polygon-Free: Unconstrained Scene Text Detection With Box Annotations

Weijia Wu, Enze Xie, Ruimao Zhang, Wenhai Wang, Ping Luo, Hong Zhou

Length: 00:07:22
04 Oct 2022

Machine learning models are vulnerable to data poisoning attacks whose purpose is to undermine the model's integrity. However, the current literature on data poisoning attacks mainly focuses on ad hoc techniques that are generally limited to either binary classifiers or to gradient-based algorithms. To address these limitations, we first propose a novel model-free label-flipping attack based on the multi-modality of the data, in which the adversary targets the clusters of classes while constrained by a label-flipping budget. The complexity of the proposed attack algorithm is linear in the size of the dataset, and the attack can increase the error up to two times for the same attack budget. Second, we propose a novel defense technique based on the Synthetic Reduced Nearest Neighbor model, which can detect and exclude flipped samples on the fly during the training procedure. Our empirical analysis demonstrates that (i) the proposed attack technique can drastically degrade the accuracy of several models, and (ii) under the proposed attack, the proposed defense technique significantly outperforms other conventional machine learning models in recovering the accuracy of the targeted model.
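The abstract only summarizes the attack; the paper's actual algorithm is not reproduced on this page. As a rough, hypothetical sketch of the general idea, the snippet below flips labels cluster by cluster under a fixed budget, preferring clusters whose labels are already mixed. The function name `cluster_label_flip`, the K-means clustering step, and the impurity heuristic are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_label_flip(X, y, budget, n_clusters=10, random_state=0):
    """Hypothetical cluster-targeted label-flipping attack (illustration only).

    X: (n_samples, n_features) feature matrix
    y: (n_samples,) integer-encoded class labels
    budget: maximum number of labels the adversary may flip
    """
    # Cluster the data to expose its multi-modal structure.
    assignments = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=random_state).fit_predict(X)

    rng = np.random.default_rng(random_state)
    y_poisoned = y.copy()
    classes = np.unique(y)
    flips = 0

    def impurity(c):
        # Fraction of the cluster NOT belonging to its majority class;
        # mixed clusters are assumed to sit near decision boundaries.
        labels = y[assignments == c]
        if labels.size == 0:
            return 0.0
        counts = np.bincount(labels, minlength=classes.max() + 1)
        return 1.0 - counts.max() / counts.sum()

    # Spend the flipping budget on the most label-mixed clusters first.
    for c in sorted(range(n_clusters), key=impurity, reverse=True):
        for i in np.where(assignments == c)[0]:
            if flips >= budget:
                return y_poisoned
            # Flip the label to a randomly chosen different class.
            y_poisoned[i] = rng.choice(classes[classes != y[i]])
            flips += 1
    return y_poisoned
```

A poisoned copy of the labels produced this way could be fed to any off-the-shelf classifier to compare clean versus poisoned accuracy, which is consistent with the model-free framing in the abstract; the specific budget allocation and targeting criterion used in the paper may differ.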
