  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:10:23
21 Sep 2021

Recent works have shown that neural networks are vulnerable to carefully crafted adversarial examples (AEs). By adding small perturbations to original images, AEs can deceive victim models and cause incorrect outputs. Research in adversarial machine learning has started to focus on the detection of AEs in autonomous driving applications. However, existing studies either make simplifying assumptions about the outputs of object detectors or ignore the tracking system in the perception pipeline. In this paper, we first propose a novel similarity distance metric for object detection outputs in autonomous driving applications. Then, we bridge the gap between current AE detection research and real-world autonomous systems by providing a temporal AE detection algorithm that takes the impact of the tracking system into consideration. We evaluate our approach on the Berkeley Deep Drive and CityScapes datasets under different white-box and black-box attacks, and show that it outperforms the mean-average-precision and mean intersection-over-union based AE detection baselines by significantly increasing detection accuracy.
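The abstract compares against mean-intersection-over-union based baselines that measure how much a detector's outputs change between frames. The paper's own similarity metric is not described here, so the following is only a minimal sketch of the baseline idea: the box format, the greedy matching, and the detection threshold are all illustrative assumptions, not the authors' method.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def mean_iou_similarity(dets_prev, dets_curr):
    """Mean IoU between two frames' detections via greedy one-to-one
    matching; unmatched boxes on either side contribute IoU 0."""
    if not dets_prev and not dets_curr:
        return 1.0
    remaining = list(dets_curr)
    scores = []
    for box in dets_prev:
        if remaining:
            best = max(remaining, key=lambda r: iou(box, r))
            scores.append(iou(box, best))
            remaining.remove(best)
        else:
            scores.append(0.0)
    scores.extend(0.0 for _ in remaining)  # penalize extra detections
    return sum(scores) / len(scores)

def is_adversarial(dets_prev, dets_curr, threshold=0.5):
    """Flag the current frame if its detections diverge sharply from the
    previous frame's (threshold is an illustrative choice)."""
    return mean_iou_similarity(dets_prev, dets_curr) < threshold
```

The temporal intuition this sketches: benign consecutive frames produce highly overlapping detections, so a sudden drop in frame-to-frame similarity is a signal of an attack; the paper's contribution is a sharper similarity metric and an algorithm that also accounts for the tracker's influence on the pipeline.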
