
An Efficient End-to-End Image Compression Transformer

Afsana Ahsan Jeny, Masum Shah Junayed, Md Baharul Islam

Length: 00:15:46
19 Oct 2022

Deep neural networks are vulnerable to maliciously crafted inputs called adversarial examples. Research on previously unseen adversarial attacks is important because it can strengthen the reliability of neural networks by revealing potential threats against them. However, because existing adversarial attacks disturb models unconditionally, the resulting adversarial examples are more easily detected through statistical observation or human inspection. To address this limitation, we propose hidden conditional adversarial attacks, whose resulting adversarial examples disturb models only if the input images satisfy attacker-defined conditions. These hidden conditional adversarial examples offer better stealthiness and finer control over when the attack takes effect. Our experimental results on the CIFAR-10 and ImageNet datasets demonstrate their effectiveness and raise serious concerns about the vulnerability of CNNs to these novel attacks.
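
The abstract describes conditional adversarial examples only at a high level. As a rough illustration, the sketch below implements one plausible gradient-based joint objective in PyTorch: a universal perturbation is optimized to raise the loss on inputs that satisfy an attacker-chosen condition while preserving predictions on all other inputs. The condition function, hyperparameters, and loss formulation are assumptions for illustration, not the paper's actual method.

```python
# A minimal sketch of a "hidden conditional" adversarial perturbation.
# Assumptions (not from the paper): a universal additive perturbation,
# a brightness-based trigger condition, and a simple signed joint loss.
import torch
import torch.nn.functional as F

def condition(x):
    # Hypothetical attacker condition: image is brighter than average.
    return x.mean(dim=(1, 2, 3)) > 0.5

def craft_conditional_delta(model, loader, eps=8 / 255, lr=2 / 255, steps=100):
    delta = torch.zeros(1, 3, 32, 32, requires_grad=True)  # CIFAR-10 shape
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        for x, y in loader:
            mask = condition(x).float()                     # 1 = conditioned
            logits = model(torch.clamp(x + delta, 0, 1))
            ce = F.cross_entropy(logits, y, reduction="none")
            # Conditioned inputs: maximize loss (induce misclassification).
            # Unconditioned inputs: minimize loss (preserve behavior).
            loss = (-mask * ce + (1 - mask) * ce).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep perturbation imperceptible
    return delta.detach()
```

Under this formulation, the perturbation is applied to every input at attack time, but only inputs meeting the condition are misclassified, which is what would make the attack harder to detect statistically.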

