Adversarial Attacks on Genotype Sequences

Daniel Mas Montserrat (Stanford University); Alexander Ioannidis (Stanford University)

07 Jun 2023

Adversarial attacks can drastically change the output of a method through small alterations to its input. While this makes them a useful framework for analyzing worst-case robustness, they can also be used by malicious agents to damage machine learning-based applications. The proliferation of platforms that allow users to share their DNA sequences and phenotype information to enable association studies has led to an increase in large genomic databases. Such open platforms are, however, vulnerable to malicious users uploading corrupted genetic sequence files that could damage downstream studies. These studies commonly include steps that analyze the structure of the genomic sequences using dimensionality reduction techniques and ancestry inference methods. In this paper we show how white-box gradient-based adversarial attacks can be used to corrupt the output of genomic analyses, and we explore different machine learning techniques for detecting such manipulations.
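As a rough illustration of the kind of white-box gradient-based attack the abstract describes, the sketch below flips the genotype entries whose loss gradients most strongly push a classifier toward an attacker-chosen ancestry label. This is a minimal sketch under stated assumptions, not the authors' implementation: the untrained stand-in classifier, the 0/1/2 minor-allele-count encoding, and every parameter (`n_snps`, `n_populations`, `n_flips`, the target population) are hypothetical.

```python
import torch
import torch.nn as nn

n_snps, n_populations = 10_000, 7

# Untrained stand-in for an ancestry classifier over genotype values
# encoded as minor-allele counts in {0, 1, 2}. The architecture and
# all sizes are assumptions made for illustration only.
model = nn.Sequential(
    nn.Linear(n_snps, 256),
    nn.ReLU(),
    nn.Linear(256, n_populations),
)
model.eval()

def attack_genotype(geno, target_pop, n_flips=50):
    """One-step, gradient-guided discrete attack: flip the SNPs whose
    loss gradient most strongly pushes the prediction toward the
    attacker-chosen target population."""
    x = geno.clone().float().requires_grad_(True)
    loss = nn.functional.cross_entropy(
        model(x.unsqueeze(0)), torch.tensor([target_pop])
    )
    loss.backward()
    grad = x.grad

    adv = geno.clone().float()
    # Largest |gradient| marks the SNPs where a one-unit change moves
    # the target-class loss the most; step against the gradient sign.
    for idx in grad.abs().topk(n_flips).indices:
        adv[idx] = (adv[idx] - grad[idx].sign()).clamp(0, 2)
    return adv.long()  # keep genotypes as valid discrete allele counts

clean = torch.randint(0, 3, (n_snps,))  # random example sequence
corrupted = attack_genotype(clean, target_pop=3)
print((corrupted != clean).sum().item(), "SNPs altered")
```

Because genotypes are discrete, the sketch clamps each flip to a valid allele count rather than applying the continuous epsilon-ball perturbation used in image-domain attacks; a detector of the kind the paper explores would then attempt to flag such minimally corrupted sequences.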
