  • SPS
    Length: 04:59
24 Sep 2020

The uncontrolled growth of domains such as surveillance, health care, and finance produces large amounts of data that may contain sensitive information, which can become public if it is not appropriately sanitized. Motivated by this issue, we introduce the privacy filter (PF), a novel non-negative matrix factorization (NMF) framework that aims to preserve the privacy of data before publication. More specifically, this framework enables data holders to choose a data dimension that protects user privacy without requiring awareness of the privacy leakage. We also consider the problem of privately learning a PF across multiple sensitive datasets, which leads to a federated learning algorithm that guarantees the protection of private data while achieving high classification accuracy on non-private information. Finally, experiments illustrate the superior performance of the proposed algorithms while protecting users' private data.
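To make the NMF idea concrete, here is a minimal sketch of publishing a low-rank view of a dataset instead of the raw data. This is not the paper's actual PF objective or its dimension-selection rule; the function `nmf`, the rank choice, and the toy data are all illustrative assumptions. It only shows the underlying mechanism the abstract relies on: factorizing a non-negative matrix X into W and H of a rank chosen by the data holder, so that the published reconstruction W @ H discards fine-grained detail.

```python
import numpy as np

def nmf(X, rank, n_iter=200, seed=0, eps=1e-9):
    """Plain multiplicative-update NMF (Frobenius loss): X ~= W @ H, W, H >= 0.

    A textbook baseline, not the paper's privacy-filter algorithm.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H

# Toy non-negative matrix standing in for a sensitive dataset
# (rows = users, columns = attributes; purely hypothetical).
rng = np.random.default_rng(1)
X = rng.random((20, 8))

W, H = nmf(X, rank=3)
X_pub = W @ H  # the sanitized, low-rank view that would be released

# A lower rank yields a coarser reconstruction, suppressing more detail.
err = np.linalg.norm(X - X_pub) / np.linalg.norm(X)
print(f"relative reconstruction error at rank 3: {err:.3f}")
```

In this sketch the rank plays the role of the "data dimension" the abstract mentions: the data holder tunes it to trade reconstruction fidelity against how much potentially sensitive structure survives in the published factorization.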