What is it about?

This research tackles an important privacy challenge in federated learning (FL), a method in which multiple devices collaboratively train a shared machine learning model without sharing their private data. Although FL helps protect individual data, recent studies show that attackers can use a type of machine learning model called a Generative Adversarial Network (GAN) to infer the distribution of a participant's private dataset, potentially reconstructing images or other sensitive data.

To address this, we developed a defense strategy called Anti-GAN. The key idea is to mislead the attacker's GAN by altering the visual features of private images in a way that keeps them useful for training the shared model but makes them unrecognizable if an attacker tries to reconstruct them. Concretely, the user generates fake images with a GAN on their own device and blends them with real images, and the blended images are then used to train the FL model. This makes it much harder for the attacker's GAN to learn meaningful visual patterns about the user's data.

Through extensive experiments on popular datasets such as MNIST and CelebA, we show that Anti-GAN effectively protects against these attacks without significantly reducing the accuracy of the trained model. In short, our approach allows federated learning to remain secure and reliable, even in the face of sophisticated privacy attacks.
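For readers who think in code, the sketch below illustrates the client-side blending step described above. It is only an illustration under assumed details (a toy generator, random stand-in data, and an arbitrary blending weight alpha); the networks and mixing procedure used in the paper differ.

```python
# Minimal sketch (PyTorch) of blending GAN-generated fake images with real private
# images before they are used to train the shared federated model. Network sizes,
# data, and the blending weight `alpha` are illustrative assumptions, not the
# paper's exact construction.

import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy user-side generator producing 28x28 'fake' images from noise."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

def blend(real, fake, alpha=0.6):
    """Mix real and fake images pixel-wise; labels stay those of the real images."""
    return alpha * real + (1 - alpha) * fake

# Stand-ins for one client's private MNIST-like batch.
real_images = torch.rand(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))

# In the paper's setting the generator is trained locally on the user's data;
# here it is left untrained purely for illustration.
generator = TinyGenerator()
fake_images = generator(torch.randn(32, 64)).detach()

mixed_images = blend(real_images, fake_images)

# Local update of the shared federated classifier on the blended batch.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1)
loss = nn.functional.cross_entropy(classifier(mixed_images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# The resulting model update is what the client would upload to the FL server,
# so an attacker observing updates only sees patterns of the blended images.
```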

Read the Original

This page is a summary of: Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning, ACM Transactions on Knowledge Discovery from Data, February 2025, ACM (Association for Computing Machinery),
DOI: 10.1145/3719350.
