What is it about?
Federated Learning (FL) is a way for organizations to work together on building powerful machine learning models without sharing their raw data. This approach has gained popularity in sensitive areas like healthcare, finance, and law, where privacy rules make it difficult to move data around. However, recent studies have shown that FL can still be vulnerable. In particular, a dishonest central server can secretly tamper with the training process to reconstruct private information from participants, essentially “stealing” their data. In this paper, we introduce OASIS, a defense strategy that protects against this type of attack while keeping model performance strong. Our approach uses the simple idea of adding variations to the training data (data augmentation) in a way that disrupts the attacker's ability to reverse-engineer private data. We also explain the underlying reasons these attacks work and identify the conditions that any defense must meet to be reliable. We test OASIS on a wide range of datasets, including both images (like ImageNet and CIFAR-100) and text (like WikiText, Stack Overflow, and Shakespeare). Across all of them, OASIS proves effective, showing it can be a practical and scalable solution to safeguard privacy in real-world FL applications.
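To make the core idea a little more concrete, here is a minimal illustrative sketch (in PyTorch/torchvision; the model, augmentation choices, and data are placeholder assumptions, not the paper's actual OASIS implementation). It shows a federated client applying random augmentations to its local batch before computing the gradient update it shares, so the shared gradients no longer map cleanly back to any raw sample.

```python
# Illustrative sketch only: the model, augmentations, and data below are
# assumptions for demonstration, not the paper's implementation of OASIS.
import torch
import torch.nn as nn
import torchvision.transforms as T

# Placeholder model: a tiny CNN classifier for 32x32 RGB images.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
loss_fn = nn.CrossEntropyLoss()

# Random augmentations applied on the client before any gradient is computed,
# so the update sent to the server reflects perturbed samples, not raw ones.
augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4),
])

def local_update(images, labels):
    """One client-side step: augment the batch, then compute gradients."""
    model.zero_grad()
    loss = loss_fn(model(augment(images)), labels)
    loss.backward()
    # Only these gradients leave the client; the raw images never do.
    return [p.grad.detach().clone() for p in model.parameters()]

# Toy local batch standing in for a client's private data.
grads = local_update(torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,)))
print(len(grads), "gradient tensors ready to send to the server")
```

The point of the sketch is simply that the server only ever sees updates computed from augmented data, which is what makes reversing them back into the original samples much harder.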
Featured Image
Photo by A Chosen Soul on Unsplash
Why is it important?
As more of our personal information is used to train AI systems (medical records, financial transactions, or online conversations), it’s critical that these systems protect privacy. Federated Learning was designed with this in mind: instead of sending raw data to a central location, each participant keeps their data locally and only shares updates to the model. This setup is supposed to make data theft much harder. But in reality, the safeguards aren’t as strong as they appear. If the central server is dishonest, it can still manipulate the training process to secretly reconstruct participants’ private data. This means that sensitive information like someone’s medical history or personal messages could be exposed, even when using a system built to protect privacy. That’s where our work comes in. By creating OASIS, we show that it’s possible to defend against these attacks in a way that doesn’t slow down training or weaken the model. This is important because it means organizations in high-stakes fields like healthcare, law, finance, and more can benefit from advanced AI while still keeping sensitive information safe. In short, this work strengthens the foundation of Federated Learning by tackling one of its biggest hidden risks, making it more trustworthy and practical for real-world use.
Read the Original
This page is a summary of: Securing Federated Learning Against Active Reconstruction Attacks, ACM Transactions on Internet Technology, August 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3762639.
You can read the full text:
Contributors
The following have contributed to this page