What is it about?

Machine learning (ML) models generally get better when they are trained on more data. However, data privacy regulations in fields such as healthcare make it difficult for data owners (e.g., hospitals) to collaborate and train ML models together for better performance. Protocols such as Split Learning make collaborative training possible without sharing raw data, but in doing so they may also introduce new privacy risks. Our work, SplitGuard, aims to protect Split Learning clients against an effective class of attacks on their data privacy, known as training-hijacking attacks, thereby increasing the overall security of the system.
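To make the setting concrete, here is a minimal sketch of the basic Split Learning setup described above (not the authors' implementation, and not SplitGuard itself). The model is cut into a client part and a server part; only the intermediate activations ("smashed data") and their gradients cross the boundary, never the raw inputs. All names and the toy model architecture are illustrative assumptions; this variant also shares labels with the server.

```python
import torch
import torch.nn as nn

# Client keeps the first layers (and the raw data) locally.
client_net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
# Server holds the remaining layers and computes the loss.
server_net = nn.Sequential(nn.Linear(128, 10))

client_opt = torch.optim.SGD(client_net.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def training_step(x, y):
    """One split-learning step: the raw inputs x never leave the client."""
    client_opt.zero_grad()
    server_opt.zero_grad()

    # Client-side forward pass up to the cut layer.
    smashed = client_net(x)
    # Only the smashed data (activations) is sent to the server.
    smashed_sent = smashed.detach().requires_grad_(True)

    # Server-side forward pass and loss computation (labels are
    # shared with the server in this basic variant).
    out = server_net(smashed_sent)
    loss = loss_fn(out, y)

    # Server backpropagates and returns the gradients at the cut layer.
    loss.backward()
    server_opt.step()

    # Client continues backpropagation with the returned gradients.
    smashed.backward(smashed_sent.grad)
    client_opt.step()
    return loss.item()

# Example usage with random tensors standing in for a private dataset.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(training_step(x, y))
```

The gradients returned by the server are the only training signal the client sees, which is precisely what a training-hijacking server can manipulate, and what a defense on the client side has to inspect.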


Read the Original

This page is a summary of: SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning, November 2022, ACM (Association for Computing Machinery). DOI: 10.1145/3559613.3563198.
You can read the full text via the DOI above.
