What is it about?
This paper is about building Machine Learning algorithms in a robust way. Robustness here means tolerating various kinds of failures: software bugs, hardware defects, and compromised (hacked) machines mounting adversarial attacks — so-called Byzantine failures. The algorithm presented in this paper shows how to tolerate such failures and complete the learning task safely.
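The paper's algorithm is more involved than this, but the core idea of Byzantine-tolerant learning — aggregating workers' gradient updates robustly so a few corrupted reports cannot derail training — can be sketched as follows. The coordinate-wise median used here is a standard illustration of robust aggregation, an assumption for this sketch rather than the paper's exact rule:

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Aggregate worker gradients by taking the median of each coordinate.
    Unlike a plain average, a few corrupted (Byzantine) gradients cannot
    drag the result arbitrarily far."""
    return np.median(np.stack(gradients), axis=0)

# Three honest workers report gradients close to the true value [1, 1];
# a fourth, compromised worker sends an arbitrarily large vector.
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.0, 0.9])]
byzantine = np.array([1e6, -1e6])

avg = np.mean(np.stack(honest + [byzantine]), axis=0)   # ruined by the attacker
med = coordinate_wise_median(honest + [byzantine])      # stays near [1, 1]
```

With a plain average, the single malicious worker dominates the update; with the median, the result stays close to what the honest workers reported.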
Why is it important?
Our lives nowadays are, to a large extent, shaped by Machine Learning (ML) algorithms: recommender systems on social media, face detection on mobile phones, and self-driving cars. We must therefore leave very little room for failure in these algorithms. This paper addresses exactly this problem, which is vital to today's uses of ML.
Read the Original
This page is a summary of: Genuinely Distributed Byzantine Machine Learning, July 2020, ACM (Association for Computing Machinery),
DOI: 10.1145/3382734.3405695.