What is it about?
In an increasingly automated world, many important decisions, such as those made in courtrooms, are already being influenced by algorithms. But are these decisions truly fair to everyone? In my recently published paper, we explored exactly this question: how can we ensure that artificial intelligence algorithms treat all individuals more fairly? To address this, we developed a technique that makes the data used to train these algorithms more balanced and representative, especially for groups that are often subject to discrimination. As a result, we were able to improve the fairness of algorithmic decisions without sacrificing the quality of the outcomes. The goal is simple, yet urgent: to make technology more ethical, fair, and responsible for everyone.
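For readers who want a concrete picture of what "balancing the training data" can mean, here is a minimal sketch of one common approach: oversampling underrepresented groups so that each group is equally represented for each outcome. The column names, the `balance_by_group` helper, and the toy data are hypothetical illustrations of the general idea, not the exact method from the paper.

```python
# Illustrative sketch of group-aware oversampling (not the paper's exact method).
# Assumes a pandas DataFrame with a hypothetical protected attribute "group"
# and a hypothetical binary outcome "label".
import pandas as pd

def balance_by_group(df: pd.DataFrame, group_col: str, label_col: str,
                     seed: int = 0) -> pd.DataFrame:
    """Oversample every (group, label) cell up to the size of the largest
    cell, so each group is equally represented for each outcome."""
    cells = df.groupby([group_col, label_col])
    target = max(len(cell) for _, cell in cells)
    balanced = [
        cell.sample(n=target, replace=True, random_state=seed)
        for _, cell in cells
    ]
    # Concatenate the upsampled cells and shuffle the rows.
    return pd.concat(balanced).sample(frac=1, random_state=seed)

# Hypothetical toy data: group "B" is underrepresented in the sample.
df = pd.DataFrame({
    "group": ["A"] * 8 + ["B"] * 2,
    "label": [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
})
print(balance_by_group(df, "group", "label")["group"].value_counts())
```

After balancing, both groups contribute the same number of training examples per outcome, which is one simple way to reduce the sampling bias a model would otherwise learn from.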
Featured Image
Photo by Tingey Injury Law Firm on Unsplash
Read the Original
This page is a summary of: Data Balancing for Mitigating Sampling Bias in Machine Learning, March 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3672608.3707891.