What is it about?
Artificial Intelligence (AI) is increasingly used to make important decisions in areas like banking, healthcare, and hiring. But AI models can sometimes be biased, which means they may treat people unfairly — for example, giving different outcomes to men and women. Our research asks: Can we design AI systems that are both accurate and fair?
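One common way to make "treating people unfairly" concrete is a fairness metric such as demographic parity, which compares the rate of positive outcomes (e.g. loan approvals) across groups. The snippet below is a minimal, hypothetical illustration of that idea; it is not code from the paper, and the toy data is invented.

```python
# Hypothetical illustration (not from the paper): demographic parity
# compares the positive-outcome rate of a model across two groups.
def demographic_parity_gap(outcomes, groups):
    """Absolute difference in positive-outcome rate between two groups.

    outcomes: list of 0/1 model decisions (1 = favourable outcome)
    groups:   list of group labels, same length as outcomes
    """
    rate = {}
    for g in set(groups):
        members = [o for o, gr in zip(outcomes, groups) if gr == g]
        rate[g] = sum(members) / len(members)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

# Toy loan-approval data: men approved at 0.75, women at 0.25.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["M", "M", "M", "M", "F", "F", "F", "F"]
print(demographic_parity_gap(outcomes, groups))  # -> 0.5
```

A gap of 0 would mean both groups receive favourable outcomes at the same rate; the larger the gap, the stronger the evidence of group-level bias.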
Why is it important?
As AI continues to shape decisions that affect people’s lives, it’s crucial that these systems are not only smart but also fair. Our research shows that fairness doesn’t have to come at the cost of performance. This is a step towards AI you can trust — systems that are accurate, transparent, and equitable.
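The claim that fairness need not cost performance reflects a multi-objective view: instead of collapsing accuracy and fairness into one score, candidate models are compared on both goals at once, keeping those that no other model beats on both. The sketch below shows this Pareto-style selection in general terms; it is an assumption-laden illustration, not the paper's actual grammatical-evolution algorithm, and the model names and scores are invented.

```python
# Hypothetical sketch (not the paper's method): multi-objective selection
# keeps every model that is not dominated on both accuracy and fairness.
def pareto_front(models):
    """models: list of (name, accuracy, fairness); higher is better for both."""
    front = []
    for name, acc, fair in models:
        # A model is dominated if some other model is at least as good on
        # both objectives and strictly better on at least one.
        dominated = any(
            a >= acc and f >= fair and (a > acc or f > fair)
            for _, a, f in models
        )
        if not dominated:
            front.append(name)
    return front

candidates = [
    ("A", 0.92, 0.60),  # very accurate, less fair
    ("B", 0.88, 0.85),  # balanced
    ("C", 0.85, 0.80),  # dominated by B on both objectives
]
print(pareto_front(candidates))  # -> ['A', 'B']
```

The point of keeping the whole front, rather than a single winner, is that it exposes the accuracy-fairness trade-off explicitly, letting a decision-maker choose how much of one to give up for the other.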
Perspectives
For me, this project was more than just a technical challenge. It was about confronting one of the biggest ethical dilemmas in AI: how to make systems that serve everyone fairly. Working on this research reminded me that behind every dataset are real people whose lives can be affected by the outputs of an algorithm. Achieving even a modest improvement in fairness without sacrificing accuracy feels like a meaningful contribution towards more responsible AI. I believe this work is a small but important step toward ensuring that AI is not only powerful, but also humane.
Muhammad Adil Raja
Dundalk Institute of Technology
Read the Original
This page is a summary of: Multi-Objective Fairness Approach Using Causal Bayesian Networks & Grammatical Evolution, July 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3712255.3726716.