What is it about?
Software systems that use machine learning (ML) can produce undesirable outcomes, including unfair or unsafe predictions and decisions. In this paper, we introduce a software toolkit that enables developers to build safer and less biased ML-enabled software.
Why is it important?
The capabilities of ML systems are advancing at a remarkable rate. It is critical that all ML systems be implemented responsibly, but even for relatively simple systems this is difficult work. Much of the industry's focus has been on transparency, yet diagnosing bad behavior in ML systems is only one piece of the problem; the other piece is ensuring that ML systems do not misbehave in the first place. In high-risk applications, it is often neither ethical nor safe to deploy a system without high confidence that it will not misbehave. This is a hard problem, and addressing it has traditionally required teams of ML experts dedicated specifically to these issues. Our software toolkit gives software developers, who may not have much ML experience, the tools they need to ensure, with high confidence, that their systems are safe and fair.
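To make this concrete, below is a minimal sketch of what using the toolkit looks like in Python. It assumes the workflow shown in the toolkit's online tutorials: the developer prepares a "spec" (the dataset, the model, and the behavioral constraints with a required confidence level) and hands it to the Seldonian algorithm. The module paths, the SeldonianAlgorithm class, the load_pickle helper, and the spec.pkl file are taken from those tutorials as best we can reconstruct them, and should be read as assumptions rather than a verified API.

```python
# Minimal sketch of the Seldonian Toolkit usage pattern.
# NOTE: names below follow the toolkit's public tutorials as best recalled
# and are assumptions, not a verified API.

from seldonian.utils.io_utils import load_pickle              # assumed helper
from seldonian.seldonian_algorithm import SeldonianAlgorithm  # assumed class

# A "spec" bundles the dataset, the ML model, and the behavioral constraints
# the developer wants enforced (e.g., a fairness constraint that must hold
# with probability at least 0.95). It is assumed to have been prepared and
# saved beforehand, as in the toolkit's tutorials.
spec = load_pickle("./spec.pkl")

SA = SeldonianAlgorithm(spec)

# run() trains a candidate model and then applies a statistical safety test;
# it returns a model only if the constraints pass that test.
passed_safety, solution = SA.run()

if passed_safety:
    print("Returned a model certified to meet the constraints:", solution)
else:
    print("No Solution Found: refusing to return a possibly unsafe model.")
```

The defining design choice here is that the algorithm is allowed to return "No Solution Found" instead of a model: it refuses to hand back anything it cannot certify, with the requested probability, to satisfy the developer's constraints.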
Read the Original
This page is a summary of: Seldonian Toolkit: Building Software with Safe and Fair Machine Learning, ICSE Companion Proceedings, May 2023, Institute of Electrical & Electronics Engineers (IEEE).
DOI: 10.1109/icse-companion58688.2023.00035.