What is it about?
Machine learning models sometimes fail in hidden or unexpected ways, especially when used to analyze images. These failures can happen more often in certain subsets of data—what experts call “slices”—but finding and understanding these slices usually requires extra information like labels, metadata, or model explanations. This can be expensive or even impossible in real-world settings. Our system, AttributionScanner, helps people identify and explore these problematic slices without needing any extra metadata. It works by visually summarizing how an AI model makes decisions, helping users spot patterns like biased predictions or mislabeled images. The system provides an interactive interface that allows people to see which parts of an image influenced the model’s decision, group similar images together, and fix problems like spurious correlations or confusing labels. This makes it much easier to test, validate, and improve machine learning models—especially for sensitive applications like healthcare, autonomous driving, or surveillance—where understanding why a model fails is just as important as how often it does.
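For readers curious how metadata-free slice finding can work in principle, the snippet below is a minimal, hypothetical sketch rather than AttributionScanner's implementation: it uses plain gradient saliency as the attribution method and k-means clustering to group images with similar attribution patterns into candidate slices. The model, image list, and cluster count are placeholder assumptions; the actual system builds an interactive visual interface on top of this kind of grouping.

```python
# Hypothetical sketch (not the paper's code): cluster per-image attribution
# maps so images whose model "evidence" looks alike form candidate slices.
# Model, images, and cluster count are placeholders.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.cluster import KMeans

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

def attribution_signature(pil_image, grid=16):
    """Gradient saliency w.r.t. the input, pooled to a small grid and normalized."""
    x = preprocess(pil_image).unsqueeze(0).requires_grad_(True)
    score = model(x).max()                        # logit of the predicted class
    score.backward()
    sal = x.grad.abs().mean(dim=1, keepdim=True)  # (1, 1, H, W) saliency map
    sal = F.adaptive_avg_pool2d(sal, grid)        # coarse spatial summary
    sal = sal / (sal.sum() + 1e-8)                # normalize per image
    return sal.flatten().detach().numpy()

# images: a list of PIL validation images (placeholder)
# signatures = np.stack([attribution_signature(img) for img in images])
# slice_ids = KMeans(n_clusters=8, n_init=10).fit_predict(signatures)
# Slices with unusually low accuracy, or with odd attribution patterns, are
# candidates for issues such as spurious correlations or label noise.
```

In a real workflow, a person would then inspect each candidate slice visually, which is exactly where an interactive interface like AttributionScanner's adds value over raw clustering.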
Featured Image
Photo by Andy Kelly on Unsplash
Why is it important?
Most tools for debugging machine learning models depend on structured metadata or expensive explanations, which many real-world datasets simply don’t have. Our method is one of the first to enable metadata-free, visual, human-in-the-loop debugging of model failures in complex vision tasks. AttributionScanner is especially important today because AI models are widely deployed but often behave unpredictably. Our approach empowers researchers and practitioners to find and fix model weaknesses without retraining the model or requiring additional labels, making it practical, lightweight, and scalable. It brings explainability to the forefront—turning black-box models into something humans can interact with, inspect, and trust.
Perspectives
Working on this project has been a deeply rewarding experience. One of our key goals was to bridge the gap between AI explainability and real-world usability—especially in domains where extra data is hard to come by. We wanted to empower people, not just algorithms, to diagnose and fix models. Personally, it was exciting to design a system where meaningful visualization truly drives understanding and improvement, rather than being an afterthought. I hope this work helps more researchers embrace data-centric, human-in-the-loop tools for safer and more trustworthy AI.
Xiwei Xuan
University of California Davis
Read the Original
This page is a summary of: AttributionScanner: A Visual Analytics System for Model Validation with Metadata-Free Slice Finding, IEEE Transactions on Visualization and Computer Graphics, January 2025, Institute of Electrical & Electronics Engineers (IEEE). DOI: 10.1109/TVCG.2025.3546644.