What is it about?
Deep learning models called Convolutional Neural Networks (CNNs) are widely used in everything from image classification to self-driving cars. But understanding why these models behave the way they do, and comparing different models to see which one is better, remains a major challenge, especially for beginners. This paper introduces VAC-CNN, an interactive visual analytics system that helps users explore and compare the inner workings of multiple CNN models. Unlike most tools, which only show performance metrics such as accuracy, VAC-CNN reveals how different models make decisions, which parts of an image they focus on, and how those patterns differ between models. The system combines model explanation methods such as Grad-CAM with visual summaries of performance, prediction consistency, and image statistics. It supports comparison at several levels: surveying many models at once, contrasting how two models behave on the same task, or examining a single model in detail. Through a user-friendly web interface, VAC-CNN makes it easy for researchers, students, and developers to analyze CNN behaviors, identify key differences, and make more informed choices about which model to use or improve.
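The Grad-CAM method mentioned above highlights which image regions drove a prediction by weighting each convolutional feature map by the global average of its gradients, then keeping only the positive part of the weighted sum. A minimal sketch of that weighting scheme, using hypothetical toy tensors (plain Python lists standing in for activations and gradients that a real pipeline would pull from a CNN framework):

```python
def grad_cam(activations, gradients):
    """Compute a Grad-CAM-style heatmap.

    activations: list of K feature maps, each an HxW grid (list of lists).
    gradients:   matching list of K gradient maps (d score / d activation).
    Returns an HxW heatmap: ReLU of the sum of feature maps, each weighted
    by the global average of its gradients.
    """
    h, w = len(activations[0]), len(activations[0][0])
    # alpha_k: global-average-pooled gradient per channel
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    heatmap = [[0.0] * w for _ in range(h)]
    for fmap, wt in zip(activations, weights):
        for i in range(h):
            for j in range(w):
                heatmap[i][j] += wt * fmap[i][j]
    # ReLU: keep only regions that positively influence the class score
    return [[max(v, 0.0) for v in row] for row in heatmap]

# Toy example: two 2x2 feature maps with opposite-signed gradients
acts = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 2.0], [2.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(acts, grads))  # → [[1.0, 0.0], [0.0, 1.0]]
```

This is only an illustration of the underlying formula; VAC-CNN applies such explanation methods to real model activations and renders the resulting heatmaps interactively.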
Why is it important?
As CNNs grow more powerful and are used in more critical applications, simply knowing that a model performs well isn’t enough—we also need to know why it performs the way it does. Most existing tools only support side-by-side comparisons of two models and fail to explain internal decision-making clearly. VAC-CNN is one of the first systems to support comparative analysis across a large number of CNN models, offering both quantitative insights (like accuracy) and qualitative insights (like visual explanations and attention maps) in a cohesive interface. It also offers flexible customization, enabling users to explore different combinations of models, tasks, and explanation methods interactively. This makes VAC-CNN uniquely positioned to support transparent model selection, debugging, and education, especially for those without deep technical knowledge of CNN internals. By helping people visually grasp what models are doing and where they differ, it closes a crucial gap in the AI development pipeline.
Perspectives
Creating VAC-CNN was deeply rewarding—it brought together my interest in explainable AI, interactive design, and human-centered machine learning tools. I’ve seen how challenging it can be for practitioners to choose between different CNN models or understand their decisions. I wanted to build a tool that makes these powerful models more transparent, intuitive, and comparable—not just for experts, but for anyone working with deep learning. It was particularly exciting to support both large-scale comparisons and fine-grained visual explanations in a single platform. I hope VAC-CNN helps democratize model understanding and opens up new ways to teach, evaluate, and trust deep learning systems.
Xiwei Xuan
University of California Davis
Read the Original
This page is a summary of: VAC-CNN: A Visual Analytics System for Comparative Studies of Deep Convolutional Neural Networks, IEEE Transactions on Visualization and Computer Graphics, June 2022, Institute of Electrical & Electronics Engineers (IEEE),
DOI: 10.1109/tvcg.2022.3165347.