What is it about?
Smartphones running Android are used by billions of people, but they are constantly threatened by malicious apps that can steal data, spy on users, or damage devices. Many modern security systems use Artificial Intelligence to detect these apps, but they often work like “black boxes”: they give an answer without explaining why. In this paper, we introduce ANAKIN, a new AI system that not only detects Android malware with high accuracy, but also explains its decisions. Instead of just looking at lists of app actions, our method builds a network that shows how different parts of an app interact with each other. This allows the AI to recognize suspicious patterns more effectively.

We also use an explanation technique that highlights which parts of an app’s behavior caused it to be classified as harmful. This helps security analysts understand what the malware is actually doing and why an alert was raised. Tests on more than 26,000 real Android apps show that our approach is both more accurate and more transparent than existing methods, making malware detection more reliable and easier to trust.
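To make the graph idea concrete, here is a minimal, hypothetical sketch of what “representing app behavior as a network” can look like: API interactions become edges of a graph, and suspicion spreads between connected nodes. The API names and the neighbor-averaging step are illustrative assumptions only; the actual system in the paper uses a trained graph neural network, not this toy propagation.

```python
# Toy sketch (not the paper's implementation): build a graph from
# hypothetical API interactions and spread a "suspicion" score along
# its edges, as a crude stand-in for GNN message passing.

from collections import defaultdict

def build_call_graph(edges):
    """Adjacency sets from (caller, callee) API pairs; undirected here."""
    graph = defaultdict(set)
    for caller, callee in edges:
        graph[caller].add(callee)
        graph[callee].add(caller)
    return graph

def propagate_suspicion(graph, seeds, rounds=2):
    """Each round, every node averages its neighbors' scores, so
    suspicion flows along interactions rather than staying isolated."""
    score = {n: (1.0 if n in seeds else 0.0) for n in graph}
    for _ in range(rounds):
        score = {
            n: 0.5 * score[n]
               + 0.5 * sum(score[m] for m in graph[n]) / len(graph[n])
            for n in graph
        }
    return score

# Hypothetical behavior: contacts and location flow toward a network send,
# while an unrelated ad component stays disconnected from that pattern.
edges = [
    ("readContacts", "openConnection"),
    ("openConnection", "sendData"),
    ("getLocation", "sendData"),
    ("showAd", "loadImage"),
]
scores = propagate_suspicion(build_call_graph(edges),
                             seeds={"readContacts", "sendData"})
```

After propagation, `openConnection` ends up with a higher score than `loadImage`, even though neither was flagged initially: the graph view captures that it sits between data access and exfiltration, which is exactly the kind of interaction pattern a flat list of app actions would miss.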
Why is it important?
Android malware is growing in volume and sophistication, and security teams increasingly rely on Artificial Intelligence to keep users safe. However, many AI-based detection systems behave like black boxes: they may be accurate, but they do not explain why an app is considered dangerous. This makes it harder for experts to trust the system, understand new threats, or fix mistakes. What makes our work unique is that it combines high-performance malware detection with clear explanations. By representing app behavior as a network and using explainable AI, our approach shows not only which apps are malicious, but also which actions inside the app are responsible. This is especially timely as regulators and industry are demanding more transparent and accountable AI systems. This can make a real difference in practice: security analysts can better understand new malware, identify the root causes of missed detections, and respond faster and more confidently to cyber threats. In the long run, this leads to more trustworthy security tools and better protection for millions of Android users.
Perspectives
Working on this paper was especially rewarding because it brought together different strands of our research: cybersecurity, graph-based machine learning, and explainable AI. We were not only interested in building a model that performs well, but in creating something that security analysts could actually understand and trust. We were particularly excited by how the graph representation of app behavior allowed us to uncover patterns that are hard to see with more traditional approaches. Seeing the explanations point to meaningful API interactions, and even help us understand why some malware was missed, convinced us that explainable AI can play a truly practical role in cybersecurity, not just a theoretical one. We hope this work encourages more researchers and practitioners to look beyond accuracy alone and to consider transparency and interpretability as essential parts of building effective and responsible security systems.
Prof. Donato Malerba
Università degli Studi di Bari Aldo Moro
Read the Original
This page is a summary of: Anakin: explainable Android malware detection with graph neural networks, Cybersecurity, February 2026, Springer Science + Business Media,
DOI: 10.1186/s42400-026-00552-z.