What is it about?
Android apps often ask for permission to access sensitive data such as your contacts, location, or camera. To improve privacy, Android 11 introduced “One‑Time Permissions” (OTPs), a feature that is supposed to give an app access only once, until you close the app. But is that promise actually kept?

In this work, we built a computer‑assisted method to check the security of Android’s permission system. Because the system is complex, we used Large Language Models (the technology behind ChatGPT) to help read Android’s official documentation and source code and translate the permission rules into a precise mathematical model. We then used a “model checker” (a tool that exhaustively explores all possible behaviors) to verify whether those rules actually guarantee security.

We tested Android 6 (older) and Android 11 (newer, with OTPs). Our method rediscovered a known vulnerability in Android 6, confirming that the approach works. More importantly, it uncovered a new flaw in Android 11’s One‑Time Permissions, which we call Permanent Permission Access (PPA). This flaw allows a malicious app to keep permanent access to a sensitive resource (e.g., location) even after the app is closed, breaking the “one‑time” promise. We wrote a real exploit app and showed that the vulnerability persists in recent Android versions, including Android 15. Finally, we propose a simple fix to close this security hole.
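To give a flavor of what “model checking” means here, the sketch below is a deliberately tiny, hypothetical state machine, not the paper’s actual Android model: a one‑time permission is reduced to two booleans, every reachable state is enumerated, and the safety property “a closed app never still holds the grant” is checked in each one. The states, actions, and transition rules are all invented for illustration.

```python
# Illustrative toy only -- NOT the paper's actual Android model.
# A one-time permission as a tiny state machine: (app_running, granted).

ACTIONS = ["grant_once", "use_resource", "close_app"]

def step(state, action):
    """Hypothetical transition rules for a one-time permission."""
    running, granted = state
    if action == "grant_once" and running:
        return (running, True)
    if action == "close_app":
        return (False, False)  # intended rule: closing the app revokes the grant
    return state

def reachable(initial, depth):
    """Exhaustively enumerate every state reachable within `depth` steps."""
    seen = {initial}
    frontier = {initial}
    for _ in range(depth):
        frontier = {step(s, a) for s in frontier for a in ACTIONS} - seen
        seen |= frontier
    return seen

# Safety property: a closed app must never still hold the permission.
for running, granted in reachable((True, False), depth=5):
    assert not (granted and not running), "one-time promise violated!"
print("property holds in this toy model")
```

A real model checker explores vastly larger state spaces, but the principle is the same: every reachable state is tested against the property, so a violation like PPA surfaces as a concrete counterexample trace rather than a hunch.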
Why is it important?
First formal study of One‑Time Permissions – Previous research on Android permissions focused on older Android versions. OTPs were introduced to improve privacy but had never been rigorously verified with formal methods. Our work closes that gap.

A new, persistent vulnerability (PPA) – The flaw we discovered is not just theoretical: our working exploit runs on Android 15, the latest version at the time of writing, so millions of active devices remain vulnerable. The vulnerability allows an app with no special privileges to convert a one‑time permission into a permanent one, a clear violation of user expectations and of Android’s own security model.

Demonstrates a practical AI‑assisted formal method – Using Large Language Models to help extract behavior from documentation and code is a novel, semi‑automated approach. It can be reused for other complex systems (e.g., iOS permissions, cloud access controls), making formal verification faster and more accessible to researchers and practitioners.

Actionable fix – We don’t just report the problem; we provide a concrete, implementable solution that can directly inform Android’s security patches and help protect user data.
Perspectives
What excites me most about this work is the synergy between two very different worlds: the fluid, pattern‑matching power of large language models and the exhaustive, mathematically precise nature of model checking. Initially, I was skeptical that an LLM could reliably extract formal rules from Android’s sprawling documentation and source code. But by carefully designing a chain‑of‑thought process, we were able to guide the AI to produce a clean, verifiable model – saving months of manual effort.

When the model checker then flagged a violation in the One‑Time Permission logic, I honestly did not believe it at first. We double‑checked, wrote the exploit, and to my surprise, the app worked perfectly on a stock Android 15 device. That moment – seeing a real phone grant permanent access when it should have been only one‑time – was both alarming and deeply satisfying. It confirmed that our hybrid human‑AI‑formal approach is not just an academic exercise; it finds real, dangerous bugs that have escaped detection for years. I hope this work encourages others to combine modern AI with classical verification techniques, making our digital systems safer without requiring heroic manual effort.
Behrouz TorkLadani
University of Isfahan
Read the Original
This page is a summary of: A Formal Lens on Android Permissions System: Modeling, Verification, and Exploitation Using LLMs and Model Checking, ACM Transactions on Privacy and Security, February 2026, ACM (Association for Computing Machinery). DOI: 10.1145/3799897.