What is it about?

We looked into a serious problem: mobile apps that seem normal can be misused by one person (an abuser) to harm someone else (a victim), such as a partner or a bystander. The twist is that the harm does not come from the app itself, but from how people use it against others. To deal with this, we introduce the idea of a misuse audit: a way to check whether an app can be misused, even without access to how the app was built.

What we did: We studied reviews written by users in app stores. First, we trained a computational model to spot reviews that describe spying or stalking experiences. Then, we used this model to find apps and features that are commonly misused.

What we found: Both victims and abusers describe misuse in app reviews. Interestingly, abusers leave positive reviews while still describing harmful behavior. In total, we found 156 apps with features that can easily be misused, and these features fall into four main types.
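To give a concrete sense of the first step, here is a minimal, hypothetical sketch of training a text classifier to flag reviews that describe spying or stalking. The toy data, features, and model choice below are illustrative assumptions, not the model, data, or labels used in the study.

# Hypothetical sketch: flag app reviews that describe spying or stalking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples (1 = review describes spying/stalking, 0 = otherwise).
reviews = [
    "My ex installed this to track my location without me knowing",
    "Great app for sharing photos with family",
    "Used it to read my partner's messages remotely, works perfectly",
    "Battery drains too fast, otherwise fine",
]
labels = [1, 0, 1, 0]

# Simple bag-of-words classifier over review text.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

# Apply the trained model to new reviews to surface potentially misused apps.
new_reviews = ["I can see everything she does on her phone"]
print(model.predict_proba(new_reviews))  # probability the review describes misuse

In the study, a model of this kind is applied to large numbers of reviews so that apps with many misuse-related reviews can be surfaced for a closer audit.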


Why is it important?

Our method can help identify exploitable apps at scale, even without access to their code. This can help app stores and regulators address harm before more damage is caused.

Read the Original

This page is a summary of: Understanding Mobile App Reviews to Guide Misuse Audits, Communications of the ACM, July 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3685528.
