What is it about?
In our research, we sought to develop an automated social network abuse detection system that reduces its users' attack surface by removing or isolating friends predicted to be perceived as potential attack vectors. This is a substantial challenge because adversaries leverage social network friend relationships to collect sensitive data from users and to target them with abuse, including fake news, cyberbullying, malware, and propaganda. We leveraged our findings to develop AbuSniff (Abuse from Social Network Friends), a system that evaluates, predicts, and protects users against perceived friend abuse by suggesting personalized defensive actions for such friends.

We began by developing the first mobile app questionnaire that can detect perceived strangers and friend abusers. To replace the questionnaire, we then introduced mutual Facebook activity features that have a statistically significant overall association with the AbuSniff decision, and showed that they can train supervised learning algorithms to predict questionnaire responses under 10-fold cross-validation. We trained our system with several supervised learning algorithms, including Random Forest (RF), Decision Trees (DT), SVM, PART, SimpleLogistic, MultiClassClassifier, K-Nearest Neighbors (KNN), and Naive Bayes, and chose the best-performing algorithm for predicting each questionnaire question.

We evaluated AbuSniff through online experiments with participants recruited from a crowdsourcing site, drawn from 25 countries across 6 continents. Results showed that the questionnaire-based AbuSniff was significantly more effective than a control app in terms of participant willingness to unfriend, sandbox, and restrict abusive friends.
The predictive version of AbuSniff was highly accurate (F-Measure up to 97.3%) in predicting strangers and abusive friends, and participants agreed to take the AbuSniff-suggested actions in 78% of cases. Compared to a control app, AbuSniff significantly increased participants' self-reported willingness to reject invitations from strangers and abusers, their awareness of the implications of friend abuse, and their perceived protection from friend abuse.
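The model-selection step described above can be sketched in a few lines. This is an illustrative example only, not the authors' code: the paper's experiments used classifiers such as PART, SimpleLogistic, and MultiClassClassifier (Weka-style learners), so the sketch below substitutes scikit-learn equivalents for a subset of them, and the features and labels are synthetic stand-ins for the mutual Facebook activity features and questionnaire responses.

```python
# Illustrative sketch: train several candidate classifiers and keep the one
# with the best mean 10-fold cross-validation score for a given question.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for mutual-activity features (e.g., common friends,
# mutual posts) and a binary questionnaire response to predict.
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1).astype(int)

candidates = {
    "RF": RandomForestClassifier(random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
}

# 10-fold cross-validation; select the best-scoring model for this question.
scores = {name: cross_val_score(clf, X, y, cv=10).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In the paper this selection is repeated per questionnaire question, so different questions may end up served by different algorithms.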
Why is it important?
This work is important because it contributes to enhancing the cybersecurity and privacy of Internet users, including those who use mobile devices and participate in geosocial networks. Much of this work concerns cyberbullying and related phenomena, which are sadly common in the United States and around the world. This trend is evidenced by a Pew Research Center report stating that more than half of Americans have endured cyberbullying of one sort or another, underscoring the relevance of this work to some of today's most common online challenges.
Perspectives
Abuse from Social Network Friends (AbuSniff) is designed for the social media platform Facebook, where it identifies friends that it deems to be either strangers or abusive in some fashion. It then protects the user by suggesting defensive actions for such friends: restricting their access, unfollowing them, unfriending them, or sandboxing them, a cybersecurity term meaning they are isolated from potential targets.
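The mapping from a predicted friend category to a suggested defense can be pictured as a simple decision rule. The category names and the specific category-to-action pairing below are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical sketch of mapping a predicted friend category to a suggested
# defensive action; the pairing here is illustrative, not the paper's rules.
from enum import Enum

class Category(Enum):
    STRANGER = "stranger"
    ABUSIVE = "abusive"
    SAFE = "safe"

def suggest_action(category: Category) -> str:
    """Return a suggested defensive action for a predicted friend category."""
    if category is Category.STRANGER:
        return "unfriend"   # remove the relationship entirely
    if category is Category.ABUSIVE:
        return "sandbox"    # isolate the friend from the user's data
    return "keep"           # no action for friends deemed safe

print(suggest_action(Category.STRANGER))  # prints "unfriend"
```

In the real system the suggestion is personalized per friend, and the user remains free to accept or decline it.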
Dr. Sajedul Talukder
Edinboro University
Read the Original
This page is a summary of: A Study of Friend Abuse Perception in Facebook, ACM Transactions on Social Computing, October 2020, ACM (Association for Computing Machinery),
DOI: 10.1145/3408040.