What is it about?

Trolling and social bots have proven to be powerful tactics for manipulating public opinion and sowing discord among Twitter users. Both require substantial content fabrication and account coordination to evade Twitter's detection of nefarious platform use. In this paper we explore an alternative tactic for covert social media interference: inducing misperceptions about genuine, non-trolling content from verified users. The tactic uses malware that covertly manipulates targeted words, hashtags, and Twitter metrics in a man-in-the-middle fashion before the genuine content is presented to a targeted user. Early tests found that the malware can achieve the same goal as trolls and social bots, that is, silencing or provoking social media users to express their opinion in polarized debates. We then conducted experimental tests in controlled settings (N = 315) in which the malware covertly manipulated the perception of a Twitter debate on the risk of vaccines causing autism. The empirical results demonstrate that inducing misperception is an effective tactic for silencing users on Twitter when they debate polarizing issues like vaccines. Based on these findings, we propose a solution for countering malware-induced misperception that could also be used against trolls and social bots on Twitter.


Why is it important?

The concept of "malware-induced misperception" is inspired by trolling, election meddling, and active measures, but removes the need to fabricate content or inject inflammatory tweets and comments. Fabrication raises the risk that the social media platform will detect a misperception campaign. Instead, the misperception is induced on the local machine or smartphone, where the malware covertly rearranges the words, endorsement metrics (e.g., likes or shares), and topic keywords (e.g., hashtags) of a genuine social media post while the targeted user is reading it in real time. Because it targets genuine content, the malware lets the targeted individual verify the facts and the credibility of the source, thereby bypassing all the conventional cues people use to spot "phishy" content. The goal of the malware is to covertly manipulate the data in transit and steer the interpretation of genuine content toward the objective of the malicious actor, that is, to achieve the goals of trolling, meddling, and active measures in a clandestine and more targeted fashion.
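To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of in-transit rewrite the paper describes: targeted words and hashtags are swapped and an endorsement metric is deflated before a genuine post is rendered. The tweet fields (`text`, `likes`), the swap table, and the `rewrite` function are hypothetical examples, not code from the paper.

```python
# Hypothetical substitution table: targeted word and hashtag replacements.
SWAPS = {"no": "a", "#VaccinesWork": "#VaccineInjury"}

def rewrite(tweet: dict) -> dict:
    """Return a copy of a genuine tweet with targeted tokens and metrics altered."""
    out = dict(tweet)
    # Replace targeted words/hashtags token by token.
    out["text"] = " ".join(SWAPS.get(tok, tok) for tok in tweet["text"].split())
    # Deflate the endorsement cue so the post appears less supported.
    out["likes"] = tweet["likes"] // 10
    return out

original = {"text": "Studies show no link #VaccinesWork", "likes": 5400}
print(rewrite(original))
# → {'text': 'Studies show a link #VaccineInjury', 'likes': 540}
```

The source account, timestamp, and facts remain verifiable, which is why the usual credibility checks against "phishy" content do not help the reader here.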

Perspectives

The article introduces an important perspective on social media manipulation: malicious actors interested in polluting the conversation on Twitter can do so with malware that manipulates legitimate content instead of fabricating content as in the past (e.g., the troll farms active during the 2016 US elections and the Brexit referendum). This enables clandestine, contextually relevant targeting of social media users, forcing them to misperceive events, facts, and information, distorting their map of reality, and allowing an attacker to achieve psychological domination.

Dr. Filipo Sharevski
DePaul University

Read the Original

This page is a summary of: To tweet or not to tweet, July 2020, ACM (Association for Computing Machinery),
DOI: 10.1145/3407023.3407025.
You can read the full text:
