What is it about?

Phishing emails increasingly rely less on obvious technical tricks and more on social engineering: language that persuades, pressures, or manipulates recipients. A key difficulty is that the same tactic can appear in very different ways: sometimes it is clear and direct, and sometimes it is woven into normal-looking wording and context. This poster introduces a structured way to describe and analyse social engineering in phishing emails. First, we define a three-level complexity framework that captures how a tactic can be expressed, from more overt forms to more concealed ones. Second, we present a tactic-controlled workflow that generates phishing emails expressing a target tactic and then extracts the trigger phrases: the specific words or sentences that carry the tactic in the message. Finally, these trigger phrases are organised into fine-grained linguistic patterns for six common tactics: Authority, Liking, Reciprocity, Commitment & Consistency, Social Proof, and Scarcity. Overall, the poster provides a clear vocabulary and a practical process for examining where social-engineering tactics appear in phishing text and how they are expressed at different levels of subtlety.
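To make this vocabulary concrete, the sketch below shows one way a tactic-annotated email from such a workflow could be represented. This is a hypothetical illustration, not the authors' implementation: the six tactic names and the idea of three complexity levels come from the poster, while the level labels, class names, field names, and the example email are my own assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class Tactic(Enum):
    """The six persuasion tactics analysed in the poster."""
    AUTHORITY = "Authority"
    LIKING = "Liking"
    RECIPROCITY = "Reciprocity"
    COMMITMENT_CONSISTENCY = "Commitment & Consistency"
    SOCIAL_PROOF = "Social Proof"
    SCARCITY = "Scarcity"

class Complexity(Enum):
    """Three complexity levels, from overt to concealed.
    The poster defines three levels; these labels are illustrative."""
    OVERT = 1      # tactic stated clearly and directly
    BLENDED = 2    # tactic mixed into otherwise ordinary wording
    CONCEALED = 3  # tactic hidden inside normal-looking context

@dataclass
class TriggerPhrase:
    """The specific words or sentence that express the tactic."""
    text: str
    tactic: Tactic
    span: tuple  # (start, end) character offsets in the email body

@dataclass
class AnnotatedEmail:
    subject: str
    body: str
    target_tactic: Tactic    # tactic the email was generated to express
    complexity: Complexity   # how overtly the tactic is expressed
    triggers: list = field(default_factory=list)

# Hypothetical example: an overt Scarcity email with one extracted trigger phrase.
email = AnnotatedEmail(
    subject="Your account access expires today",
    body="Only 2 hours remain to verify your account before it is locked.",
    target_tactic=Tactic.SCARCITY,
    complexity=Complexity.OVERT,
)
email.triggers.append(
    TriggerPhrase(text="Only 2 hours remain", tactic=Tactic.SCARCITY, span=(0, 19))
)
assert email.body[slice(*email.triggers[0].span)] == email.triggers[0].text
```

Storing triggers as explicit spans in the text is what makes the analysis explainable: a detector or training tool can highlight the exact words that carry the tactic instead of returning an opaque score.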

Why is it important?

Phishing defences often focus on technical indicators (links, domains, attachments) or broad awareness tips, but many successful attacks rely on psychological manipulation embedded in the text. As generative AI lowers the cost of producing convincing messages, social-engineering content becomes easier to scale, personalise, and refine, while remaining difficult to assess in a transparent way. This work is important because it offers a structured and explainable way to reason about social engineering in phishing. Instead of treating “social engineering” as a single label, we separate tactic complexity into three levels and connect each tactic to concrete trigger phrases and linguistic patterns. This makes the problem measurable and actionable: security teams can build training content with graduated difficulty, researchers can benchmark how detection performance degrades as manipulation becomes subtler, and practitioners can design more interpretable systems that point to textual evidence rather than black-box scores. Most importantly, our results highlight a practical warning: subtle manipulation remains a blind spot, even when automated tools perform well on obvious cases. That gap matters for both workforce training and AI-assisted security workflows, where over-trust in “good-looking” emails can lead to real-world compromise.

Perspectives

In practice, “spotting phishing” is often taught as a checklist exercise: look for typos, strange links, or urgent language. But the cases that worry me most are the ones that feel normal: messages that use everyday politeness, familiarity, or reasonable requests to steer a recipient into a small mistake. That motivation shaped this poster. Rather than asking whether a model is “smart”, we wanted to understand when subtle manipulation stops being reliably recognisable, and which text cues actually carry the tactic. The trigger-phrase extraction is especially valuable to me, because it turns an abstract concept (“this feels manipulative”) into something people can discuss, critique, and improve. My hope is that this line of work helps bridge research and practice: creating phishing examples that are realistic enough for training, while also providing explanations that are concrete enough for defenders to trust and act on.

Yicun Tian
Swinburne University of Technology

Read the Original

This page is a summary of: Poster: Decoding Social Engineering: A Multi-Level Framework for Tactic Generation, Annotation, and Evaluation, November 2025, ACM (Association for Computing Machinery),
DOI: 10.1145/3719027.3760745.
