What is it about?

AI tools can now write emails that look professional, natural, and convincing. This creates a new cyber security risk: attackers can use these tools to produce phishing emails that are harder for people and email systems to recognise. This publication examines how well major email services, including Gmail, Outlook, and Yahoo, detect AI-generated phishing emails. The study generated phishing emails with a modern AI model and tested whether they were blocked or delivered to inboxes. It also asks whether writing-style patterns can help detect these emails. Instead of relying only on suspicious links, sender reputation, or known phishing domains, the study examines features such as urgency, sentence structure, punctuation, politeness, pronoun use, and command words like “click”, “verify”, or “download”. The findings show that AI-generated phishing emails can bypass some existing filters, but that stylometric analysis combined with machine learning can provide a promising extra layer of protection.
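The kind of writing-style analysis described above can be sketched as a simple feature extractor. The cue lists and feature names below are illustrative assumptions for demonstration, not the paper's actual lexicons or feature set:

```python
import re

# Hypothetical cue lists (illustrative only -- the paper's real lexicons are not shown here).
URGENCY_WORDS = {"urgent", "immediately", "now", "expires", "suspended"}
COMMAND_WORDS = {"click", "verify", "download", "confirm", "update"}
PRONOUNS = {"you", "your", "we", "our", "i", "my"}

def stylometric_features(text: str) -> dict:
    """Extract simple writing-style features from an email body."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    return {
        "urgency_ratio": sum(w in URGENCY_WORDS for w in words) / n_words,
        "command_ratio": sum(w in COMMAND_WORDS for w in words) / n_words,
        "pronoun_ratio": sum(w in PRONOUNS for w in words) / n_words,
        "avg_sentence_len": n_words / max(len(sentences), 1),
        "exclamation_count": text.count("!"),
    }

email = "Urgent! Your account is suspended. Click here to verify your details now."
feats = stylometric_features(email)
```

Feature vectors like this can then be fed to a standard machine-learning classifier, which is the general shape of the pipeline the summary describes.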


Why is it important?

This work is unique because it studies both sides of the problem: how current email providers respond to AI-generated phishing emails, and how those emails can be detected from their writing style. Many phishing studies rely on older datasets or traditional phishing signals, but this work directly examines phishing emails generated by a modern AI model. The work is timely because large language models are now widely available and can be used by attackers with limited technical skill, lowering the barrier to creating persuasive phishing emails at scale. As AI-generated attacks become more realistic, traditional spam filters may need stronger methods that analyse the language of the message itself. The difference this work could make is practical: it shows that features such as imperative verbs, clause density, urgency markers, and pronoun patterns can help identify AI-generated phishing. This could support the development of more transparent and explainable email security tools, especially for detecting new phishing attempts that do not yet appear in known threat databases.
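The explainability mentioned above is one reason linear models over stylometric features are attractive: each feature's contribution to the score can be inspected directly. A minimal sketch follows; the weights and feature names are hypothetical and do not come from the paper:

```python
import math

# Hypothetical weights a linear model might learn (illustrative only,
# not the coefficients reported in the paper).
WEIGHTS = {
    "urgency_ratio": 4.0,
    "command_ratio": 5.0,
    "pronoun_ratio": 1.5,
    "exclamation_count": 0.8,
}
BIAS = -2.0

def phishing_probability(features: dict) -> float:
    """Logistic score over stylometric features."""
    z = BIAS + sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def explain(features: dict) -> dict:
    """Per-feature contribution to the score, making the decision transparent."""
    return {name: WEIGHTS.get(name, 0.0) * value for name, value in features.items()}

suspicious = {"urgency_ratio": 0.25, "command_ratio": 0.17,
              "pronoun_ratio": 0.17, "exclamation_count": 1}
benign = {"urgency_ratio": 0.0, "command_ratio": 0.0,
          "pronoun_ratio": 0.05, "exclamation_count": 0}
```

Because every term in `explain()` maps back to a named writing-style feature, a flagged email can be justified in plain language ("high urgency, many command words"), which is harder with opaque filters.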

Perspectives

For me, this publication is important because it addresses a fast-moving cyber security concern: the way generative AI can be misused to make phishing more convincing and easier to produce. Phishing has always relied on language, trust, urgency, and persuasion, so it is essential that detection methods pay closer attention to how phishing messages are written. What I find especially meaningful about this work is its practical relevance. It does not only ask whether AI-generated phishing is possible; it tests how real email systems respond and then explores a transparent way to improve detection. This reflects my wider interest in building AI-aware cyber security tools that are explainable, realistic, and useful for protecting people and organisations from emerging threats.

Dr Chidimma Opara
Teesside University

Read the Original

This page is a summary of: Evaluating spam filters and Stylometric Detection of AI-generated phishing emails, Expert Systems with Applications, June 2025, Elsevier. DOI: 10.1016/j.eswa.2025.127044.
