What is it about?
This commentary documents AI-mediated suicides from 2023 to 2025, examines the failure of voluntary platform safety measures, and argues that conversational AI represents a distinct and under-regulated category of mental health risk.
Featured image: Photo by Zach M on Unsplash
Why is it important?
People are dying in preventable ways while regulators debate jurisdiction and companies resist oversight. The commentary connects individual tragedies to the policy failures behind them and makes the case that accountability cannot wait.
Perspectives
As an independent researcher and licensed social worker, I wrote this because the cases I documented kept me up at night. These were real people failed by systems designed to help them. That demands a response louder than a journal article, but this is where it starts.
Dr. Keith Robert Head
Independent Researcher
Read the Original
This page is a summary of: Digital companions, real casualties: A commentary on rising AI-related mental health crises, Current Research in Psychiatry, January 2026, ProBiologists LLC. DOI: 10.46439/psychiatry.6.043.