What is it about?

This research shows how we can make AI mental health tools more transparent and trustworthy. Instead of giving a simple yes/no prediction, we use multiple AI agents that act like a therapist, a patient, and a clinical expert. They hold a conversation based on official DSM-5 questions and then explain, step by step, how they reached a diagnosis. This makes the process easier to understand, easier to audit, and safer to use. The system also allows us to generate synthetic mental health conversations without using real patient data, helping researchers train and evaluate AI models while protecting privacy.
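
To make the workflow idea more concrete, here is a minimal sketch in Python of how such a role-playing loop could be wired together. It is not the authors' implementation: the Agent class, the run_session function, the ask_llm stub, and the example screening questions are illustrative placeholders standing in for real language-model calls and real DSM-5 interview items.

```python
# Minimal sketch (not the paper's code): a "therapist" agent asks DSM-5-style
# screening questions, a "patient" agent answers, and a "clinical expert" agent
# reviews the transcript and writes a step-by-step diagnostic rationale.

from dataclasses import dataclass
from typing import Callable, List, Tuple

# Illustrative screening prompts (placeholder wording, not actual DSM-5 items).
SCREENING_QUESTIONS = [
    "Over the past two weeks, how often have you felt down or hopeless?",
    "Have you lost interest or pleasure in activities you used to enjoy?",
    "How have your sleep and energy been recently?",
]

@dataclass
class Agent:
    role: str                            # "therapist", "patient", or "clinical_expert"
    system_prompt: str                   # role instructions given to the language model
    ask_llm: Callable[[str, str], str]   # (system_prompt, user_prompt) -> reply

    def respond(self, prompt: str) -> str:
        return self.ask_llm(self.system_prompt, prompt)

def run_session(therapist: Agent, patient: Agent, expert: Agent) -> Tuple[List[Tuple[str, str]], str]:
    """Run one simulated counseling session and return (transcript, rationale)."""
    transcript: List[Tuple[str, str]] = []
    for question in SCREENING_QUESTIONS:
        q = therapist.respond(f"Ask the client, in your own words: {question}")
        a = patient.respond(f"The therapist asks: {q}\nAnswer as the client.")
        transcript.append((q, a))

    dialogue = "\n".join(f"Therapist: {q}\nClient: {a}" for q, a in transcript)
    rationale = expert.respond(
        "Review this counseling dialogue and explain, step by step, "
        "which diagnostic criteria appear to be met or not met, and why:\n" + dialogue
    )
    return transcript, rationale

# Stub model so the sketch runs without any external API; a real system would
# swap in an actual LLM call here.
def ask_llm(system_prompt: str, user_prompt: str) -> str:
    return f"[{system_prompt.split(':')[0]} reply to: {user_prompt[:60]}...]"

if __name__ == "__main__":
    therapist = Agent("therapist", "Therapist: ask empathetic screening questions", ask_llm)
    patient = Agent("patient", "Patient: answer consistently with a given persona", ask_llm)
    expert = Agent("clinical_expert", "Clinical expert: audit the dialogue against DSM-5", ask_llm)
    transcript, rationale = run_session(therapist, patient, expert)
    print(rationale)
```

Because the patient role is also played by an agent, the same loop can be used to generate fully synthetic counseling dialogues, which is the privacy-preserving data-generation idea described above.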


Why is it important?

AI is rapidly entering mental healthcare, but most current systems are “black boxes” that are difficult to trust, especially when decisions affect people’s wellbeing. Our work introduces a transparent way for AI to explain how it arrives at a mental health diagnosis, instead of just giving an answer. This is important because trust, accountability, and safety are essential in clinical settings. By generating realistic synthetic data without exposing real patients, this work also opens new opportunities for safer research, faster innovation, and more reliable mental health AI systems.

Perspectives

As a researcher, I felt that too many AI mental health systems focused only on accuracy and not enough on transparency or trust. I wanted to explore how we can design AI that doesn’t just make predictions but actually shows its reasoning in a way real clinicians and patients can understand. For me, this paper represents a step towards AI systems that are more responsible, more accountable, and more aligned with how mental health care should operate in the real world. My hope is that this work inspires future research that prioritizes safety and explainability, not just performance.

Mithat Ozgun
Vrije Universiteit Amsterdam

Read the Original

This page is a summary of: Trustworthy AI Psychotherapy: Multi-Agent LLM Workflow for Counseling and Explainable Mental Disorder Diagnosis, November 2025, ACM (Association for Computing Machinery),
DOI: 10.1145/3746252.3761164.
You can read the full text via the DOI above.

Contributors

The following have contributed to this page: Mithat Ozgun, Vrije Universiteit Amsterdam.