What is it about?
Artificial intelligence (AI) and machine learning are increasingly used in mental health research, for example to predict suicide risk or assess mental health conditions from speech patterns. However, many clinical journals do not have editors or reviewers with the expertise to properly evaluate these studies. As a result, important critiques of AI methods are sometimes dismissed as too technical or irrelevant. This can allow flawed models to be published, potentially affecting patient care. In this perspective, I describe real cases where methodological problems were overlooked, explain why this happens, and suggest changes to make peer review more rigorous and fair.
Why is it important?
This work highlights a critical gap in how clinical research journals evaluate AI studies. Editorial processes can allow flawed AI models to be published without proper scrutiny. This has direct consequences for patient safety and scientific integrity. The article is timely because AI is rapidly being integrated into healthcare, yet publishing practices have not kept pace. Recommendations include involving technically proficient reviewers and creating space for methodological critique. These steps can improve peer review and help maintain trust in research.
Perspectives
Working on this article gave me the opportunity to reflect on the unique position I hold as someone fluent in both AI and clinical publishing. Over the years, I have witnessed firsthand how complex computational models are often misunderstood or dismissed by traditional editorial processes in psychiatry journals. Writing this perspective was my attempt to articulate a systemic problem: that journals frequently lack the technical expertise needed to rigorously evaluate AI-driven studies, leaving critical methodological concerns unaddressed. I wanted to highlight not just the flaws in individual models but the structural barriers that prevent proper scrutiny, such as editorial pre-clearance and the marginalization of technically detailed critiques. For me, this work is both a warning and a roadmap for reform. It emphasizes that safeguarding scientific rigor and patient safety requires embedding AI expertise into the peer review and editorial process, rather than treating computational methods as peripheral or overly technical.
Assoc. Prof. Ezra N. S. Lockhart
National University
Read the Original
This page is a summary of: Expertise in AI and clinical publishing exposes peer review gaps: A perspective, Artificial Intelligence in Health, July 2025, Inno Science Press, DOI: 10.36922/aih025210049.