What is it about?
AI can write code, but when should developers trust it? We surveyed 25 practitioners and analyzed 88 studies to identify what builds trust in AI for software engineering, finding that accuracy, security, and clarity are key to creating reliable tools.
Why is it important?
The conversation on AI for code has moved from "can it work?" to "can we trust it?". Uniquely, our research asks developers themselves what builds their trust, finding that functional correctness and security matter most. We offer a roadmap for creating AI coding tools that are safe, reliable, and genuinely trustworthy for professional software development.
Perspectives
I believe the next leap in AI coding automation is blocked by one thing: trust. We can't automate what we can't trust. This curiosity drove my research to move beyond theory and understand what trust means in practice. This paper is my effort to pave the way for AI coding partners that are genuinely reliable, safe, and ready for the future of software engineering.
Dipin Khati
College of William and Mary
Read the Original
This page is a summary of: Mapping the Trust Terrain: LLMs in Software Engineering - Insights and Perspectives, ACM Transactions on Software Engineering and Methodology, October 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3771282.