What is it about?
This study explores how different types of trust in AI shape the way people work and think with AI. In a survey of over 400 interns, we found that trust in AI's capabilities and in its emotional support can boost creativity and problem-solving, whereas focusing too heavily on the details of AI's decision-making may reduce deeper thinking. These findings suggest that balancing different types of trust can improve human-AI collaboration and innovation.
Why is it important?
AI is becoming a key partner in work and decision-making, but its success depends on human trust. Understanding which types of trust help or harm thinking can guide companies, schools, and policymakers to design AI systems and training that truly enhance innovation and collaboration.
Perspectives
As AI becomes more embedded in daily work, building the right kind of trust is crucial. Blind trust can lead to overreliance, while constant questioning can slow progress. This study highlights how balancing functional, emotional, and transparent trust can unlock the full potential of human–AI collaboration.
Weizheng Jiang
Wuhan University of Science and Technology
Read the Original
This page is a summary of: Understanding dimensions of trust in AI through quantitative cognition: Implications for human-AI collaboration, PLOS One, July 2025, PLOS, DOI: 10.1371/journal.pone.0326558.