What is it about?

This study looks at how university students use information from ChatGPT when writing academic texts, and whether they can tell when that information is right or wrong. ChatGPT is increasingly used in education and often sounds confident and authoritative. However, it can also produce misleading or incorrect information, which means students need to think carefully before trusting it.

To understand how students handle this, we asked them to read two short articles about vitamin C and the common cold. One was written by a medical expert and contained only correct information. The other was written by ChatGPT and included a mix of accurate and inaccurate points. Students then wrote a short newspaper-style article, either on their own or with the help of ChatGPT. We examined how much correct and incorrect information appeared in their writing. We also measured how well students judged their own thinking. This ability is known as metacognitive accuracy and reflects how confidently and accurately someone can assess their own performance. In addition, we explored students' epistemic beliefs, which include how much they trust different sources of knowledge, how they evaluate information, and whether they feel it needs to be checked.

The results show that students who used ChatGPT while writing included more correct information overall, both from the expert text and from ChatGPT's own wording. However, using ChatGPT did not lead them to include more incorrect information. Instead, mistakes were more strongly linked to students' own thinking habits. Students who were more overconfident were more likely to copy incorrect ideas from ChatGPT, and students who trusted the ChatGPT text too much, or felt less certain about the expert text, were also more likely to include errors. This suggests that while ChatGPT can support writing, good judgment, critical thinking, and healthy scepticism remain essential. The study helps us understand how students work with human-written and AI-generated information and highlights why learning to evaluate sources is still important in education.

Why is it important?

This work is timely because generative AI tools are rapidly becoming part of everyday learning, yet we know surprisingly little about how students combine AI-generated content with trusted human sources. Many existing studies focus on how well ChatGPT performs, but very few examine how people actually use its information in real writing tasks. Our study is one of the first experimental investigations to compare expert-written and AI-generated texts and to analyse which personal factors lead students to accept correct or incorrect information. These insights matter because education systems are still figuring out how to integrate AI safely and effectively. The findings caution against assuming that AI support automatically leads to better learning. Instead, the results show that students' metacognitive skills and epistemic beliefs play a central role. This evidence can guide educators, universities, and policymakers in designing teaching strategies, assessments, and AI literacy programmes. Our work contributes to a growing understanding of when AI can help and when students need stronger critical evaluation skills to avoid incorporating misleading information.

Read the Original

This page is a summary of: "ChatGPT can make mistakes. Check important info." Epistemic beliefs and metacognitive accuracy in students' integration of ChatGPT content into acad..., British Journal of Educational Technology, April 2025, Wiley.
DOI: 10.1111/bjet.13591
You can read the full text via the DOI above.
