What is it about?
Learning to program has always been challenging. Students consistently struggle with debugging, program design, understanding abstract concepts, and navigating version control. With the sudden rise of tools like ChatGPT, programming education entered a new and uncertain phase: would AI support students’ learning, or introduce new difficulties? This study offers a rare, now-unrepeatable comparison between two cohorts of programming students: one from just before ChatGPT’s public release and another from its first year of widespread use. Using a consistent self-reported survey instrument, the work examines students’ perceived programming learning difficulties, the role of different learning materials (including ChatGPT in the second group), and how perceptions of trust, fairness, and effectiveness relate to learning challenges. The findings show that while foundational challenges in programming persist, students in the ChatGPT era report reduced difficulty in several complex topics, such as recursion, arrays and containers, and functional programming. At the same time, traditional static materials like lecture notes and university-produced videos were perceived as more difficult to learn from, suggesting a shift toward interactive, adaptive tools. Students who trusted and frequently used ChatGPT also reported lower learning difficulties, but concerns about plagiarism and fairness remain significant.
Why is it important?
As AI becomes embedded in everyday learning, educators, institutions, and curriculum designers face urgent questions: How should programming curricula evolve? What role should AI play? What challenges persist, and for whom? This study provides actionable insights at a pivotal moment in computing education. It offers evidence that AI tools like ChatGPT can support understanding of complex concepts, but only when integrated thoughtfully alongside foundational instruction and robust ethical guidance. It highlights the need for personalised learning pathways, the development of metacognitive skills, and differentiated support for novice and advanced learners. The work also shows that students’ perceptions of trust, fairness, and ethical concerns meaningfully influence how they engage with AI tools. These findings can inform institutional AI policies, educator training, and the design of future educational technologies. Because the study captures a unique moment right before and after ChatGPT’s introduction, it offers a baseline for understanding how generative AI is reshaping programming education.
Perspectives
What excites me most about this study is that it captures a turning point in programming education, a moment when generative AI first entered students’ learning environments. Even as AI tools offer new forms of personalised and interactive support, the core challenges of learning to program remain deeply tied to conceptual understanding and independent problem-solving. The findings show that AI can enhance learning, but only when paired with strong pedagogy and clear ethical guidance. I hope this work supports educators, curriculum designers, and policymakers as they navigate the opportunities and complexities of integrating AI into teaching. Ultimately, generative AI should empower learners without replacing the foundational skills that define true programming competence.
Mireilla Bikanga Ada
University of Glasgow
Read the Original
This page is a summary of: Programming Challenges and Perceptions: A Study of Separate Groups Before and After the Release of ChatGPT, Digital Threats: Research and Practice, November 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3777904.