What is it about?

This work is a review of research (published before November 2018) on the academic integrity violations most relevant to computer science courses in higher education, such as plagiarism and illegitimate collaboration (collusion) in programming assignments and programming exams. To organize and critically analyze the surveyed literature, the review uses the "Fraud Triangle", a well-studied framework from the field of fraud deterrence. The basic premise of this framework is that fraudulent behavior is shaped by three elements forming the three sides of the triangle: a "pressure" to commit fraud, an "opportunity" to commit fraud, and an internal "rationalization" process that morally justifies the fraudulent behavior. While the framework may sound simple and intuitive, it provides invaluable insights for understanding the landscape of research on plagiarism and other academic integrity violations in programming assessments, and it sheds light on aspects of the problem that are rarely given attention in the literature.


Why is it important?

- The review focuses on aspects of these papers that are specific to programming assessments, rather than aspects relevant to academic integrity in general.
- The review is systematic, so it draws a clear picture of the research landscape on the topic up to late 2018 and allows readers to evaluate its coverage and results.
- The review uses a framework from the field of fraud deterrence, a field not traditionally drawn on by researchers of academic integrity in computer science, which brings new perspectives.
- The Fraud Triangle provides a clear and intuitive framework that practitioners can use to assess, and then address, the factors that contribute to academic dishonesty in their computing courses.
- The review found that most research focuses on reducing the "opportunity" to cheat, with far less attention to understanding the kinds of pressure created in computing courses by the nature of learning to program and by the teaching and assessment methods typically used in these courses. The review calls for more work in this area.
- The largest share of the reviewed papers discussed ways to reduce the opportunity to plagiarize, as well as tools for detecting plagiarism. However, there is a clear lack of empirical work evaluating the deterrent efficacy of these strategies and tools. The reviewed papers also mentioned a wide range of rationalizations that computing students use to justify plagiarism, the most important of which stem from confusion about what constitutes plagiarism. Finally, work on the relationship between pressure in computing courses and plagiarism was found to be very scarce and incommensurate with the significant contribution of this factor to plagiarism.

Read the Original

This page is a summary of: Plagiarism in Programming Assessments, ACM Transactions on Computing Education, February 2020, ACM (Association for Computing Machinery), DOI: 10.1145/3371156.
You can read the full text via the DOI above.
