What is it about?

Automated program repair is gaining attention in the software engineering community because it promises to reduce the cost of fixing bugs. However, many developers are reluctant to accept machine-generated patches into their codebases. We designed and conducted an eye-tracking study investigating how developers' trust varies with code provenance (i.e., the author or source of a patch), systematically varying provenance while controlling for patch quality. In our study of ten participants, both overall visual scanning of the code and the distribution of attention differed across identical patches labeled as human- versus machine-written: participants looked more at the source code for human-labeled patches and more at the tests for machine-labeled patches. Participants also judged human-labeled patches to have better readability and coding style, yet they were more comfortable entrusting a critical task to an automated program repair tool.

Why is it important?

This is the first work in software engineering to investigate the perception of trust and its implications for developers' code-review behavior. We find significant differences in code review behavior driven by trust as a function of patch provenance. Our results may inform the subsequent design and analysis of automated repair techniques to increase developers' trust and, consequently, the deployment of those techniques.

Read the Original

This page is a summary of: Trustworthiness Perceptions in Code Review, October 2020, ACM (Association for Computing Machinery),
DOI: 10.1145/3382494.3422164.
