What is it about?

Software is increasingly parallel to meet the growing demand for performance in modern applications. However, the benefits of this additional parallelism come at the cost of high complexity and non-deterministic concurrency bugs. In this paper, we introduce a technique for automatically testing the exponentially large number of instruction interleavings possible in concurrent software. We model our approach after the biased-random search technique known as "greybox fuzzing," which has been widely successful in finding thousands of bugs in sequential software. In this work, we adapt a commonly used fuzzing algorithm to operate on the schedule of a program's interleavings rather than on its inputs. To reduce the size of the search space, we use equivalence classes to divert the search away from interleavings that are behaviourally equivalent to those already explored. Because the search in greybox fuzzing is adaptive, it quickly gravitates towards the portions of the search space that lead to bugs.
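To make the idea concrete, the following is a minimal, self-contained sketch of a fuzzing loop over thread schedules rather than inputs. It is purely illustrative and not the paper's actual implementation: the toy two-thread program, and the names `execute`, `mutate`, and `fuzz`, are our own assumptions for this example. A schedule is a sequence of thread ids respecting per-thread program order; two schedules are treated as equivalent when they induce the same reads-from relation (each read observes the same write), and only schedules in a new equivalence class are kept in the corpus for further mutation.

```python
import random

# Toy program: each thread is a fixed list of operations (hypothetical example).
# Ops are ("write", var, value) or ("read", var).
THREADS = {
    "T1": [("write", "x", 1), ("read", "y")],
    "T2": [("write", "y", 2), ("read", "x")],
}

def execute(schedule):
    """Run ops in the given thread order and return the reads-from signature:
    for each read, which write (thread, op index) it observed, if any."""
    counters = {t: 0 for t in THREADS}
    last_write = {}   # var -> (thread, op index) of the most recent write
    signature = set()
    for t in schedule:
        idx = counters[t]
        op = THREADS[t][idx]
        counters[t] += 1
        if op[0] == "write":
            last_write[op[1]] = (t, idx)
        else:  # read: record which write it reads from (None = initial value)
            signature.add(((t, idx), op[1], last_write.get(op[1])))
    return frozenset(signature)

def random_schedule(rng):
    """A schedule is a shuffled multiset of thread ids; consuming ids in
    order automatically respects each thread's program order."""
    pool = [t for t, ops in THREADS.items() for _ in ops]
    rng.shuffle(pool)
    return pool

def mutate(schedule, rng):
    """Swap two adjacent scheduling decisions: a small, biased step that
    explores interleavings near an interesting one."""
    s = list(schedule)
    i = rng.randrange(len(s) - 1)
    s[i], s[i + 1] = s[i + 1], s[i]
    return s

def fuzz(iterations=200, seed=0):
    """Greybox-style loop: mutate corpus schedules and keep only those
    that land in a previously unseen reads-from equivalence class."""
    rng = random.Random(seed)
    corpus = [random_schedule(rng)]
    seen = {execute(corpus[0])}
    for _ in range(iterations):
        candidate = mutate(rng.choice(corpus), rng)
        sig = execute(candidate)
        if sig not in seen:   # new equivalence class: worth exploring further
            seen.add(sig)
            corpus.append(candidate)
    return seen
```

In this toy example there are six distinct interleavings but only three reads-from classes, so the loop can skip half of the schedules outright; in real programs the collapse from interleavings to equivalence classes is far more dramatic, which is what makes the partitioning pay off.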


Why is it important?

In this work, we show that our choice of equivalence class, the reads-from relation, is empirically an effective way to partition the space of program interleavings on two leading benchmark suites, including real-world C and C++ concurrent programs. We also find that an adaptive, biased-random search, in the form of a typical fuzzing loop, explores this space more evenly than prior, unbiased randomized methods. As fuzzing has been shown to be a practical method for testing program inputs, with strong industry adoption, we believe our approach can serve as the basis for practical methods that ensure the reliability of real-world concurrent software.

Perspectives

Today, fuzzing is the state of the practice for automatically testing sequential programs. This work is the first to extend a fuzzing approach to the domain of program schedules rather than program inputs. Given the broad appeal of fuzzing among developers, we see this as a first step towards bringing automated testing of concurrent programs to a wider audience. As with sequential fuzzing methods, we hope that this and future work can strike a balance between effectiveness, usability, and scalability for testing concurrent software.

Dylan Wolff
National University of Singapore

Read the Original

This page is a summary of: Greybox Fuzzing for Concurrency Testing, April 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3620665.3640389.
