What is it about?
Since the introduction of multicore computers, programs can run multiple activities simultaneously; this is called concurrency. Concurrency is inherently complex and often leads to bugs that are difficult to find and fix. In response, researchers have developed many different "concurrency models": programming techniques that provide a way to introduce parallelism into a program while constraining it to avoid certain classes of bugs. Different models suit different scenarios. In this paper, we argue that a typical program can benefit from using several concurrency models, with each component using whichever model fits it best. Unfortunately, these models have largely been designed to be used in isolation. We show that, when different models are combined, the guarantees they normally provide no longer hold. We then solve this by developing Chocola: a unified framework of three concurrency models (futures, transactions, and actors) that maintains the guarantees of all three models wherever possible. We explain the semantics of Chocola, provide an implementation, and demonstrate on three benchmark programs that it can improve performance for relatively little effort from the programmer.
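To make two of the three models concrete: a future represents a value being computed in parallel, and an actor is an isolated process that reacts to messages sent to its mailbox. Below is a minimal sketch of both in plain Python (using the standard library's `concurrent.futures` and `queue` modules); it illustrates the concepts only and is not Chocola's API. Python's standard library has no analogue of the third model, transactional memory, so it is omitted here.

```python
# Plain-Python stand-ins for two of the concurrency models the paper combines:
# futures and actors. (Illustrative only; not Chocola itself.)
from concurrent.futures import ThreadPoolExecutor
from queue import Queue
from threading import Thread

# Futures: submit a computation to run in parallel; the caller only blocks
# when it asks for the result.
with ThreadPoolExecutor() as pool:
    fut = pool.submit(sum, range(1_000_000))
    total = fut.result()  # waits until the parallel computation finishes

# Actors: an isolated process with its own state, reachable only via messages.
def counter_actor(mailbox: Queue, replies: Queue) -> None:
    count = 0  # private state; no other thread can touch it directly
    while True:
        msg = mailbox.get()
        if msg == "stop":
            replies.put(count)  # report final state, then terminate
            return
        count += msg

mailbox, replies = Queue(), Queue()
Thread(target=counter_actor, args=(mailbox, replies)).start()
for n in (1, 2, 3):
    mailbox.put(n)       # asynchronous sends
mailbox.put("stop")
actor_total = replies.get()  # 6
```

Each model rules out a class of bugs on its own (futures avoid manual thread management; actors avoid shared mutable state), and the paper's contribution is keeping such guarantees intact when the models are mixed in one program.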
Why is it important?
Since the introduction of multicore processors, concurrency has been an important aspect of programs. Further increases in the number of cores per processor, as well as the shift to distributed architectures, have made it even more important to exploit the hardware's ever-increasing computational power. However, parallel programming is notoriously difficult, leading to bugs that are hard to detect and complex to fix. Hence, it is an important challenge to find ways to introduce parallelism – to exploit the hardware maximally – while maintaining safety guarantees – to make programming these systems easier.
Read the Original
This page is a summary of: Chocola, ACM Transactions on Programming Languages and Systems, January 2021, ACM (Association for Computing Machinery), DOI: 10.1145/3427201.