What is it about?
Are you interested in understanding the effectiveness of performance testing in software systems? Last week, I had the privilege of presenting our latest research at the 28th International Conference on Evaluation and Assessment in Software Engineering (EASE 2024). Our paper, "An Empirical Study on Code Coverage of Performance Testing," explores often-overlooked aspects of performance testing, namely code coverage and execution time. We analyzed 28 open-source systems to shed light on these crucial factors.
Why is it important?
This research opens up new avenues for improving performance testing practices and addressing challenges in test generation. I'm excited to continue exploring this field and contributing to more efficient software development processes.
Key findings:
- Performance tests achieve significantly lower code coverage than functional tests.
- There is a notable trade-off between coverage and execution time.
- Automated test generation methods may face challenges in ensuring affordable performance testing.
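To make the coverage-versus-execution-time trade-off concrete, here is a minimal, hypothetical sketch of what a Java performance test can look like. The paper does not prescribe this harness; JMH, the benchmark class, and every parameter below are assumptions chosen purely for illustration. A benchmark of this shape re-runs one hot path across warmup and measurement iterations, so its execution time grows far beyond a single functional-test invocation while its code coverage stays narrow.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

// Hypothetical JMH microbenchmark (not taken from the paper): it measures
// the average time of a single library call. The repeated warmup and
// measurement iterations are what make performance tests slow, while only
// one code path is exercised, so coverage stays low.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5, time = 1)        // warm up the JVM before measuring
@Measurement(iterations = 10, time = 1)  // measured iterations, 1 second each
@Fork(1)                                 // run in a separate JVM for isolation
public class StringSplitBenchmark {

    private String input;

    @Setup
    public void prepare() {
        // Fixed workload: the same input on every invocation.
        input = "key=value;".repeat(1_000);
    }

    @Benchmark
    public int splitAverageTime() {
        // One hot path, executed many times by the harness.
        return input.split(";").length;
    }

    public static void main(String[] args) throws Exception {
        Options opts = new OptionsBuilder()
                .include(StringSplitBenchmark.class.getSimpleName())
                .build();
        new Runner(opts).run();
    }
}
```

By contrast, a functional test would call the same method once and assert on the result, finishing almost instantly; covering many such paths with benchmarks of this shape is what makes broad performance-test coverage expensive.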
Perspectives
Working on this study showed us how complex performance testing can be. We learned that achieving higher code coverage often means tests take longer to run, which is a real problem in software performance testing. Improving performance testing is therefore not just about covering more code; it is about striking a good balance between coverage and execution time. This has motivated us to explore new ways of building performance tests that are both thorough and fast, which we believe could change how software teams approach performance testing in the future.
Muhammad Imran
University of L'Aquila, Italy
Read the Original
This page is a summary of: An Empirical Study on Code Coverage of Performance Testing, June 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3661167.3661196.