What is it about?
Statistical testing is used in virtually all areas of human endeavor for rigorous comparisons of alternatives. Often, a company will heavily tune a product over time (whether a soft drink, a car, or a set of movie recommendations), so new alternatives will show at best only very slight improvements. We give a mathematical analysis of how efficiently some statistical tests operate in this setting, when the alternatives are very close together.
Why is it important?
In general, there is no formula that gives the magic number of trials required to tell two alternatives apart with a given level of certainty. Practitioners rely on expensive computer simulations to approximate this number, or simply keep running trials until a target confidence is reached. The current paper provides a formula in a specific setting: the alternatives must be close together, the trials must follow some common patterns, and the noise in the tests must take a particular well-studied form. When those conditions are met, the paper gives an exact number of trials for any target confidence, and gives clear comparisons between the power of different types of tests.
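To give a feel for this kind of trial-count question, here is a minimal sketch of the classical normal-approximation sample-size calculation, which answers a simpler version of the same problem: how many trials are needed to detect a small mean difference between two alternatives at a chosen significance level and power. This is textbook power analysis, not the paper's formula; the function name and parameters are illustrative.

```python
from math import ceil
from statistics import NormalDist

def trials_needed(delta, sigma, alpha=0.05, power=0.9):
    """Classical normal-approximation sample size: number of trials
    needed to detect a mean difference `delta` against Gaussian noise
    with standard deviation `sigma`, at one-sided significance `alpha`
    and target `power`. (Illustrative sketch, not the paper's result.)"""
    z = NormalDist()
    n = ((z.inv_cdf(1 - alpha) + z.inv_cdf(power)) * sigma / delta) ** 2
    return ceil(n)

# The closer the alternatives, the more trials are needed:
# halving the gap roughly quadruples the required number of trials.
print(trials_needed(delta=0.10, sigma=1.0))
print(trials_needed(delta=0.05, sigma=1.0))
```

The quadratic blow-up in `1/delta` is exactly why the regime of very close alternatives, the one the paper analyzes, is the expensive and practically important one.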
Read the Original
This page is a summary of: On the number of trials needed to distinguish similar alternatives, Proceedings of the National Academy of Sciences, July 2022, Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.2202116119.