What is it about?

The hypothesis tests many researchers use have fixed cutoffs for rejecting a given hypothesis in a "statistically significant" way. Our tests, based on a more general form of the theorem underlying the most commonly used hypothesis tests, produce cutoffs that depend on sample size, and are therefore harder to "fool" with large samples than the more widely used tests.
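To see why a fixed cutoff can be "fooled" by large samples, consider a tiny, practically negligible deviation from the null hypothesis: with enough data it will cross a fixed cutoff like 0.05, while a cutoff that shrinks with the sample size can still refuse to reject. The sketch below uses a simple two-sided z-test and an illustrative shrinking rule (alpha divided by the square root of n); this rule is an assumption for illustration only, not the optimal sample-size-dependent levels derived in the paper.

```python
import math

def z_pvalue(mean, sd, n, mu0=0.0):
    """Two-sided z-test p-value for H0: population mean == mu0,
    computed from summary statistics (normal approximation)."""
    z = (mean - mu0) / (sd / math.sqrt(n))
    # erfc(|z| / sqrt(2)) equals 2 * (1 - Phi(|z|))
    return math.erfc(abs(z) / math.sqrt(2))

def shrinking_alpha(n, alpha0=0.05):
    """Illustrative sample-size-dependent cutoff (NOT the paper's formula):
    shrink the baseline level alpha0 as the sample size grows."""
    return alpha0 / math.sqrt(n)

# A "practically null" effect: observed mean 0.0025 (sd 1.0) from a
# million observations -- minuscule, yet z = 2.5 and p is about 0.012.
n = 1_000_000
p = z_pvalue(mean=0.0025, sd=1.0, n=n)

print(f"p-value: {p:.4f}")
print(f"fixed cutoff 0.05 rejects:      {p < 0.05}")
print(f"shrinking cutoff {shrinking_alpha(n):.0e} rejects: {p < shrinking_alpha(n)}")
```

The fixed-cutoff test declares this negligible effect "significant," while the sample-size-dependent cutoff does not, which is the qualitative behavior the summary describes.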

Why is it important?

Our tests are much harder to "fool" with large samples than both Neyman-Pearson tests and Jeffreys's Bayes-factor tests. Unlike other hypothesis tests in wide use, they can be used with "big data." Even so, the way of thinking about them is very similar to the way many researchers have learned to think about their research: in terms of rejecting "null hypotheses."

Read the Original

This page is a summary of: Blending Bayesian and Classical Tools to Define Optimal Sample-Size-Dependent Significance Levels, The American Statistician, March 2019, Taylor & Francis.
DOI: 10.1080/00031305.2018.1518268.
