I used to be really happy with any *p*-value smaller than .05, and very disappointed when *p*-values turned out to be higher than .05. Looking back, I realize I was suffering from a bi-polar *p*-value disorder. Nowadays, I interpret *p*-values more evenly. Instead of a polar division between *p*-values above and below the .05 significance level, I use a gradual interpretation of *p*-values. As a consequence, I'm no longer very convinced something is going on by *p*-values between .02 and .05. Let me explain.
In my previous blog post, I explained how *p*-values can be calibrated to provide best-case posterior probabilities that H0 is true. High *p*-values leave quite something to be desired, with *p* = .05 yielding a best-case scenario with a 71% probability that H1 is true (assuming H0 and H1 are a priori equally likely). Here, I want to move beyond best-case scenarios. Instead of only looking at *p*-values, we are going to look at the likelihood that a *p*-value represents a true effect, given the power of a statistical test.
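For readers who want to check that 71% figure themselves, the calibration from Sellke, Bayarri, & Berger (2001) is easy to compute. The sketch below is my own (the function name `min_posterior_h0` is just an illustrative label, not code from the paper):

```python
import math

def min_posterior_h0(p):
    """Sellke, Bayarri & Berger's lower bound on P(H0 | p), assuming
    P(H0) = P(H1) = 1/2 a priori. Valid for p < 1/e."""
    bound = -math.e * p * math.log(p)  # lower bound on the Bayes factor for H0
    return bound / (1 + bound)

# Best-case probability that H1 is true when p = .05:
print(round(1 - min_posterior_h0(0.05), 2))  # 0.71
```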
This blog post is still based on the paper by Sellke, Bayarri, & Berger (2001). The power of a statistical test that yields a specific *p*-value is determined by the size of the effect, the significance level, and the size of the sample. The more observations, and the larger the effect size, the higher the statistical power. The higher the statistical power, the higher the likelihood of observing a small (e.g., *p* = .01) compared to a high (e.g., *p* = .04) *p*-value, assuming there is a true effect in the population. We can see this in the figure below. The top and bottom halves of the figure display the same information, but the scale showing the percentage of expected *p*-values differs (from 0-100 in the top, from 0-10 in the bottom, where the percentages for *p*-values between .00 and .01 are cut off at .1). As the top pane illustrates, the probability of observing a *p*-value between .00 and .01 is more than twice as large if a test has 80% power, compared to when the test has only 50% power. In an extremely high-powered experiment (e.g., 99% power) the *p*-value will be smaller than .01 in approximately 96% of the tests, and between .01 and .05 in only 3.5% of the tests.
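The percentages in the figure can be approximated without running simulations. The sketch below uses a two-sided z-test (a normal approximation, so the numbers are close to, but not exactly, those for a t-test); `p_value_prob` is my own illustrative helper, not code from the paper:

```python
from scipy.stats import norm

def p_value_prob(power, p_lo, p_hi, alpha=0.05):
    """Probability that a two-sided z-test yields p_lo < p < p_hi,
    for a test with the given power at significance level alpha
    (normal approximation)."""
    # Noncentrality implied by the power (the far tail is negligible)
    delta = norm.isf(alpha / 2) + norm.ppf(power)
    def cdf(p):  # P(p-value < p) under the alternative
        z = norm.isf(p / 2)
        return norm.cdf(delta - z) + norm.cdf(-delta - z)
    return cdf(p_hi) - cdf(p_lo)

# With 99% power, almost all p-values fall below .01:
print(round(p_value_prob(0.99, 0, 0.01), 3))     # 0.956
print(round(p_value_prob(0.99, 0.01, 0.05), 3))  # 0.034
# With 80% power, p < .01 is more than twice as likely as with 50% power:
print(round(p_value_prob(0.80, 0, 0.01), 2),
      round(p_value_prob(0.50, 0, 0.01), 2))     # 0.59 0.27
```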
In general, the higher the statistical power of a test, the less likely it is to observe relatively high *p*-values (e.g., *p* > .02). As can be seen in the lower pane in the figure, in extremely high-powered statistical tests (i.e., 99% power), the probability of observing a *p*-value between .02 and .03 is less than 1%. If there is no real effect in the population, and the power of the statistical test is 0% (i.e., there is no chance to observe a real effect), *p*-values are uniformly distributed. This means that every *p*-value is equally likely to be observed, and thus that 1% of the *p*-values will fall within the .02 to .03 interval. As a consequence, when a test with extremely high statistical power returns a *p* = .024, this outcome is *more* likely when the null hypothesis is true than when the alternative hypothesis is true (the bar for a *p*-value between .02 and .03 is higher when power = 0% than when power = 99%). In other words, such a statistical difference at the *p* < .05 level is surprising, assuming the null hypothesis is true, but should still be interpreted as support for the null hypothesis (we also explain this in Lakens & Evers, 2014).
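This comparison can be checked directly: under H0, *p*-values are uniform, so exactly 1% of them land between .02 and .03, while under H1 with 99% power the same interval receives less than 1%. A sketch under the same two-sided z-test approximation as before (variable names are my own):

```python
from scipy.stats import norm

# Noncentrality of a two-sided z-test with 99% power at alpha = .05
delta = norm.isf(0.025) + norm.ppf(0.99)

def cdf_h1(p):
    """P(p-value < p) when the alternative is true (normal approximation)."""
    z = norm.isf(p / 2)
    return norm.cdf(delta - z) + norm.cdf(-delta - z)

h1 = cdf_h1(0.03) - cdf_h1(0.02)  # P(.02 < p < .03 | H1): under 1%
h0 = 0.03 - 0.02                  # P(.02 < p < .03 | H0): uniform, exactly 1%
print(h1 < h0)  # True: p = .024 is more likely under H0 than under H1
```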
The fact that, with increasing sample size, a result can at the same time be a statistical difference with *p* < .05, while also being stronger support for the null hypothesis than for the alternative hypothesis, is known as Lindley's paradox. This isn't a true paradox – things just get more interesting to people if you call them a paradox. There are simply two different questions being asked. First, the probability of the data, assuming the null hypothesis is true, or Pr(D|H0), is very low. Second, the probability of the alternative hypothesis, given the data, is lower than the probability of the null hypothesis, or Pr(H1|D) < Pr(H0|D). Although it is often interpreted by advocates of Bayesian statistics as a demonstration of the 'illogical status of significance testing' (Rouder, Morey, Verhagen, Province, & Wagenmakers, in press), it is also an illustration of the consequences of using improper priors in Bayesian statistics (Robert, 2013).
An extension of these ideas is now more widely known in psychology as *p*-curve analysis (Simonsohn, Nelson, & Simmons, 2014, see www.p-curve.com). However, you can apply this logic (with care!) when subjectively evaluating single studies as well. In a well-powered study (with power = 80%) the odds of a statistical difference yielding a *p*-value smaller than .01 compared to a statistical difference between .01 and .05 are approximately 3 to 1. In general, the lower the *p*-value, the more the result supports the alternative hypothesis (but don't interpret *p*-values directly as support for H0 or H1, and always consider the prior probability of H0). Nevertheless, 'sensible *p*-values are related to weights of evidence' (Good, 1992), and the lower the *p*-value, the better. A *p*-value for a true effect can be higher than .03, but it's relatively unlikely to happen often across multiple studies, especially when sample sizes are large. In small samples, there is a lot of variation in the data, and a relatively high percentage of higher *p*-values is possible (see the figure for 50% power). Remember that if studies only have 50% power, there should also be 50% non-significant findings.
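The 3-to-1 odds for an 80%-powered study can be reproduced with the same kind of approximation (again a sketch with my own function names; two-sided z-test, normal approximation):

```python
from scipy.stats import norm

def cdf_h1(p, power, alpha=0.05):
    """P(p-value < p) for a two-sided z-test with the given power
    (normal approximation)."""
    delta = norm.isf(alpha / 2) + norm.ppf(power)
    z = norm.isf(p / 2)
    return norm.cdf(delta - z) + norm.cdf(-delta - z)

below_01 = cdf_h1(0.01, power=0.80)
between_01_05 = cdf_h1(0.05, power=0.80) - below_01
print(round(below_01 / between_01_05, 1))  # 2.8: roughly 3-to-1 odds
```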
The statistical reality explained above also means that in high-powered studies (e.g., with a power of .99, for example when you collect 400 participants, divided over 2 conditions in an independent *t*-test, and the effect size is *d* = .43), setting the significance level to .05 is not very logical. After all, *p*-values > .02 are not even more likely under the alternative hypothesis than under the null hypothesis. Unlike my previous blog post, where subjective priors were involved, this blog post is focused on the objective probability of observing *p*-values under the null hypothesis and the alternative hypothesis, as a function of power. It means that we need to stop using a fixed significance level of α = .05 for all our statistical tests, especially now that we are starting to collect larger samples. As Good (1992) remarks: '*The real objection to p-values is not that they usually are utter nonsense, but rather that they can be highly misleading, especially if the value of N is not also taken into account and is large.*'
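The .99 power claim for this design can be verified with the usual normal approximation to the power of an independent *t*-test (a sketch; dedicated power software should give essentially the same value):

```python
from scipy.stats import norm

# Two-sample t-test, n = 200 per group (400 total), d = .43, alpha = .05
n_per_group, d, alpha = 200, 0.43, 0.05
# Noncentrality: d * sqrt(n1 * n2 / (n1 + n2)) = .43 * sqrt(100)
delta = d * (n_per_group / 2) ** 0.5
power = (norm.cdf(delta - norm.isf(alpha / 2))
         + norm.cdf(-delta - norm.isf(alpha / 2)))
print(round(power, 2))  # 0.99
```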

How we can decide which significance level we should use, depending on our sample size, is a topic for a future blog post. By which I mean to say: I haven't completely figured out how it should be done. If you have, I'd appreciate a comment below.