- P-Value
The p-value is the probability of observing results at least as extreme as the actual data, calculated under the assumption that the null hypothesis is true. It is the standard measure of statistical evidence in scientific papers, clinical trials, and A/B tests, and also one of the most frequently misinterpreted quantities in statistics.
Definition
The p-value is the probability of observing results at least as extreme as the actual data, calculated under the assumption that the null hypothesis is true. A small p-value (typically below 0.05) suggests the observed data is unlikely under the null, providing evidence against it.
Intuition
The p-value answers the question: "If nothing special is really happening, how surprising is my data?" A tiny p-value (like 0.01) means your results would be very rare if the null were true, so maybe the null is wrong. A large p-value means your results aren't surprising under the null.
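This intuition can be made concrete with a small simulation. The scenario below is our own illustration: we observed 60 heads in 100 coin flips, and we simulate a fair coin (the null) many times to see how often a result at least that extreme occurs.

```python
import random

random.seed(0)

# Hypothetical observation: 60 heads in 100 flips. Is that surprising
# if the coin is actually fair?
n_flips, observed_heads = 100, 60
n_sims = 100_000

# Simulate the null hypothesis repeatedly and count how often the
# simulated result is at least as extreme as the one we observed.
extreme = sum(
    sum(random.random() < 0.5 for _ in range(n_flips)) >= observed_heads
    for _ in range(n_sims)
)

p_value = extreme / n_sims  # Monte Carlo estimate of the one-sided p-value
print(round(p_value, 3))    # roughly 0.03: rare, but not impossible, under the null
```

The fraction of null-world simulations that look "at least this extreme" is exactly what the p-value estimates.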
Core Idea
The p-value measures how surprising the observed data would be if the null hypothesis were true. A very small p-value suggests the null is implausible given the evidence.
Example
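As a worked example (our own illustration, not from a real study): a coin lands heads 60 times in 100 flips, and we test the null hypothesis that the coin is fair. The one-sided p-value is the exact probability of 60 or more heads under the null, computed from the binomial distribution.

```python
from math import comb

# H0: the coin is fair, P(heads) = 0.5. Observed: 60 heads in 100 flips.
n, k = 100, 60

# Exact one-sided p-value: sum the upper tail of Binomial(n, 0.5),
# i.e. P(X >= 60 | H0), where each outcome has probability C(n, i) / 2^n.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(round(p_value, 4))  # 0.0284 -> below 0.05, so we reject H0 at alpha = 0.05
```

Because 0.0284 < 0.05, 60 heads would be called statistically significant at the conventional 5% level.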
Notation
The p-value is denoted p. The significance level threshold is \alpha. We reject H_0 when p < \alpha.
Why It Matters
P-values are reported in virtually every scientific paper, clinical trial, and A/B test. They are the standard way to quantify evidence in medicine, psychology, economics, and engineering, making them essential for data-driven decision-making.
Hint When Stuck
When interpreting a p-value, first state the null hypothesis clearly. Then compare the p-value to your significance level \alpha (usually 0.05). Finally, if p < \alpha, reject the null and conclude the result is statistically significant; if p \geq \alpha, fail to reject the null.
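The decision rule above can be sketched as a small helper. The function name is our own; real analyses should report effect sizes and confidence intervals alongside this verdict, not the verdict alone.

```python
def interpret_p_value(p: float, alpha: float = 0.05) -> str:
    """Apply the decision rule: reject H0 when p < alpha.

    Illustrative helper only; a p-value by itself is never the
    whole story (see the common mistakes below in spirit: also
    consider effect size and context).
    """
    if p < alpha:
        return "reject H0 (statistically significant)"
    return "fail to reject H0 (not statistically significant)"

print(interpret_p_value(0.01))  # reject H0 (statistically significant)
print(interpret_p_value(0.20))  # fail to reject H0 (not statistically significant)
```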
Formal View
Common Stuck Point
The p-value is NOT the probability that the null hypothesis is true. It is the probability of seeing data at least this extreme IF the null hypothesis were true.
Common Mistakes
- Thinking p-value is the probability the null is true (it's not)
- Treating p = 0.049 as meaningful but p = 0.051 as nothing
- Ignoring effect size and only looking at p-value
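The last mistake is easy to demonstrate numerically. The sketch below (our own illustration, using a one-sample z-test with known standard deviation) shows the same negligible effect producing a huge p-value at a small sample size but a vanishingly small one at a large sample size, which is why effect size matters.

```python
from math import erf, sqrt

def z_test_p(mean_diff: float, sigma: float, n: int) -> float:
    """Two-sided p-value for a one-sample z-test with known sigma.

    Uses the standard normal CDF, Phi(z) = 0.5 * (1 + erf(z / sqrt(2))).
    """
    z = mean_diff / (sigma / sqrt(n))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# The same tiny effect (0.01 standard deviations), two sample sizes:
print(round(z_test_p(0.01, sigma=1.0, n=100), 3))  # ~0.92: not significant
print(z_test_p(0.01, sigma=1.0, n=1_000_000))      # essentially 0: "significant"
                                                   # despite a negligible effect
```

With a million observations, even an effect too small to matter in practice clears the p < 0.05 bar, so significance alone says nothing about importance.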
Frequently Asked Questions
What is a P-Value in Statistics?
The p-value is the probability of observing results at least as extreme as the actual data, calculated under the assumption that the null hypothesis is true. A small p-value (typically below 0.05) suggests the observed data is unlikely under the null, providing evidence against it.
When do you use a P-Value?
When interpreting a p-value, first state the null hypothesis clearly. Then compare the p-value to your significance level \alpha (usually 0.05). Finally, if p < \alpha, reject the null and conclude the result is statistically significant; if p \geq \alpha, fail to reject the null.
What do students usually get wrong about P-Value?
The p-value is NOT the probability that the null hypothesis is true. It is the probability of seeing data at least this extreme IF the null hypothesis were true.
How P-Value Connects to Other Ideas
To understand the p-value, you should first be comfortable with hypothesis testing, basic probability, and sampling distributions. Once you have a solid grasp of the p-value, you can move on to statistical significance.