Power of a Test

Statistics
definition

Also known as: statistical power, 1 - β

Grade 9-12


The probability that a hypothesis test correctly rejects a false null hypothesis. Before conducting a study, researchers perform a power analysis to determine how large a sample they need.

Definition

The probability that a hypothesis test correctly rejects a false null hypothesis. Power = P(\text{reject } H_0 \mid H_0 \text{ is false}) = 1 - \beta, where \beta is the probability of a Type II error.

💡 Intuition

Power is your test's ability to detect a real effect when one exists. A test with high power is like a sensitive metal detector—it won't miss a coin buried in the sand. A test with low power is like searching with your eyes—you'll miss things that are actually there. You want power to be high (typically 0.80 or above).

🎯 Core Idea

Four factors affect power: (1) sample size n—larger is more powerful, (2) significance level \alpha—larger \alpha gives more power but more Type I errors, (3) true effect size—bigger effects are easier to detect, (4) variability—less noise means more power.
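These four factors can be made concrete with a quick calculation. The sketch below (Python, standard library only) computes the power of a two-sided one-sample z-test with known standard deviation; the function name `z_test_power` and the baseline numbers (effect 5, sigma 12, n = 30) are illustrative assumptions, not values from this page.

```python
from math import sqrt
from statistics import NormalDist

def z_test_power(effect: float, sigma: float, n: int, alpha: float = 0.05) -> float:
    """Power of a two-sided one-sample z-test when the true mean
    differs from the null value by `effect` (known sd `sigma`)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)   # rejection cutoff for |Z|
    shift = effect / (sigma / sqrt(n))   # how far the true mean shifts Z
    # P(|Z| > z_crit) when Z is centered at `shift` instead of 0
    return (1 - nd.cdf(z_crit - shift)) + nd.cdf(-z_crit - shift)

base = z_test_power(effect=5, sigma=12, n=30)
print(f"baseline:      {base:.2f}")                              # ≈ 0.63
print(f"larger n:      {z_test_power(5, 12, 100):.2f}")          # ≈ 0.99
print(f"larger alpha:  {z_test_power(5, 12, 30, alpha=0.10):.2f}")  # ≈ 0.74
print(f"bigger effect: {z_test_power(8, 12, 30):.2f}")           # ≈ 0.95
print(f"less noise:    {z_test_power(5, 8, 30):.2f}")            # ≈ 0.93
```

Each of the last four lines changes exactly one factor from the baseline, and each change raises the power, matching the four factors listed above.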

Example

A drug truly lowers blood pressure by 5 mmHg. With n = 30 and \alpha = 0.05, the power might be 0.65. This means there's a 65\% chance the study will detect the effect and a 35\% chance it will miss it. Increasing the sample to n = 100 raises the power to roughly 0.95.
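One way to see what "power" means operationally is to simulate many studies in a world where the drug really works, and count how often the test rejects H_0. The sketch below assumes a known standard deviation of 12 mmHg for the blood-pressure changes; that value is hypothetical, and the exact powers quoted above depend on the true variability.

```python
import random
from math import sqrt
from statistics import NormalDist, mean

def estimate_power(true_effect: float, sigma: float, n: int,
                   alpha: float = 0.05, trials: int = 4000, seed: int = 1) -> float:
    """Monte Carlo estimate of power: simulate many studies in a world
    where H0 (no effect) is false, and count how often the z-test rejects."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(trials):
        # One simulated study: n patients' blood-pressure changes
        sample = [rng.gauss(true_effect, sigma) for _ in range(n)]
        z = mean(sample) / (sigma / sqrt(n))  # test H0: mean change = 0
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

p30 = estimate_power(true_effect=-5, sigma=12, n=30)
p100 = estimate_power(true_effect=-5, sigma=12, n=100)
print(f"n=30:  power ≈ {p30:.2f}")
print(f"n=100: power ≈ {p100:.2f}")
```

The rejection rate is exactly P(reject H_0 | H_0 false), so the simulated fraction converges to the power; the jump from n = 30 to n = 100 mirrors the example.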

Formula

\text{Power} = 1 - \beta = P(\text{reject } H_0 \mid H_0 \text{ is false})

Notation

Power = 1 - \beta. \beta = P(\text{Type II error}).

🌟 Why It Matters

Before conducting a study, researchers perform a power analysis to determine how large a sample they need. A study with low power is a waste of resources—it's unlikely to find the effect even if it's real.
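For a one-sample z-test, that power analysis has a closed form: the standard normal-approximation formula n \approx ((z_{\alpha/2} + z_{\text{power}}) \sigma / \text{effect})^2. The sketch below reuses the hypothetical \sigma = 12 from the drug example above.

```python
from math import ceil
from statistics import NormalDist

def required_n(effect: float, sigma: float, power: float = 0.80,
               alpha: float = 0.05) -> int:
    """Smallest n giving at least `power` for a two-sided one-sample
    z-test, via the standard normal-approximation formula."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = nd.inv_cdf(power)           # quantile for the target power
    return ceil(((z_alpha + z_beta) * sigma / effect) ** 2)

print(required_n(effect=5, sigma=12))             # n for 80% power → 46
print(required_n(effect=5, sigma=12, power=0.95)) # n for 95% power → 75
```

Note the trade-off this makes visible: demanding higher power (or hunting for a smaller effect) drives the required sample size up quadratically.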

Formal View

\text{Power} = 1 - \beta = P(\text{reject } H_0 \mid H_a \text{ true}) where \beta = P(\text{Type II error})

🚧 Common Stuck Point

Students confuse power with the p-value. Power is calculated BEFORE the study (planning stage) and depends on the true effect size. The p-value is calculated AFTER data collection.

⚠️ Common Mistakes

  • Thinking power is the probability that H_0 is false—power is the conditional probability of rejecting H_0 given that H_0 is false; it says nothing about how likely H_0 itself is to be false.
  • Forgetting that power depends on the true parameter value—you need to specify an alternative to compute power.
  • Believing you can increase power without trade-offs—increasing \alpha raises power but also raises the Type I error rate. Only increasing n improves power without a downside.

Frequently Asked Questions

What is Power of a Test in Math?

The probability that a hypothesis test correctly rejects a false null hypothesis. Power = P(\text{reject } H_0 \mid H_0 \text{ is false}) = 1 - \beta, where \beta is the probability of a Type II error.

Why is Power of a Test important?

Before conducting a study, researchers perform a power analysis to determine how large a sample they need. A study with low power is a waste of resources—it's unlikely to find the effect even if it's real.

What do students usually get wrong about Power of a Test?

Students confuse power with the p-value. Power is calculated BEFORE the study (planning stage) and depends on the true effect size. The p-value is calculated AFTER data collection.

What should I learn before Power of a Test?

Before studying Power of a Test, you should understand: Type I and Type II errors, hypothesis testing, and sampling distributions.

How Power of a Test Connects to Other Ideas

To understand power of a test, you should first be comfortable with Type I and Type II errors, hypothesis testing, and sampling distributions.