7.2 Type I and Type II Errors, Power of a Test

Written by the Fiveable Content Team • Last updated September 2025

Hypothesis testing isn't perfect. We can make two types of mistakes: rejecting a true null hypothesis (Type I error) or failing to reject a false one (Type II error). These errors are like two sides of a seesaw: as one goes down, the other tends to go up.

The probability of a Type I error is our chosen significance level, usually 0.05. Type II error probability is trickier to calculate and depends on factors like sample size and effect size. We aim for high test power (correctly rejecting a false null) while keeping Type I errors in check.

Understanding Errors in Hypothesis Testing

Type I and II errors

  • Type I Error (False Positive): rejecting the null hypothesis even though it is true, with probability $\alpha$ (the significance level)
  • Type II Error (False Negative): failing to reject the null hypothesis even though it is false, with probability $\beta$
  • The two error probabilities trade off against each other: for a fixed sample size, as one decreases the other tends to increase (see the simulation sketch below)
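
To make the two error rates concrete, here is a minimal simulation sketch (not from the original guide). It assumes a one-sided one-sample z-test with known $\sigma = 1$, $H_0: \mu = 0$ versus $H_1: \mu = 0.5$, $n = 25$, and $\alpha = 0.05$; all of these numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, alpha, mu_h1 = 25, 0.05, 0.5
z_crit = norm.ppf(1 - alpha)               # one-sided critical value for the z-test

def rejection_rate(mu, n_sims=100_000):
    """Fraction of simulated samples whose z-statistic falls in the rejection region."""
    samples = rng.normal(loc=mu, scale=1.0, size=(n_sims, n))
    z = samples.mean(axis=1) * np.sqrt(n)  # z = (x-bar - 0) / (1 / sqrt(n))
    return np.mean(z > z_crit)

type_i = rejection_rate(mu=0.0)            # H0 true: any rejection is a Type I error
type_ii = 1 - rejection_rate(mu=mu_h1)     # H1 true: any non-rejection is a Type II error
print(f"Type I rate  ~ {type_i:.3f} (should be near alpha = {alpha})")
print(f"Type II rate ~ {type_ii:.3f} (this is beta)")
```

Under these assumed settings the simulated Type I rate lands near 0.05 and the Type II rate near 0.20, matching the analytic values.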

Probability of error types

  • Type I Error probability equals the chosen significance level $\alpha$ (typically 0.05 or 0.01)
  • Type II Error probability, $\beta$, equals $1 - \text{Power}$ and depends on the effect size, sample size, and significance level
  • Calculation steps: determine the critical value from $\alpha$, find the rejection region, then compute the Type II error probability from the sampling distribution under the alternative hypothesis (a worked sketch follows this list)
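
A possible worked version of these steps, assuming a one-sided one-sample z-test with known $\sigma$ and made-up example numbers ($\mu_0 = 100$, $\mu_1 = 104$, $\sigma = 15$, $n = 50$, $\alpha = 0.05$):

```python
import numpy as np
from scipy.stats import norm

mu0, mu1, sigma, n, alpha = 100.0, 104.0, 15.0, 50, 0.05  # assumed example values
se = sigma / np.sqrt(n)                    # standard error of the sample mean

# Step 1: critical value from alpha (upper-tailed rejection region: x-bar > x_crit)
x_crit = mu0 + norm.ppf(1 - alpha) * se

# Step 2: Type II error = P(fail to reject H0 | true mean is mu1)
beta = norm.cdf(x_crit, loc=mu1, scale=se)
power = 1 - beta
print(f"critical x-bar = {x_crit:.2f}, beta = {beta:.3f}, power = {power:.3f}")
```

Increasing $n$ or widening the gap between $\mu_0$ and $\mu_1$ shrinks $\beta$, which previews the power factors discussed below.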

Errors vs test power

  • Power is the probability of correctly rejecting a false null hypothesis, $1 - \beta$
  • Increasing $\alpha$ boosts power, but at the cost of a higher Type I error rate
  • Power and the Type II error rate are complements: as power rises, $\beta$ falls
  • Aim for high power while keeping the Type I error rate acceptable (the sketch below puts numbers on the trade-off)
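
One way to see the $\alpha$-power trade-off is to compute the power of the same assumed one-sided z-test (standardized effect $d = 0.5$, $n = 25$) at several significance levels; the setup and numbers are illustrative, not from the guide.

```python
import numpy as np
from scipy.stats import norm

d, n = 0.5, 25                             # assumed standardized effect size and sample size
for alpha in (0.01, 0.05, 0.10):
    z_crit = norm.ppf(1 - alpha)           # stricter alpha pushes the critical value up
    power = 1 - norm.cdf(z_crit - d * np.sqrt(n))
    print(f"alpha = {alpha:.2f} -> power = {power:.3f}")
```

Moving $\alpha$ from 0.01 to 0.10 raises power from roughly 0.57 to 0.89, but each step also makes a false positive more likely.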

Factors affecting test power

  • Effect size: larger effect sizes boost power (small vs large treatment effect)
  • Sample size: bigger samples increase power (30 vs 300 participants)
  • Significance level: higher $\alpha$ increases power but raises Type I error risk (0.05 vs 0.01)
  • Data variability: lower variability enhances power (homogeneous vs heterogeneous groups)
  • Boost power by:
    1. Enlarging sample size
    2. Using one-tailed tests when appropriate
    3. Improving measurement precision
    4. Increasing significance level cautiously
    5. Employing matched pairs or repeated measures designs
    6. Conducting an a priori power analysis to determine the optimal sample size (see the sketch below)
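
As a sketch of item 6, statsmodels' power module can solve for the sample size needed to reach a target power; the test (an independent two-sample t-test) and the effect size, $\alpha$, and power target below are assumptions chosen purely for illustration.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size of a two-sample t-test that reaches
# 80% power for an assumed medium effect (Cohen's d = 0.5) at alpha = 0.05.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,
                                   alpha=0.05,
                                   power=0.80,
                                   alternative='two-sided')
print(f"required sample size per group ~ {n_per_group:.0f}")  # about 64 per group
```

Rerunning with a smaller assumed effect (say $d = 0.2$) pushes the requirement to roughly 394 per group, which is why pinning down a realistic effect size matters before collecting data.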