🎲 Data, Inference, and Decisions
Unit 3 Review

3.2 Sample size determination and power analysis

Written by the Fiveable Content Team • Last updated September 2025

Sample size determination and power analysis are crucial for designing effective studies. They help researchers figure out how many participants they need to detect meaningful effects and avoid wasting resources on underpowered studies.

These techniques consider factors like desired significance level, expected effect size, and statistical power. By carefully calculating sample sizes and analyzing power, researchers can ensure their studies are robust and capable of answering important questions reliably.

Sample Size Calculations

Fundamentals of Sample Size Determination

  • Calculate sample size requirements to ensure adequate statistical power and precision in study design
  • Factor in desired level of significance (α), power (1-β), expected effect size, and population variability when determining sample size
  • Adjust calculations based on specific statistical tests (t-tests, ANOVA, regression, chi-square)
    • T-tests consider one-tailed versus two-tailed alternatives, paired versus independent samples, and the hypothesized difference between groups
    • ANOVA accounts for number of groups, expected group differences, and within-group variability
    • Regression analyses factor in number of predictors, expected R-squared value, and desired coefficient estimate precision
    • Chi-square tests base calculations on category numbers and expected cell proportions
  • Utilize specialized software and online calculators for complex sample size determinations in various study designs (G*Power, R packages such as pwr); see the sketch after this list
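
As one concrete illustration, the sketch below uses the pwr package in R (one of the tools named above) to solve for the per-group sample size of a two-sided independent-samples t-test; the inputs (d = 0.5, α = 0.05, power = 0.80) mirror the first example in the next list.

    # A priori sample size for a two-sample t-test using R's pwr package
    library(pwr)

    result <- pwr.t.test(d = 0.5,              # expected effect size (Cohen's d)
                         sig.level = 0.05,     # significance level (alpha)
                         power = 0.80,         # desired power (1 - beta)
                         type = "two.sample",
                         alternative = "two.sided")

    ceiling(result$n)   # required participants per group (about 64)

Leaving n unspecified tells pwr.t.test to solve for it; supplying n instead and omitting power would solve for power.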

Sample Size Calculation Examples

  • T-test sample size calculation:
    • Two-tailed independent samples t-test
    • Desired power: 0.80
    • Significance level (α): 0.05
    • Expected effect size (Cohen's d): 0.5
    • Calculated sample size: 64 participants per group
  • ANOVA sample size calculation:
    • One-way ANOVA with 4 groups
    • Desired power: 0.85
    • Significance level (α): 0.05
    • Expected effect size (f): 0.25
    • Calculated total sample size: 232 participants
  • Regression sample size calculation:
    • Multiple regression with 5 predictors
    • Desired power: 0.90
    • Significance level (α): 0.01
    • Expected R-squared: 0.20
    • Calculated sample size: 149 participants
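
The ANOVA and regression calculations above can be set up with the same package; a sketch follows. Note that pwr.anova.test works with the per-group sample size and pwr.f2.test with Cohen's f² (converted here from the expected R²), so the totals they return depend on these conventions and may not match the rounded figures quoted above exactly.

    library(pwr)

    # One-way ANOVA: 4 groups, f = 0.25, alpha = 0.05, power = 0.85
    anova_res <- pwr.anova.test(k = 4, f = 0.25, sig.level = 0.05, power = 0.85)
    4 * ceiling(anova_res$n)            # total sample size across the 4 groups

    # Multiple regression: 5 predictors, alpha = 0.01, power = 0.90
    f2 <- 0.20 / (1 - 0.20)             # convert expected R-squared to Cohen's f^2
    reg_res <- pwr.f2.test(u = 5, f2 = f2, sig.level = 0.01, power = 0.90)
    ceiling(reg_res$v) + 5 + 1          # total n = denominator df + predictors + 1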

Power Analysis for Studies

Fundamentals of Power Analysis

  • Quantify probability of correctly rejecting false null hypothesis, aiming for power of 0.80 or higher
  • Perform a priori power analysis before data collection to determine required sample size for desired statistical power
  • Conduct post hoc power analysis after data collection to assess achieved power and interpret non-significant results
  • Consider effect size, significance level (α), sample size, and desired power (1-β) in power analysis calculations
  • Utilize common effect size measures in power analysis (Cohen's d, eta-squared, odds ratios) for different statistical tests
  • Employ widely used tools for power analyses (G*Power, R packages like 'pwr') across various statistical tests; a short pwr sketch follows this list
  • Perform sensitivity analysis in power calculations to explore impact of input parameter changes on required sample size or achievable power
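
A minimal sketch of the two directions of power analysis listed above, again with pwr; the sample size of 80 per group in the post hoc call and the alternative effect sizes in the sensitivity check are hypothetical values chosen for illustration.

    library(pwr)

    # A priori: solve for n given effect size, alpha, and target power
    pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80)$n

    # Post hoc: solve for achieved power given the sample size actually collected
    pwr.t.test(n = 80, d = 0.5, sig.level = 0.05)$power

    # Sensitivity check: how required n shifts if the true effect is smaller
    sapply(c(0.3, 0.4, 0.5), function(d)
      ceiling(pwr.t.test(d = d, sig.level = 0.05, power = 0.80)$n))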

Power Analysis Examples

  • T-test power analysis:
    • Two-tailed independent samples t-test
    • Sample size: 100 participants per group
    • Significance level (α): 0.05
    • Effect size (Cohen's d): 0.4
    • Calculated power: 0.87
  • ANOVA power analysis:
    • One-way ANOVA with 3 groups
    • Total sample size: 180 participants
    • Significance level (α): 0.05
    • Effect size (f): 0.25
    • Calculated power: 0.92
  • Regression power analysis:
    • Multiple regression with 4 predictors
    • Sample size: 120 participants
    • Significance level (α): 0.01
    • Effect size (f²): 0.15
    • Calculated power: 0.83
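
The examples above can be set up as follows, this time fixing the sample size and solving for power. The exact power values reported depend on the software, assumptions, and rounding used, so they may differ from the figures quoted.

    library(pwr)

    # T-test: 100 per group, d = 0.4, alpha = 0.05
    pwr.t.test(n = 100, d = 0.4, sig.level = 0.05)$power

    # One-way ANOVA: 3 groups of 60 (180 total), f = 0.25, alpha = 0.05
    pwr.anova.test(k = 3, n = 60, f = 0.25, sig.level = 0.05)$power

    # Multiple regression: 4 predictors, n = 120, f2 = 0.15, alpha = 0.01
    # Denominator degrees of freedom v = n - u - 1
    pwr.f2.test(u = 4, v = 120 - 4 - 1, f2 = 0.15, sig.level = 0.01)$power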

Sample Size, Effect Size, and Power

Relationships and Tradeoffs

  • Increase statistical power with larger sample sizes, larger effect sizes, and a higher significance level (α), though raising α also raises the Type I error rate
  • Observe inverse relationship between effect size and sample size when maintaining constant power
    • Smaller effect sizes require larger sample sizes for detection
    • Larger effect sizes allow for smaller sample sizes while maintaining power
  • Illustrate power changes as a function of sample size for a given effect size and significance level using a power curve (see the sketch after this list)
  • Decrease minimum detectable effect size as sample size increases, enabling detection of smaller effects
  • Recognize diminishing returns in power beyond a certain sample size, as the power curve flattens on its approach to 1
  • Decrease Type II error rate (β) as power increases, reflecting improved ability to detect true effects
  • Consider practical significance alongside statistical significance when interpreting effect size and sample size relationship
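
As referenced in the list above, a power curve makes these tradeoffs visible. The sketch below computes power over a range of per-group sample sizes for a fixed medium effect (d = 0.5) at α = 0.05 and plots it with base R.

    library(pwr)

    # Power as a function of per-group sample size (two-sample t-test, d = 0.5)
    n_per_group <- seq(10, 200, by = 10)
    power_vals <- sapply(n_per_group, function(n)
      pwr.t.test(n = n, d = 0.5, sig.level = 0.05)$power)

    plot(n_per_group, power_vals, type = "b",
         xlab = "Sample size per group", ylab = "Power",
         main = "Power curve: d = 0.5, alpha = 0.05")
    abline(h = 0.80, lty = 2)   # conventional 0.80 power target

The curve rises steeply at first and then flattens as it approaches 1, which is the point-of-diminishing-returns behavior noted above.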

Examples of Sample Size, Effect Size, and Power Interactions

  • T-test example:
    • Fixed power: 0.80
    • Significance level (α): 0.05
    • Effect size (Cohen's d) variations:
      • Large effect (d = 0.8) requires 26 participants per group
      • Medium effect (d = 0.5) requires 64 participants per group
      • Small effect (d = 0.2) requires 394 participants per group
  • ANOVA example:
    • Fixed sample size: 150 total participants
    • Significance level (α): 0.05
    • Effect size (f) variations:
      • Large effect (f = 0.40) yields power of 0.99
      • Medium effect (f = 0.25) yields power of 0.81
      • Small effect (f = 0.10) yields power of 0.24
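
The t-test figures above can be checked directly with a short loop over the three conventional effect sizes; the ANOVA panel would be computed the same way with pwr.anova.test once a number of groups is assumed, since the example does not state it.

    library(pwr)

    # Per-group n needed at power 0.80, alpha 0.05, for small/medium/large effects
    effect_sizes <- c(small = 0.2, medium = 0.5, large = 0.8)
    sapply(effect_sizes, function(d)
      ceiling(pwr.t.test(d = d, sig.level = 0.05, power = 0.80)$n))
    # roughly: small 394, medium 64, large 26 participants per group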

Adjusting Sample Size for Attrition

Accounting for Participant Loss

  • Define attrition as the loss of participants during a study and non-response as sampled individuals failing to respond or providing incomplete data
  • Calculate the adjusted sample size by dividing the initial sample size by (1 - expected attrition rate) and rounding up (see the sketch after this list)
  • Estimate expected attrition or non-response rates using historical data, pilot studies, or similar research
  • Recognize varying attrition patterns in different study types (longitudinal, clinical trials, surveys) requiring specific adjustment strategies
  • Implement over-sampling technique to compensate for expected losses and ensure adequate statistical power at study conclusion
  • Apply stratified sampling to adjust sample sizes within specific subgroups with different attrition or non-response rates
  • Conduct sensitivity analyses to assess impact of different attrition scenarios on study power and results
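
The adjustment formula above reduces to one line of code. The helper below (adjust_for_attrition is just an illustrative name, not a standard function) rounds up so that the expected number of completers still meets the original requirement.

    # Inflate a required sample size to allow for expected attrition or non-response
    adjust_for_attrition <- function(n_required, attrition_rate) {
      ceiling(n_required / (1 - attrition_rate))
    }

    adjust_for_attrition(200, 0.20)    # 200 needed, 20% attrition -> recruit 250
    adjust_for_attrition(1000, 0.30)   # 1000 needed, 30% non-response -> 1429

    # Sensitivity check across plausible attrition rates
    sapply(c(0.10, 0.20, 0.30), function(rate) adjust_for_attrition(200, rate))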

Attrition Adjustment Examples

  • Longitudinal study example:
    • Initial required sample size: 200 participants
    • Expected attrition rate: 20%
    • Adjusted sample size: 200 / (1 - 0.20) = 250 participants
  • Clinical trial example:
    • Initial required sample size: 300 participants
    • Expected attrition rates:
      • Control group: 15%
      • Treatment group: 25%
    • Adjusted sample sizes:
      • Control group: 150 / (1 - 0.15) ≈ 177 participants (rounded up)
      • Treatment group: 150 / (1 - 0.25) = 200 participants
  • Survey research example:
    • Initial required sample size: 1000 respondents
    • Expected non-response rate: 30%
    • Adjusted sample size: 1000 / (1 - 0.30) = 1429 survey invitations
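
When attrition differs by arm, as in the clinical-trial example, the same adjustment is applied separately to each group; a short base-R sketch:

    # Per-arm adjustment when expected attrition differs between groups
    arm_n     <- c(control = 150, treatment = 150)
    attrition <- c(control = 0.15, treatment = 0.25)
    ceiling(arm_n / (1 - attrition))   # roughly 177 control, 200 treatment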