Fiveable

🎣Statistical Inference Unit 5 Review

5.2 Properties of Point Estimators: Unbiasedness and Consistency

Written by the Fiveable Content Team โ€ข Last updated September 2025

Point estimators are crucial tools in statistical inference. They help us estimate population parameters from sample data. Understanding their properties, like unbiasedness and consistency, is key to making accurate inferences about populations.

Unbiased estimators give us expected values equal to true parameters, while consistent estimators converge to true values as sample sizes grow. These properties ensure our estimates are reliable and improve with more data, forming the backbone of statistical analysis.

Properties of Point Estimators

Concept of unbiased estimators

  • Unbiased estimator produces expected value equal to true population parameter $E[\hat{\theta}] = \theta$ ($\hat{\theta}$ estimator, $\theta$ true parameter)
  • Bias measures difference between estimator's expected value and true parameter $Bias(\hat{\theta}) = E[\hat{\theta}] - \theta$
  • Unbiasedness matters because it guarantees the estimator is accurate on average, eliminating systematic error (consistent overestimation or underestimation)
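The bias formula above can be checked by simulation. This is a minimal sketch, assuming an Exponential(1) population with true mean $\theta = 1$; the distribution, sample size, and replication count are illustrative choices, not from the text:

```python
import random

# Monte Carlo sketch of Bias(theta_hat) = E[theta_hat] - theta.
# Illustrative setup: estimate the mean of an Exponential(1)
# population, whose true mean is theta = 1.
random.seed(0)
theta = 1.0
n, reps = 20, 20000

estimates = []
for _ in range(reps):
    sample = [random.expovariate(1.0) for _ in range(n)]
    estimates.append(sum(sample) / n)  # theta_hat = sample mean

expected_value = sum(estimates) / reps  # approximates E[theta_hat]
bias = expected_value - theta           # approximates Bias(theta_hat)
print(round(bias, 3))  # near 0: the sample mean is unbiased
```

Averaging many independent realizations of $\hat{\theta}$ approximates $E[\hat{\theta}]$, so a near-zero result is numerical evidence (not proof) of unbiasedness.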

Proving estimator unbiasedness

  • Prove unbiasedness:
    1. Express estimator using random variables
    2. Calculate estimator's expected value
    3. Demonstrate expected value equals true parameter
  • Utilize linearity of expectation $E[aX + bY] = aE[X] + bE[Y]$ and properties of random variable sums/averages
  • Unbiased estimators include sample mean for population mean, sample variance with n-1 denominator for population variance

Consistency in point estimation

  • Consistent estimator converges in probability to true parameter as sample size increases $\lim_{n \to \infty} P(|\hat{\theta}_n - \theta| < \epsilon) = 1$ for any $\epsilon > 0$
  • Consistency guarantees that accuracy improves as the sample grows, providing an asymptotic guarantee on the estimate
  • Unbiasedness and consistency relationship:
    • Not mutually inclusive
    • Estimators can be unbiased but inconsistent (e.g., using only the first observation $X_1$ to estimate the population mean)
    • Estimators can be consistent but biased (e.g., the maximum likelihood estimator of the variance, which divides by $n$)
    • Some estimators exhibit both properties (sample mean for population mean)
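The unbiased-but-inconsistent case can be illustrated with a deliberately poor estimator that keeps only the first observation. A sketch, with an arbitrary Normal(5, 1) population; sample sizes and replication count are illustrative:

```python
import random

# theta_hat_A = X_1 is unbiased for the mean mu but NOT consistent:
# its sampling spread never shrinks as n grows.
# theta_hat_B = sample mean is unbiased AND consistent.
random.seed(2)
mu, reps = 5.0, 5000

def mc_sd(estimator, n):
    """Monte Carlo standard deviation of an estimator at sample size n."""
    vals = [estimator([random.gauss(mu, 1.0) for _ in range(n)])
            for _ in range(reps)]
    m = sum(vals) / reps
    return (sum((v - m) ** 2 for v in vals) / reps) ** 0.5

sd_first_small = mc_sd(lambda x: x[0], 10)
sd_first_large = mc_sd(lambda x: x[0], 1000)            # still near 1
sd_mean_small = mc_sd(lambda x: sum(x) / len(x), 10)
sd_mean_large = mc_sd(lambda x: sum(x) / len(x), 1000)  # near 1/sqrt(1000)
print(round(sd_first_large, 2), round(sd_mean_large, 3))
```

Both estimators are centered on $\mu$, so both are unbiased; only the sample mean's spread shrinks toward zero, which is what consistency requires.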

Determining estimator consistency

  • Probability limit (plim) is the value to which a sequence of random variables converges in probability: $\operatorname{plim}_{n \to \infty} X_n = X$
  • Prove consistency:
    1. Express estimator in terms of sample size n
    2. Calculate estimator's probability limit
    3. Show probability limit equals true parameter
  • Apply law of large numbers, continuous mapping theorem, Slutsky's theorem for plim calculations
  • Consistent estimators include sample mean for population mean, maximum likelihood estimators (under specific conditions)
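The formal consistency condition $\lim_{n \to \infty} P(|\hat{\theta}_n - \theta| < \epsilon) = 1$ can be watched directly by estimating that probability at growing sample sizes. A sketch, assuming an arbitrary Uniform(0, 2) population (true mean $\theta = 1$) and an arbitrary tolerance $\epsilon = 0.1$:

```python
import random

# Estimate P(|xbar_n - theta| < eps) by simulation and watch it climb
# toward 1 as n grows, as the consistency definition requires.
# Uniform(0, 2) has true mean theta = 1; eps = 0.1 is arbitrary.
random.seed(3)
theta, eps, reps = 1.0, 0.1, 2000

def coverage(n):
    hits = 0
    for _ in range(reps):
        xbar = sum(random.uniform(0.0, 2.0) for _ in range(n)) / n
        hits += abs(xbar - theta) < eps
    return hits / reps

probs = {n: coverage(n) for n in (5, 50, 500)}
print(probs)  # probabilities increase toward 1
```

The climb toward 1 is exactly the law of large numbers at work: the sample mean's distribution concentrates around $\theta$ as $n$ grows, so the $\epsilon$-band captures more and more of its probability mass.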