Asymptotic properties are crucial in econometrics, helping us understand how estimators behave as sample sizes grow. These properties, including consistency, asymptotic normality, and efficiency, provide a foundation for reliable statistical inference and hypothesis testing in large samples.
By studying asymptotic properties, we gain insights into the behavior of estimators like OLS, MLE, and GMM. This knowledge allows us to construct confidence intervals, perform hypothesis tests, and make informed decisions about which estimators to use in different scenarios.
Consistency of estimators
- Consistency is a key property of estimators in econometrics that ensures the estimates converge to the true population parameter as the sample size increases
- Consistent estimators are essential for drawing reliable inferences and making accurate predictions based on sample data
Convergence in probability
- Convergence in probability means that as the sample size grows, the probability of the estimator being close to the true parameter value approaches 1
- Mathematically, an estimator $\hat{\theta}_n$ is consistent for the parameter $\theta$ if for any $\epsilon > 0$, $\lim_{n \to \infty} P(|\hat{\theta}_n - \theta| < \epsilon) = 1$
- Intuitively, this means the estimator's sampling distribution concentrates ever more tightly around the true value as more data is collected
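As a minimal sketch of this definition, the simulation below estimates $P(|\hat{\theta}_n - \theta| < \epsilon)$ for the sample mean of exponential draws; the tolerance, sample sizes, seed, and replication count are all illustrative choices.

```python
# Convergence in probability: the sample mean of Exponential(1) draws
# (true mean = 1) should satisfy P(|theta_hat_n - theta| < eps) -> 1.
import numpy as np

rng = np.random.default_rng(0)
theta, eps, reps = 1.0, 0.1, 5_000

for n in [10, 100, 1_000, 10_000]:
    draws = rng.exponential(scale=theta, size=(reps, n))
    theta_hat = draws.mean(axis=1)                 # estimator in each replication
    prob_close = np.mean(np.abs(theta_hat - theta) < eps)
    print(f"n={n:>6}: P(|theta_hat - theta| < {eps}) = {prob_close:.3f}")
```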
Bias vs consistency
- Bias refers to the difference between the expected value of an estimator and the true parameter value, i.e., $Bias(\hat{\theta}) = E(\hat{\theta}) - \theta$
- An estimator can be biased but still consistent if the bias diminishes as the sample size increases
- In large samples, consistency is generally considered more important than unbiasedness, because it guarantees that the estimator eventually converges to the true value even if it is biased in finite samples
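A classic example of a biased-but-consistent estimator is the Gaussian maximum likelihood estimator of the variance, which divides by $n$ rather than $n - 1$: its bias of $-\sigma^2/n$ vanishes as the sample grows. A small Monte Carlo sketch (sample sizes and seed are arbitrary):

```python
# Variance estimator with divisor n (the Gaussian MLE) is biased,
# E[sigma2_hat] = sigma^2 * (n - 1)/n, but the bias shrinks with n.
import numpy as np

rng = np.random.default_rng(1)
sigma2, reps = 4.0, 20_000

for n in [5, 50, 500]:
    x = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(reps, n))
    sigma2_hat = x.var(axis=1)          # numpy default ddof=0: divisor n, the biased MLE
    print(f"n={n:>4}: mean estimate = {sigma2_hat.mean():.3f}, "
          f"bias = {sigma2_hat.mean() - sigma2:+.3f} (theory: {-sigma2/n:+.3f})")
```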
Conditions for consistency
- For an estimator to be consistent, it typically needs to satisfy certain regularity conditions
- These conditions typically include correct specification of the model, independent and identically distributed (i.i.d.) error terms, and the existence of finite moments
- Violating these conditions can lead to inconsistent estimators and misleading inferences (omitted variable bias, endogeneity)
Asymptotic normality
- Asymptotic normality is a fundamental property of many estimators in econometrics that allows for the construction of confidence intervals and hypothesis tests
- It states that as the sample size increases, the distribution of the estimator converges to a normal distribution
Central Limit Theorem
- The Central Limit Theorem (CLT) is the basis for asymptotic normality
- It states that the sum or average of a large number of independent and identically distributed random variables will be approximately normally distributed, regardless of the underlying distribution
- The CLT is crucial for deriving the asymptotic properties of estimators and test statistics
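The simulation below illustrates the CLT with deliberately skewed exponential draws: the tail probability of the standardized sample mean approaches the standard normal value as $n$ grows (sample sizes and seed are illustrative):

```python
# CLT sketch: averages of skewed Exponential(1) draws, standardized as
# sqrt(n) * (xbar - mu) / sigma, look increasingly normal as n grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu = sigma = 1.0                 # Exponential(1) has mean 1 and std dev 1
reps = 100_000

for n in [2, 10, 100]:
    xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    z = np.sqrt(n) * (xbar - mu) / sigma
    print(f"n={n:>3}: P(Z > 1.96) = {np.mean(z > 1.96):.4f} "
          f"(standard normal: {1 - stats.norm.cdf(1.96):.4f})")
```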
Asymptotic distribution of estimators
- Under certain regularity conditions, many estimators have an asymptotic normal distribution
- The approximating normal distribution is centered at the true parameter value, with a variance equal to the estimator's asymptotic variance divided by the sample size $n$
- Knowing the asymptotic distribution allows for the construction of confidence intervals and hypothesis tests
Wald statistics
- Wald statistics are used to test hypotheses about the parameters based on their asymptotic normal distribution
- For a scalar parameter, the statistic is $W = (\hat{\theta} - \theta_0)^2 / \widehat{\text{Var}}(\hat{\theta})$: the squared difference between the estimate and the hypothesized value, divided by the estimated asymptotic variance
- Wald statistics follow a chi-squared distribution under the null hypothesis, which enables the computation of p-values and critical values
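A minimal sketch of a scalar Wald statistic, here applied to an OLS slope with a simulated data-generating process (the model and all numbers are illustrative choices):

```python
# Wald statistic for H0: beta = beta0, from an OLS slope estimate and its
# estimated variance; under H0 it is asymptotically chi-squared(1).
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, beta0 = 500, 1.0
x = rng.normal(size=n)
y = 0.5 + 1.0 * x + rng.normal(size=n)     # true slope = 1.0, so H0 holds

res = sm.OLS(y, sm.add_constant(x)).fit()
beta_hat, var_hat = res.params[1], res.cov_params()[1, 1]

wald = (beta_hat - beta0) ** 2 / var_hat
p_value = 1 - stats.chi2.cdf(wald, df=1)
print(f"W = {wald:.3f}, p-value = {p_value:.3f}")
```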
Asymptotic efficiency
- Asymptotic efficiency is a desirable property of estimators that refers to their ability to achieve the lowest possible asymptotic variance among all consistent estimators
- Efficient estimators provide the most precise estimates and the most powerful tests
Cramér-Rao lower bound
- The Cramér-Rao lower bound is the minimum variance that an unbiased estimator can achieve
- It serves as a benchmark for evaluating the efficiency of estimators
- The lower bound is derived from the Fisher information matrix, which measures the amount of information the data contains about the parameters
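As a concrete check: for the mean of $N(\mu, \sigma^2)$ with known $\sigma^2$, the per-observation Fisher information is $1/\sigma^2$, so the bound is $\sigma^2/n$, which the sample mean attains exactly. A quick Monte Carlo verification (all numbers are illustrative):

```python
# Cramér-Rao check for the mean of N(mu, sigma^2) with known sigma^2:
# the bound is sigma^2 / n, and the sample mean attains it.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma2, n, reps = 2.0, 9.0, 50, 100_000

xbar = rng.normal(mu, np.sqrt(sigma2), size=(reps, n)).mean(axis=1)
print(f"Monte Carlo Var(xbar) = {xbar.var():.4f}")
print(f"Cramér-Rao bound      = {sigma2 / n:.4f}")
```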
Efficient estimators
- An estimator is asymptotically efficient if its asymptotic variance equals the Cramér-Rao lower bound
- Among consistent, asymptotically normal estimators, an efficient estimator has the smallest possible asymptotic variance
- Examples of asymptotically efficient estimators include the maximum likelihood estimator (under certain conditions) and the generalized method of moments estimator (with optimal weighting matrix)
Relative efficiency
- Relative efficiency compares the efficiency of two estimators by taking the ratio of their variances
- An estimator with a smaller variance is considered more efficient
- Relative efficiency can help choose between competing estimators (OLS vs. GLS) and determine the required sample size for a desired level of precision
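A standard illustration compares the sample mean and sample median for normal data, where the median's efficiency relative to the mean is $2/\pi \approx 0.637$. The Monte Carlo sketch below (sample size and replication count are arbitrary) checks this:

```python
# Relative efficiency: for normal data the sample median has asymptotic
# variance pi * sigma^2 / (2n), so Var(mean)/Var(median) -> 2/pi.
import numpy as np

rng = np.random.default_rng(5)
n, reps = 200, 50_000
x = rng.normal(size=(reps, n))

var_mean = x.mean(axis=1).var()
var_median = np.median(x, axis=1).var()
print(f"Var(mean)/Var(median) = {var_mean / var_median:.3f} (theory: {2/np.pi:.3f})")
```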
Hypothesis testing with asymptotic results
- Asymptotic theory provides a framework for conducting hypothesis tests when the sample size is large
- These tests rely on the asymptotic distribution of the test statistics under the null and alternative hypotheses
Wald tests
- Wald tests are based on the asymptotic normal distribution of the estimators
- They compare the estimated parameter values to the hypothesized values, taking into account the asymptotic variance
- Wald tests are easy to compute but may have poor finite sample properties and can be sensitive to the parameterization of the model
Likelihood ratio tests
- Likelihood ratio (LR) tests compare the likelihood of the data under the null and alternative hypotheses
- They are based on the difference in the log-likelihood values between the restricted and unrestricted models
- LR tests have good asymptotic properties and are invariant to the parameterization, but they require estimating both the restricted and unrestricted models
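A minimal LR test sketch using simulated data and Gaussian regressions, where the restriction sets one coefficient to zero (the model and seed are illustrative choices):

```python
# LR test: fit restricted and unrestricted regressions, then compare
# log-likelihoods; LR = 2*(llf_u - llf_r) is asymptotically chi-squared
# with df = number of restrictions.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.0 * x2 + rng.normal(size=n)    # x2 truly irrelevant

unrestricted = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
restricted = sm.OLS(y, sm.add_constant(x1)).fit()     # imposes beta2 = 0

lr = 2 * (unrestricted.llf - restricted.llf)
print(f"LR = {lr:.3f}, p-value = {1 - stats.chi2.cdf(lr, df=1):.3f}")
```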
Lagrange multiplier tests
- Lagrange multiplier (LM) tests, also known as score tests, evaluate the gradient of the log-likelihood function at the restricted parameter values
- They only require estimating the model under the null hypothesis, making them computationally attractive
- LM tests are particularly useful for testing hypotheses about a subset of parameters or for detecting misspecification (heteroskedasticity, serial correlation)
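As a sketch, the Breusch-Pagan test for heteroskedasticity is an LM test: it needs only the residuals from the model fitted under the null of homoskedasticity. The simulated data below (an illustrative design) have an error variance that grows with the regressor:

```python
# LM test sketch: Breusch-Pagan heteroskedasticity test, computed from
# residuals of the model estimated under the null hypothesis only.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(7)
n = 500
x = rng.uniform(1, 5, size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n) * x        # error std dev grows with x

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()                           # fitted under the null only
lm_stat, lm_pvalue, _, _ = het_breuschpagan(res.resid, X)
print(f"LM = {lm_stat:.2f}, p-value = {lm_pvalue:.4f}")
```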
Confidence intervals with asymptotic results
- Confidence intervals provide a range of plausible values for the population parameters based on the sample estimates
- Asymptotic theory allows for the construction of confidence intervals using the asymptotic distribution of the estimators
Standard errors of estimators
- Standard errors measure the uncertainty associated with the parameter estimates
- They are calculated as the square root of the estimated variance of the estimator, i.e., the asymptotic variance divided by the sample size $n$
- Smaller standard errors indicate more precise estimates and narrower confidence intervals
Asymptotic confidence intervals
- Asymptotic confidence intervals are constructed using the point estimate and the standard error
- For a 95% confidence interval, the lower and upper bounds are typically the point estimate plus or minus 1.96 times the standard error
- These intervals are valid for large samples and rely on the asymptotic normality of the estimators
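A minimal sketch for the mean of simulated exponential data, using $\bar{x} \pm 1.96 \cdot s/\sqrt{n}$ (the data-generating process is an arbitrary choice):

```python
# Asymptotic 95% CI: point estimate plus or minus 1.96 standard errors.
import numpy as np

rng = np.random.default_rng(8)
x = rng.exponential(scale=2.0, size=1_000)        # true mean = 2.0

xbar = x.mean()
se = x.std(ddof=1) / np.sqrt(x.size)              # estimated standard error
print(f"95% CI: [{xbar - 1.96 * se:.3f}, {xbar + 1.96 * se:.3f}]")
```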
Bootstrapping confidence intervals
- Bootstrapping is a resampling technique that can be used to construct confidence intervals without relying on asymptotic theory
- It involves repeatedly drawing samples with replacement from the original data and calculating the estimates for each bootstrap sample
- The distribution of the bootstrap estimates approximates the sampling distribution of the estimator, allowing for the construction of percentile or pivotal confidence intervals
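A percentile bootstrap sketch for the same kind of mean estimate; the number of bootstrap replications $B$ and the simulated data are arbitrary choices:

```python
# Percentile bootstrap CI: resample with replacement, recompute the
# estimate each time, take the 2.5% and 97.5% quantiles.
import numpy as np

rng = np.random.default_rng(9)
x = rng.exponential(scale=2.0, size=200)
B = 2_000

boot = np.array([
    rng.choice(x, size=x.size, replace=True).mean() for _ in range(B)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% percentile bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```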
Asymptotic properties of maximum likelihood estimators
- Maximum likelihood estimation (MLE) is a popular method for estimating the parameters of a model by maximizing the likelihood function
- MLE has desirable asymptotic properties, making it a powerful tool in econometrics
Consistency of ML estimators
- Under certain regularity conditions, maximum likelihood estimators are consistent
- These conditions include the correct specification of the model, the identifiability of the parameters, and the existence of a unique global maximum of the likelihood function
- Consistency ensures that the ML estimates converge to the true parameter values as the sample size increases
Asymptotic normality of ML estimators
- ML estimators are asymptotically normally distributed under appropriate regularity conditions
- The asymptotic variance of the ML estimator is given by the inverse of the Fisher information matrix
- This property allows for the construction of confidence intervals and hypothesis tests based on the normal distribution
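For example, for i.i.d. Exponential draws with rate $\lambda$, the MLE is $\hat{\lambda} = 1/\bar{x}$ and the per-observation Fisher information is $1/\lambda^2$, giving asymptotic standard error $\hat{\lambda}/\sqrt{n}$. A short sketch (true rate and sample size are illustrative):

```python
# MLE asymptotic normality for Exponential(rate = lambda): the MLE is
# 1/xbar, with standard error lambda_hat / sqrt(n) from the inverse
# Fisher information.
import numpy as np

rng = np.random.default_rng(10)
lam, n = 2.0, 1_000
x = rng.exponential(scale=1 / lam, size=n)

lam_hat = 1 / x.mean()                       # MLE
se = lam_hat / np.sqrt(n)                    # from inverse Fisher information
print(f"lambda_hat = {lam_hat:.3f}, 95% CI: "
      f"[{lam_hat - 1.96 * se:.3f}, {lam_hat + 1.96 * se:.3f}]")
```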
Asymptotic efficiency of ML estimators
- ML estimators are asymptotically efficient, meaning they achieve the Cramér-Rao lower bound for the variance
- This efficiency property holds under the correct specification of the model and certain regularity conditions
- Asymptotically efficient ML estimators provide the most precise estimates among all consistent estimators
Asymptotic properties of generalized method of moments estimators
- The generalized method of moments (GMM) is a flexible estimation framework that includes many other estimators as special cases (OLS, IV, ML)
- GMM estimators have attractive asymptotic properties under certain conditions
Consistency of GMM estimators
- GMM estimators are consistent under the assumption that the moment conditions are correctly specified and the parameters are identified
- Consistency requires that the number of moment conditions is at least as large as the number of parameters and that the moment conditions hold in the population
- The consistency of GMM estimators is robust to certain types of misspecification (heteroskedasticity, serial correlation)
Asymptotic normality of GMM estimators
- GMM estimators are asymptotically normally distributed under suitable regularity conditions
- The asymptotic variance of the GMM estimator depends on the choice of the weighting matrix, which determines the relative importance of the moment conditions
- The optimal weighting matrix is the inverse of the variance-covariance matrix of the moment conditions, which leads to the most efficient GMM estimator
Efficiency of GMM estimators
- The efficiency of GMM estimators depends on the choice of the moment conditions and the weighting matrix
- With the optimal weighting matrix, GMM estimators are asymptotically efficient within the class of estimators based on the same moment conditions, attaining the Cramér-Rao bound when the moments coincide with the score of a correctly specified likelihood
- However, the optimal weighting matrix is typically unknown and must be estimated, which can affect the finite sample properties of the estimator (two-step GMM, iterated GMM)
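A hand-rolled two-step GMM sketch for an overidentified linear IV model, $y = \beta x + u$ with two valid instruments and moment conditions $E[z(y - \beta x)] = 0$; the data-generating process and all names are illustrative assumptions, not a production implementation:

```python
# Two-step GMM: step 1 uses the identity weighting matrix; step 2
# re-weights with the inverse of the estimated moment covariance.
import numpy as np

rng = np.random.default_rng(11)
n, beta_true = 2_000, 1.5
z = rng.normal(size=(n, 2))                      # two instruments (overidentified)
v = rng.normal(size=n)
x = z @ np.array([1.0, 0.5]) + v                 # regressor, endogenous through v
u = 0.8 * v + rng.normal(size=n)                 # error correlated with x
y = beta_true * x + u

def gmm_estimate(W):
    """Minimize gbar(beta)' W gbar(beta), where gbar(beta) = a - b*beta
    with a = mean(z*y) and b = mean(z*x); linear in beta, so closed form."""
    a = (z * y[:, None]).mean(axis=0)
    b = (z * x[:, None]).mean(axis=0)
    return (b @ W @ a) / (b @ W @ b)

beta1 = gmm_estimate(np.eye(2))                  # step 1: identity weighting matrix
g_i = z * (y - beta1 * x)[:, None]               # moment contributions at step-1 estimate
S = g_i.T @ g_i / n                              # estimated covariance of the moments
beta2 = gmm_estimate(np.linalg.inv(S))           # step 2: optimal weighting matrix
print(f"one-step: {beta1:.4f}, two-step: {beta2:.4f}, true: {beta_true}")
```

Because the moments are linear in $\beta$, the minimizer has a closed form here; in nonlinear models a numerical optimizer would take its place.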