Sensitivity analysis is a crucial tool in causal inference, helping researchers assess how robust their findings are to potential violations of key assumptions. It involves systematically varying the strength of unmeasured confounders or simulating different scenarios to quantify uncertainty in causal estimates.
By conducting sensitivity analyses, researchers can transparently communicate the range of plausible causal effects under different assumptions. This approach allows for a more nuanced understanding of causal relationships, highlighting the importance of carefully considering untestable assumptions in observational studies.
Sensitivity analysis overview
Definition of sensitivity analysis
- Sensitivity analysis assesses the robustness of causal estimates to potential unmeasured confounding or violations of key assumptions
- Involves systematically varying the strength of unmeasured confounders or simulating potential outcomes under different scenarios
- Helps quantify the degree to which causal conclusions might change under alternative assumptions
Purpose in causal inference
- Causal inference relies on strong assumptions (exchangeability, positivity, consistency) that are often untestable in observational studies
- Sensitivity analysis provides a way to assess the plausibility of these assumptions and the robustness of causal estimates to potential violations
- Allows researchers to transparently communicate the uncertainty surrounding causal conclusions and identify the most critical assumptions
Methods for conducting sensitivity analysis
Varying unmeasured confounder strength
- One approach to sensitivity analysis is to posit the existence of an unmeasured confounder and vary its strength of association with treatment and outcome
- This can be done by simulating the confounder's distribution and specifying a range of plausible values for its effect on treatment and outcome
- The causal estimate is then recalculated under each scenario to assess how much it changes as the assumed confounder strength varies
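As a concrete illustration, the sketch below applies a simple bias formula for a single binary unmeasured confounder to simulated data: for each assumed effect of the confounder on the outcome (gamma) and assumed difference in its prevalence between treated and untreated units (p1 - p0), the adjusted estimate is recomputed. The dataset, parameter grid, and variable names are all hypothetical.

```python
# Grid-based sensitivity analysis for a single binary unmeasured confounder U,
# using the simple bias formula for a linear outcome model:
#   bias = gamma * (p1 - p0)
# where gamma is the assumed effect of U on the outcome and p1, p0 are the
# assumed prevalences of U among treated and untreated units.
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data: X is a measured confounder, T the treatment, Y the outcome.
n = 2000
X = rng.normal(size=n)
T = rng.binomial(1, 1 / (1 + np.exp(-X)))    # treatment depends on X
Y = 1.0 * T + 2.0 * X + rng.normal(size=n)   # true effect of T is 1.0

# Adjusted estimate from the observed data: OLS of Y on T and X (intercept included).
design = np.column_stack([np.ones(n), T, X])
beta_hat = np.linalg.lstsq(design, Y, rcond=None)[0]
observed_effect = beta_hat[1]

# Sensitivity grid over the assumed strength of the unmeasured confounder.
for gamma in (0.5, 1.0, 2.0):
    for p1, p0 in ((0.6, 0.4), (0.8, 0.2)):
        adjusted = observed_effect - gamma * (p1 - p0)
        print(f"gamma={gamma:.1f}, p1-p0={p1 - p0:.1f}: adjusted effect = {adjusted:.2f}")
```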
Simulating potential outcomes
- Another approach is to directly simulate the potential outcomes under different treatment assignments, based on the observed data and assumptions about the causal mechanism
- This allows for assessing the sensitivity of causal estimates to violations of the consistency assumption or the presence of interference between units
- The distribution of simulated potential outcomes can be used to construct bounds on the true causal effect
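The sketch below illustrates the idea under strong simplifying assumptions: each unit's unobserved potential outcome is imputed from the opposite arm's observed mean plus an assumed confounding shift delta, and the implied average treatment effect (ATE) is recomputed for each scenario. The data, the imputation rule, and the range of delta are illustrative assumptions, not quantities identified by the data.

```python
# Hypothetical potential-outcomes simulation: impute the missing potential
# outcome under an assumed confounding shift `delta` and recompute the ATE.
import numpy as np

rng = np.random.default_rng(1)

# Simulated observed data: balanced binary treatment, continuous outcome.
n = 1000
T = rng.binomial(1, 0.5, size=n)
Y = 2.0 * T + rng.normal(size=n)          # naive difference in means is about 2.0

mean_treated = Y[T == 1].mean()
mean_control = Y[T == 0].mean()

for delta in (-1.0, -0.5, 0.0, 0.5, 1.0):
    # Scenario: treated units would have scored `delta` higher than observed
    # controls even without treatment (and vice versa for control units).
    y0_imputed = np.where(T == 1, mean_control + delta, Y)
    y1_imputed = np.where(T == 0, mean_treated - delta, Y)
    implied_ate = np.mean(y1_imputed - y0_imputed)
    print(f"delta={delta:+.1f}: implied ATE = {implied_ate:.2f}")
```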
Bounds on treatment effects
- Sensitivity analysis can also be used to derive bounds on the treatment effect, rather than point estimates
- This involves specifying a range of plausible values for the sensitivity parameters (unmeasured confounder strength, potential outcome distributions) and calculating the corresponding range of possible treatment effects
- Bounds provide a more transparent way to communicate the uncertainty in causal estimates and the range of conclusions that are consistent with the data and assumptions
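A minimal sketch of this idea, assuming only that the bias from unmeasured confounding lies within a symmetric range around zero, follows below; the point estimate and the bias ranges are made-up numbers.

```python
# Turn a range of sensitivity parameters into bounds on the treatment effect,
# assuming the confounding bias lies in [-max_bias, +max_bias].
observed_effect = 1.8    # hypothetical adjusted point estimate

for max_bias in (0.2, 0.5, 1.0):
    lower, upper = observed_effect - max_bias, observed_effect + max_bias
    verdict = "sign is robust" if lower > 0 else "sign could reverse"
    print(f"max |bias| = {max_bias:.1f}: effect in [{lower:.1f}, {upper:.1f}] ({verdict})")
```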
Interpreting sensitivity analysis results
Robustness of causal estimates
- The primary goal of sensitivity analysis is to assess the robustness of causal estimates to potential violations of key assumptions
- If the causal estimate remains relatively stable across a wide range of sensitivity parameters, this suggests that the conclusion is robust and less sensitive to unmeasured confounding or other violations
- Conversely, if the estimate changes substantially or even reverses sign under plausible scenarios, this indicates that the conclusion is more sensitive and uncertain
Threshold for effect reversal
- One key metric in sensitivity analysis is the threshold for effect reversal, which is the minimum strength of unmeasured confounding required to explain away the observed treatment effect
- This threshold can be expressed in terms of the association between the unmeasured confounder and treatment/outcome, or the proportion of variation in treatment/outcome that would need to be explained by the confounder
- A high threshold for reversal suggests that the causal conclusion is more robust, while a low threshold indicates greater sensitivity to unmeasured confounding
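One widely used summary of this threshold is the E-value of VanderWeele and Ding (2017): the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away the observed association. The sketch below computes it for a hypothetical risk ratio and its lower confidence limit.

```python
# E-value for a risk ratio: E = RR + sqrt(RR * (RR - 1)), computed on the
# RR >= 1 scale (take the reciprocal for protective effects).
import math

def e_value(rr: float) -> float:
    rr = max(rr, 1.0 / rr)
    return rr + math.sqrt(rr * (rr - 1.0))

observed_rr, lower_ci = 2.0, 1.3    # hypothetical estimate and lower 95% limit
print(f"E-value (point estimate):  {e_value(observed_rr):.2f}")
print(f"E-value (confidence limit): {e_value(lower_ci):.2f}")
```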
Communicating uncertainty
- Sensitivity analysis provides a way to transparently communicate the uncertainty surrounding causal conclusions, rather than presenting point estimates as definitive
- Results should be presented in terms of the range of plausible causal effects under different assumptions, along with the threshold for effect reversal
- This allows readers to assess the credibility of the causal claim and understand the degree to which it depends on untestable assumptions
Sensitivity analysis in practice
Selecting key assumptions to test
- In practice, sensitivity analysis requires carefully selecting the most critical assumptions to test, based on substantive knowledge and the potential for violations
- This may involve focusing on the exchangeability assumption and potential unmeasured confounders, or the consistency assumption and possible interference between units
- The choice of sensitivity parameters should be guided by prior literature, expert opinion, and the specific causal question and study design
Incorporating domain knowledge
- Sensitivity analysis is most informative when it incorporates domain knowledge to specify plausible ranges for sensitivity parameters
- This may involve using prior studies or theoretical arguments to bound the strength of unmeasured confounding, or using substantive expertise to rule out certain causal mechanisms
- Incorporating domain knowledge helps to ensure that sensitivity analyses are grounded in reality and not just mathematical exercises
Limitations of sensitivity analysis
- While sensitivity analysis is a valuable tool for assessing the robustness of causal conclusions, it has some important limitations
- Sensitivity analysis cannot prove that a causal estimate is unbiased, only that it is robust to certain violations of assumptions
- The range of scenarios considered in sensitivity analysis is necessarily limited, and there may be other sources of bias or uncertainty that are not captured
- Sensitivity analysis should be seen as a complement to, not a substitute for, careful study design and data collection to minimize potential confounding and other biases
Sensitivity analysis vs other approaches
Comparison to instrumental variables
- Instrumental variable (IV) methods provide an alternative approach to estimating causal effects in the presence of unmeasured confounding
- IV methods rely on finding a variable that affects treatment but affects the outcome only through treatment (the exclusion restriction), and using this variable to estimate the causal effect
- Sensitivity analysis can be used to assess the robustness of IV estimates to violations of the exclusion restriction or other assumptions
Relation to bounding methods
- Bounding methods, such as Manski's no-assumptions (worst-case) bounds, provide a way to estimate the range of possible causal effects consistent with the observed data alone
- These methods require only minimal assumptions (typically just that the outcome is bounded), but often yield very wide bounds that may not be informative
- Sensitivity analysis can be seen as a way to narrow these bounds by incorporating additional assumptions, while still acknowledging the uncertainty in these assumptions
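For a binary (or otherwise bounded) outcome, the worst-case bounds can be computed directly by filling in the unobserved potential outcomes with their logical extremes, as in the hypothetical sketch below.

```python
# Manski-style no-assumptions (worst-case) bounds for a binary outcome:
# the unobserved potential outcomes are set to 0 or 1.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
T = rng.binomial(1, 0.4, size=n)
Y = rng.binomial(1, 0.3 + 0.2 * T)    # binary outcome

p = T.mean()                          # P(T = 1)
ey1_obs = Y[T == 1].mean()            # E[Y | T = 1]
ey0_obs = Y[T == 0].mean()            # E[Y | T = 0]

# Bounds on E[Y(1)] and E[Y(0)] by filling in the unobserved arm with 0 or 1.
ey1_lo, ey1_hi = ey1_obs * p, ey1_obs * p + (1 - p)
ey0_lo, ey0_hi = ey0_obs * (1 - p), ey0_obs * (1 - p) + p

print(f"ATE bounds: [{ey1_lo - ey0_hi:.2f}, {ey1_hi - ey0_lo:.2f}]")  # width is always 1
```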
Sensitivity analysis with multiple confounders
- Most sensitivity analysis methods focus on a single unmeasured confounder, but in reality there may be multiple sources of confounding
- Extending sensitivity analysis to multiple confounders requires specifying the joint distribution of the confounders and their effects on treatment and outcome
- This can quickly become intractable as the number of confounders grows, requiring simplifying assumptions or alternative approaches such as bias formulas or Bayesian methods
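As a rough illustration, and only under the strong simplifying assumption that the unmeasured confounders act additively and independently, one might sum the per-confounder bias terms; the scenario values below are purely hypothetical.

```python
# Combined bias from several unmeasured confounders under an additive,
# independent-bias simplification (a strong assumption).
observed_effect = 1.5

# Each tuple: (assumed effect of U_k on the outcome,
#              assumed difference in U_k prevalence between arms).
confounder_scenarios = [(0.8, 0.3), (0.5, 0.2), (1.2, 0.1)]

total_bias = sum(gamma * diff for gamma, diff in confounder_scenarios)
print(f"combined bias = {total_bias:.2f}, adjusted effect = {observed_effect - total_bias:.2f}")
```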
Advanced topics in sensitivity analysis
Sensitivity parameters for complex designs
- Sensitivity analysis for complex study designs, such as matched or stratified designs, requires specifying sensitivity parameters that capture the relevant features of the design
- For example, in a matched design, the sensitivity parameter may bound how much the odds of treatment can differ within a matched pair because of an unmeasured confounder (as in Rosenbaum's Gamma), rather than describing an overall confounder-outcome association
- Sensitivity analysis for time-varying treatments or outcomes may require specifying parameters for the time-varying confounding or selection bias
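For matched pairs with a binary outcome, one standard approach in this spirit is Rosenbaum's sensitivity analysis: a parameter Gamma bounds how far the within-pair odds of treatment can be distorted by an unmeasured confounder, and the worst-case p-value of a McNemar-type sign test is recomputed for each Gamma. The counts in the sketch below are hypothetical.

```python
# Rosenbaum-style sensitivity analysis for a matched-pairs design with a
# binary outcome: under Gamma, the worst-case null probability that the
# treated member of a discordant pair is the one with the event is
# Gamma / (1 + Gamma).
from scipy.stats import binom

n_discordant = 60        # pairs where exactly one member had the event
n_treated_events = 42    # discordant pairs where it was the treated member

for gamma in (1.0, 1.5, 2.0, 3.0):
    p_upper = gamma / (1.0 + gamma)
    p_value = binom.sf(n_treated_events - 1, n_discordant, p_upper)  # P(X >= k)
    print(f"Gamma={gamma:.1f}: worst-case one-sided p = {p_value:.4f}")
```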
Bayesian sensitivity analysis
- Bayesian methods provide a natural framework for incorporating uncertainty about sensitivity parameters into causal estimates
- Prior distributions can be specified for the sensitivity parameters, reflecting the range of plausible values based on domain knowledge
- The posterior distribution of the causal effect can then be calculated, marginalizing over the sensitivity parameters to reflect the overall uncertainty
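A minimal Monte Carlo sketch of this idea is shown below: a prior is placed on the confounding bias, and the bias-adjusted effect is averaged over that prior together with the usual sampling uncertainty. The observed estimate, its standard error, and the prior are illustrative assumptions, and this is only a rough approximation to a full Bayesian model.

```python
# Monte Carlo approximation to a Bayesian sensitivity analysis: draw the effect
# from its sampling distribution, draw the bias from a prior, and subtract.
import numpy as np

rng = np.random.default_rng(3)

observed_effect, se = 1.2, 0.3    # hypothetical estimate and standard error
n_draws = 100_000

effect_draws = rng.normal(observed_effect, se, size=n_draws)   # sampling uncertainty
bias_draws = rng.normal(0.4, 0.2, size=n_draws)                # prior on confounding bias
adjusted = effect_draws - bias_draws

lo, hi = np.percentile(adjusted, [2.5, 97.5])
print(f"posterior-style 95% interval: [{lo:.2f}, {hi:.2f}]")
print(f"P(adjusted effect > 0) = {(adjusted > 0).mean():.2f}")
```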
Sensitivity analysis with missing data
- Missing data is a common problem in observational studies, and can be a source of bias if the missingness is related to treatment, outcome, or confounders
- Sensitivity analysis can be used to assess the robustness of causal estimates to different assumptions about the missing data mechanism
- This may involve specifying a range of plausible values for the missing data, or using pattern mixture models to estimate causal effects under different missingness scenarios
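The sketch below illustrates a simple delta-adjustment in the pattern-mixture spirit: missing outcomes in the treated arm are assumed to differ from comparable observed outcomes by an offset delta, and the effect estimate is recomputed as delta varies. The data, the missingness rate, and the range of delta are all hypothetical.

```python
# Delta-adjustment sensitivity analysis for missing outcome data: impute
# missing outcomes from the observed arm means, shifting the treated arm's
# missing values by an assumed offset `delta` (a departure from missing at random).
import numpy as np

rng = np.random.default_rng(4)

n = 2000
T = rng.binomial(1, 0.5, size=n)
Y = 1.0 * T + rng.normal(size=n)
observed = rng.random(n) > 0.25    # ~25% of outcomes missing (mechanism unknown)

arm_means = {t: Y[(T == t) & observed].mean() for t in (0, 1)}

for delta in (-0.5, 0.0, 0.5):
    # Scenario: missing treated outcomes are `delta` higher/lower than
    # comparable observed treated outcomes; controls imputed at their arm mean.
    imputed = np.where(T == 1, arm_means[1] + delta, arm_means[0])
    Y_filled = np.where(observed, Y, imputed)
    effect = Y_filled[T == 1].mean() - Y_filled[T == 0].mean()
    print(f"delta={delta:+.1f}: estimated effect = {effect:.2f}")
```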