- Analytics use case(s): Population-Level Estimation
- Study type: Methods Research
- Tags: negative controls, Simulation, LEGEND, Hypertension
- Study lead: Erica A Voss
- Study lead forums tag: [ericaVoss]
- Study start date: May 17, 2018
- Study end date: August 20, 2023
- Protocol: None
- Publications: TBD
- Results explorer: Not Applicable
Abstract
Background
Negative controls help assess potential residual bias in observational study designs. These controls are exposure-outcome pairs for which no causal link is believed to exist, so deviations of their estimates from the null signal residual systematic error. The Common Evidence Model (CEM) developed by the Observational Health Data Sciences and Informatics (OHDSI) community facilitates the identification of negative controls. However, concerns arise when the null assumption is breached during selection.
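For context, the usual formulation of empirical calibration in the OHDSI literature can be written as follows (notation here is ours, not taken from the study protocol): each negative control $i$ yields a log effect-size estimate $\hat\theta_i$ with standard error $\hat\tau_i$, residual systematic error is assumed to be normally distributed, and the fitted empirical null is used to calibrate the p-value of an estimate of interest $(\hat\theta, \hat\tau)$:

$$
\hat\theta_i \sim N\!\left(\mu,\; \sigma^2 + \hat\tau_i^2\right),
\qquad
p_{\text{cal}} = 2\left(1 - \Phi\!\left(\frac{\lvert \hat\theta - \hat\mu \rvert}{\sqrt{\hat\sigma^2 + \hat\tau^2}}\right)\right),
$$

where $(\hat\mu, \hat\sigma)$ are the maximum-likelihood estimates of the mean and standard deviation of systematic error across the negative controls, and $\Phi$ is the standard normal cumulative distribution function. The null assumption is that the true log effect size for each control is zero; the concern examined in this study is what happens when that assumption fails for some controls.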
Methods
This study probes the effect of non-null negative controls on empirical calibration through a simulation study and a replication study, assessing how robust the calibration process remains when some of the selected controls violate the null assumption. A simulation-style sketch of this probe is given after this paragraph.
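The following is a minimal Python sketch of the kind of probe described above: negative control estimates are simulated, a few of which violate the null assumption, an empirical null distribution is fitted by maximum likelihood, and calibrated p-values are compared with and without the violating controls. This is an illustrative sketch following the standard empirical calibration formulation, not the study's actual code; the parameter values, function names, and use of Python rather than the OHDSI EmpiricalCalibration R package are assumptions.

```python
# Illustrative sketch only: fit an empirical null to simulated negative control
# estimates, some of which breach the null assumption, and compare calibrated
# p-values. All parameter values below are assumptions, not study settings.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)

def simulate_negative_controls(n=50, n_violating=5, violation_log_rr=0.5,
                               bias_mean=0.1, bias_sd=0.1):
    """Simulate log effect estimates for negative controls.

    Most controls have a true log relative risk of 0 (null); `n_violating`
    controls breach the null assumption with true log RR `violation_log_rr`.
    Each estimate also receives systematic error N(bias_mean, bias_sd^2) and
    random error scaled by its standard error.
    """
    true_log_rr = np.zeros(n)
    true_log_rr[:n_violating] = violation_log_rr
    se = rng.uniform(0.1, 0.3, size=n)                 # per-control standard errors
    bias = rng.normal(bias_mean, bias_sd, size=n)      # residual systematic error
    estimates = true_log_rr + bias + rng.normal(0.0, se)
    return estimates, se

def fit_empirical_null(estimates, se):
    """Maximum-likelihood fit of the empirical null N(mu, sigma^2),
    assuming observed estimates ~ N(mu, sigma^2 + se_i^2)."""
    def neg_log_likelihood(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        return -np.sum(norm.logpdf(estimates, loc=mu,
                                   scale=np.sqrt(sigma**2 + se**2)))
    result = minimize(neg_log_likelihood, x0=[0.0, np.log(0.1)])
    mu, sigma = result.x[0], np.exp(result.x[1])
    return mu, sigma

def calibrated_p_value(estimate, se, mu, sigma):
    """Two-sided p-value of an estimate under the fitted empirical null."""
    z = (estimate - mu) / np.sqrt(sigma**2 + se**2)
    return 2 * (1 - norm.cdf(abs(z)))

# Fit the empirical null with and without the controls that violate the null.
est, se = simulate_negative_controls()
mu_all, sigma_all = fit_empirical_null(est, se)
mu_clean, sigma_clean = fit_empirical_null(est[5:], se[5:])

# Calibrate a hypothetical outcome-of-interest estimate under both nulls.
theta, tau = 0.4, 0.15
print("null (with violations):    mu=%.3f sigma=%.3f p=%.3f"
      % (mu_all, sigma_all, calibrated_p_value(theta, tau, mu_all, sigma_all)))
print("null (violations removed): mu=%.3f sigma=%.3f p=%.3f"
      % (mu_clean, sigma_clean, calibrated_p_value(theta, tau, mu_clean, sigma_clean)))
```

In a sketch like this, the violating controls inflate the fitted systematic-error distribution, so calibrated p-values tend to become larger (more conservative) rather than anti-conservative, which mirrors the behavior reported in the results below.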
Results
Empirical calibration remained robust to a small number of errors in negative control selection and rendered results more conservative. Even though some negative controls breached the null assumption, the empirical calibration process tolerated these deviations as long as the affected controls did not have strong exposure-outcome associations.
Conclusions
While empirical calibration can handle some negative controls that breach the null assumption, it is crucial to avoid controls with strong associations. CEM can aid in filtering out inappropriate drug-outcome pairs. Thus, the possibility that negative control lists contain pairs violating the null assumption should not change the recommendation that observational studies always include negative controls to derive an empirical null distribution for computing calibrated p-values.