Researchers and organizations often use evidence from randomized controlled trials (RCTs) to determine the efficacy of a treatment or intervention under ideal conditions, while observational study designs are used to measure the effectiveness of an intervention in non-experimental, 'real world' settings. However, RCTs and observational studies addressing the same question sometimes yield different results. This review explores whether such differences are related to the study design itself or to other study characteristics.
This review summarizes the results of methodological reviews that compare the outcomes of observational studies with randomized trials addressing the same question, as well as methodological reviews that compare the outcomes of different types of observational studies.
The main objectives of the review are to assess the impact of study design, including RCTs versus observational designs (e.g. cohort versus case-control studies), on the estimated effect measures, and to explore methodological variables that might explain any differences.
We searched multiple electronic databases and the reference lists of relevant articles to identify methodological systematic reviews that compared quantitative effect size estimates, measuring the efficacy or effectiveness of interventions, between trials and observational studies, or between different observational study designs. We assessed the risk of bias of the included reviews.
Our results provide little evidence of significant differences in effect estimates between observational studies and RCTs, regardless of the specific observational study design, heterogeneity, the inclusion of pharmacological studies, or the use of propensity score adjustment. Factors other than study design per se need to be considered when exploring reasons for a lack of agreement between the results of RCTs and observational studies.