Assessing the process used to synthesise the evidence in clinical practice guidelines (CPGs) enables users to determine the trustworthiness of the recommendations. We aimed to assess whether systematic methods were used when synthesising the evidence for CPGs, and whether reviews or ‘overviews of reviews’ were cited in support of recommendations.
Methods and Analysis:
We followed a study protocol. CPGs published in 2017 and 2018 were retrieved from TRIP and Epistemonikos. We randomly sorted and sequentially screened the CPGs to select the first 50 that met our inclusion criteria. Our primary outcomes were the numbers and proportions of recommendations based on reviews or ‘overviews’, and of CPGs using either a systematic or non-systematic process to gather, assess, and synthesise evidence. We also recorded whether critical appraisal was conducted, and performed a chi-square test of independence to examine the relationship between variables.
Results:
Of the 50 guidelines, 34% used systematic methods to synthesise the evidence informing their recommendations: these guidelines clearly reported their objectives and eligibility criteria, conducted comprehensive search strategies, and assessed the quality of the included studies. The remaining 66% of CPGs reported non-systematic methods to develop their recommendations; this percentage is likely an underestimate because some CPGs were excluded during study selection. Overall, 90% of CPGs cited reviews to inform recommendations, and one fifth cited a Cochrane systematic review. Of the 29 CPGs that included reviews, 21% critically appraised them. Sixty per cent of CPGs assessed the quality of primary studies.
Conclusions:
We used a novel methodology to evaluate recommendations in a random sample of CPGs and found that 62% did not use a systematic process to gather, appraise, and synthesise the evidence. Significant improvement is needed in the conduct and reporting of CPG methods. Guideline developers should use systematic methods endorsed by reputable evidence synthesis organisations.