
Systematic Reviews and Meta-Analyses: Critical Appraisal

This guide is designed to help novice and experienced review teams navigate the systematic review and/or meta-analysis process.

Every included study must undergo critical appraisal to evaluate its risk of bias (that is, its internal and external validity).

This step often occurs simultaneously with the Data Extraction phase. It is a vital stage of the systematic review process, upholding its cornerstone aim of reducing bias.


Critical Appraisal


Critical appraisal is also referred to as quality assessment, risk of bias assessment, and similar variations. Sometimes the critical appraisal phase is confused with assessing the certainty of evidence; although related, these are independent stages of the systematic review process.

According to the Centre for Evidence-Based Medicine (CEBM):

"Critical appraisal is the process of carefully and systematically assessing the outcome of scientific research (evidence) to judge its trustworthiness, value and relevance in a particular context. Critical appraisal looks at the way a study is conducted and examines factors such as internal validity, generalizability and relevance."

Systematic reviews require a formal, systematic, uniform appraisal of the quality - or risk of bias - of all relevant studies. In a critical appraisal, you are examining the methods, not the results.


Process Details

Use risk of bias tools for this stage; these tools are often formatted as checklists. You can find more about risk of bias tools in the next tab. If a refresher on some common biases, definitions, and examples would be helpful, check out the Catalogue of Bias from the University of Oxford and CEBM.

As with the other stages of a systematic review, two reviewers should assess the risk of bias in each reference. Your team should calculate and report interrater reliability, deciding ahead of time how conflicts will be resolved. Oftentimes the critical appraisal occurs at the same time as data extraction.
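A common measure of interrater reliability for two reviewers' categorical judgments is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The Python sketch below is a minimal illustration with hypothetical ratings (the study judgments and the reviewer names are invented for the example); it is not tied to any specific risk of bias tool.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical overall risk-of-bias judgments for ten studies
reviewer_1 = ["low", "low", "high", "some", "low", "high", "low", "some", "low", "high"]
reviewer_2 = ["low", "some", "high", "some", "low", "high", "low", "low", "low", "high"]

print(f"Cohen's kappa: {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # prints 0.68
```

Values above roughly 0.6 are conventionally read as substantial agreement; lower values suggest the team should revisit their tool definitions or pilot further.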

In addition to the formal risk of bias assessment, your team should also consider meta-biases such as publication bias and selective reporting. Search for errata and retractions related to included research, and consider other limitations of, and concerns about, the included studies and how these may impact the reliability of your review.


Note: Subjectivity of Critical Appraisal 

The critical appraisal is inherently subjective, from the selection of the RoB tool(s) to the final assessment of each study. Therefore, it is important to consider how tools compare, and how this process may impact the results of your review. Check out these studies evaluating Risk of Bias Tools:

Page MJ, McKenzie JE, Higgins JPT. Tools for assessing risk of reporting biases in studies and syntheses of studies: a systematic review. BMJ Open. 2018;8:e019703. doi: 10.1136/bmjopen-2017-019703

Losilla, J.-M., Oliveras, I., Marin-Garcia, J. A., & Vives, J. (2018). Three risk of bias tools lead to opposite conclusions in observational research synthesis. Journal of Clinical Epidemiology, 101, 61–72. https://doi.org/10.1016/j.jclinepi.2018.05.021

Margulis, A. V., Pladevall, M., Riera-Guardia, N., Varas-Lorenzo, C., Hazell, L., Berkman, N., Viswanathan, M., & Perez-Gutthann, S. (2014). Quality assessment of observational studies in a drug-safety systematic review, comparison of two tools: The Newcastle-Ottawa Scale and the RTI item bank. Clinical Epidemiology, 359. https://doi.org/10.2147/CLEP.S66677

Select Risk of Bias Tool(s)

When you think of a critical appraisal in a systematic review and/or meta-analysis, think of assessing the risk of bias of included studies. The potential biases to consider will vary by study design. Therefore, risk of bias tool(s) should be selected based on the designs of included studies. If you include more than one study design, you will need more than one risk of bias tool. Whenever possible, select tools developed for a discipline relevant to your topic.


Risk of Bias Tools

Risk of bias tools are simply checklists used to consider bias specific to a study design, and sometimes discipline. 


Risk of Bias Toolsets

Risk of bias toolsets are a series of tools developed by the same group or organization, where each tool addresses a specific study design. The organization is usually discipline specific. Note that many toolsets also include a quality assessment tool for systematic reviews and/or meta-analyses, but these tools will not be useful during this stage, as existing reviews will not be folded into your synthesis.

Critical Appraisal Skills Programme (CASP) Checklists include tools for:

  • Randomized Controlled Trials
  • Qualitative Studies
  • Cohort Studies
  • Diagnostic Studies
  • Case-Control Studies
  • Economic Evaluations
  • Clinical Prediction Rules

National Institutes of Health (NIH) Study Quality Assessment Tools include tools for:

  • Controlled intervention studies
  • Observational cohort and cross-sectional studies
  • Case-control studies
  • Before-after (pre-post) studies without control
  • Case series studies

Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) includes tools for:

  • Cohort
  • Case-control
  • Cross-sectional
  • Conference abstracts

Joanna Briggs Institute (JBI) Manual for Evidence Synthesis includes critical appraisal tools for a range of study designs, found in the relevant chapters of the manual.


Risk of Bias Tool Repositories

Risk of bias tool repositories are curated lists of existing tools, much like the lists presented above. Although we update this guide with new tools as we find them, these repositories may contain additional resources.

Presenting Critical Appraisal Results

Risk of bias within each reference should be presented in a table like the one seen below. Studies are listed down the rows and the biases considered (what is addressed by the tool) across the columns, such that each row belongs to a study and each column belongs to a bias (or domain/category of biases).

Example - Graphic representation of risk of bias within each study 


It is also best practice to present bias across the included set of literature (seen below). Each bias or bias category is represented as a row, and each row is associated with a bar showing the percentage of the included literature that was rated as low risk, some risk, high risk, or unable to determine.

Example - Graphic representation of risk of bias across studies


The images above can be created using the robvis package, part of the metaverse collection of R packages for evidence synthesis. You can also create your own graphics without this software.
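If you build your own graphic, the underlying computation is simple: for each bias domain, tally what percentage of included studies received each judgment. The Python sketch below illustrates that summary step with hypothetical studies, domains, and judgments (all names are invented for the example), printing a crude text bar per domain; the same percentages would feed a stacked bar chart in any plotting tool.

```python
# Hypothetical domain-level judgments for four included studies
assessments = {
    "Study A": {"randomization": "low",  "missing data": "low",  "measurement": "some"},
    "Study B": {"randomization": "high", "missing data": "low",  "measurement": "low"},
    "Study C": {"randomization": "low",  "missing data": "some", "measurement": "low"},
    "Study D": {"randomization": "low",  "missing data": "low",  "measurement": "high"},
}
levels = ["low", "some", "high"]

def summarize(assessments):
    """Percentage of studies at each risk level, per bias domain."""
    domains = next(iter(assessments.values())).keys()
    n = len(assessments)
    return {
        d: {lvl: 100 * sum(ratings[d] == lvl for ratings in assessments.values()) / n
            for lvl in levels}
        for d in domains
    }

for domain, pct in summarize(assessments).items():
    # One symbol per 10% of studies: + low, ? some, - high
    bar = "".join({"low": "+", "some": "?", "high": "-"}[lvl] * int(p / 10)
                  for lvl, p in pct.items())
    print(f"{domain:<14} {bar}  " + "  ".join(f"{lvl}: {p:.0f}%" for lvl, p in pct.items()))
```

With the hypothetical data above, the randomization row would read 75% low / 0% some / 25% high, matching the "bias across studies" bar chart described earlier.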

Methodological Guidance

Cochrane Handbook - Part 2: Core Methods

Chapter 7: Considering bias and conflicts of interest among the included studies

  • 7.2 Empirical evidence of bias
  • 7.3 General procedures for risk-of-bias assessment
  • 7.4 Presentation of assessment of risk of bias
  • 7.5 Summary assessments of risk of bias 
  • 7.6 Incorporating assessment of risk of bias into analyses 
  • 7.7 Considering risk of bias due to missing results
  • 7.8 Considering source of funding and conflict of interest of authors of included studies 

Chapter 8: Assessing risk of bias in a randomized trial

  • 8.2 Overview of RoB 2
  • 8.3 Bias arising from the randomization process
  • 8.4 Bias due to deviations from intended interventions
  • 8.5 Bias due to missing outcome data 
  • 8.6 Bias in measurement of the outcome
  • 8.7 Bias in selection of the reported result
  • 8.8 Differences from the previous version of the tool

Chapter 25: Assessing risk of bias in a non-randomized study

SYREAF Tutorials

Step 3: Identifying eligible papers

Conducting systematic reviews of intervention questions II: Relevance screening, data extraction, assessing risk of bias, presenting the results and interpreting the findings. Sargeant JM, O’Connor AM. Zoonoses Public Health. 2014 Jun;61 Suppl 1:39-51. doi: 10.1111/zph.12124. PMID: 24905995

Campbell - MECCIR

C51. Assessing risk of bias / study quality (protocol & review / final manuscript)

C52. Assessing risk of bias / study quality in duplicate (protocol & review / final manuscript)

C53. Supporting judgements of risk of bias / study quality (review / final manuscript)

C54. Providing sources of information for risk of bias / study quality assessments (review / final manuscript)

C55. Differentiating between performance bias and detection bias (protocol & review / final manuscript)

C56. If applicable, assessing risk of bias due to lack of blinding for different outcomes (review / final manuscript)

C57. If applicable, assessing completeness of data for different outcomes (review / final manuscript)

C58. If applicable, summarizing risk of bias when using the Cochrane Risk of Bias tool (review / final manuscript)

C59. Addressing risk of bias / study quality in the synthesis (review / final manuscript)

C60. Incorporating assessments of risk of bias (review / final manuscript)

Reporting in Protocol and Final Manuscript

In the Protocol | PRISMA-P

Risk of Bias Individual Studies (Item 14)

...planned approach to assessing risk of bias should include the constructs being assessed and a definition for each, reviewer judgment options (high, low, unclear), the number of assessors...training, piloting, previous risk of bias assessment experience...method(s) of assessment (independent or in duplicate)...

Protocol for reporting results

"...summarise risk of bias assessments across studies or outcomes..."

 

Protocol for reporting impact on synthesis

"...describe how risk of bias assessments will be incorporated into data synthesis (that is, subgroup or sensitivity analyses) and their potential influence on findings of the review (Item 15c) in the protocol..."

In the Final Manuscript | PRISMA

For the critical appraisal stage, PRISMA requires specific items to be addressed in both the methods and results section.

Study Risk of Bias Assessment (Item 11; report in methods)

Essential Items
  • Specify the tool(s) (and version) used to assess risk of bias in the included studies.
  • Specify the methodological domains/components/items of the risk of bias tool(s) used.
  • Report whether an overall risk of bias judgment that summarised across domains/components/items was made, and if so, what rules were used to reach an overall judgment.
  • If any adaptations to an existing tool to assess risk of bias in studies were made (such as omitting or modifying items), specify the adaptations.
  • If a new risk of bias tool was developed for use in the review, describe the content of the tool and make it publicly accessible.
  • Report how many reviewers assessed risk of bias in each study, whether multiple reviewers worked independently (such as assessments performed by one reviewer and checked by another), and any processes used to resolve disagreements between assessors.
  • Report any processes used to obtain or confirm relevant information from study investigators.
  • If an automation tool was used to assess risk of bias in studies, report how the automation tool was used (such as machine learning models to extract sentences from articles relevant to risk of bias [88]), how the tool was trained, and details on the tool’s performance and internal validation.

Risk of Bias in Studies (Item 18; report in results)

Essential Items
  • Present tables or figures indicating for each study the risk of bias in each domain/component/item assessed and overall study-level risk of bias.
  • Present justification for each risk of bias judgment—for example, in the form of relevant quotations from reports of included studies.
Additional Items

If assessments of risk of bias were done for specific outcomes or results in each study, consider displaying risk of bias judgments on a forest plot, next to the study results, so that the limitations of studies contributing to a particular meta-analysis are evident (see Sterne et al [86] for an example forest plot).



We host a workshop on critical appraisal each fall; check out our latest recording!