
Systematic Reviews and Meta-Analyses: Eligibility Screening

This guide is designed to help novice and experienced review teams navigate the systematic review and/or meta-analysis process.

Once you have gathered all unique references from your search, it is time to systematically determine which of those references are relevant to your defined scope via (1) Title and Abstract Screening and (2) Full Text Review.

Using two reviewers per article at both stages and deciding ahead of time how disagreements will be resolved are two mechanisms for reducing the risk of bias in this phase of the systematic review.


Screening & Review Steps

Eligibility Screening Process

There are two primary stages during eligibility screening: (1) Title & Abstract Screening, and (2) Full Text Review. Prior to the initial screening, you must remove duplicate references; before full text review, you must find the full text. Each of these items is covered in the following tabs.

Illustration of the eligibility screening process (graphical overview); the process is also described in the text on this page.

This graphic is adapted from the PRISMA 2020 flowchart and the Collaboration for Environmental Evidence (CEE) Figure 6.1, "The eligibility screening process for systematic reviews or systematic maps."


Intended Outcomes

At the end of this process, you will have a corpus, or body of literature, that is relevant to your review scope; this corpus moves on to the next steps in the review. You will also have:

  • a list of excluded references, with reasons for exclusion;
  • documentation of efforts to seek further information, and a list of references for which decisions were still unclear; and
  • interrater reliability score(s) (covered in the last tab in this box) for both the title and abstract screening and the full text review.

Step 1a: Remove Duplicate Records

Before screening, remove duplicate records. A properly comprehensive search across several sources will return many duplicates, and failing to remove them before screening means spending unnecessary time and effort reviewing the same material more than once.

Use (a) citation managers to remove duplicates manually, and/or (b) review managers to remove duplicates automatically.

(a) Citation managers are able to identify duplicate records, although the removal process requires a reviewer to manually determine whether each match is a true duplicate or not.

(b) Review software, like Covidence, will automatically identify and isolate duplicate records in a 'duplicates' folder upon import. These duplicate records are removed from the main corpus that will undergo review, but are retained in the project so that reviewers can manually check them for accuracy. If a unique record is erroneously removed, the software allows the record to be moved back into the main corpus.
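
If your team prefers to script this step (or to double-check a tool's output), the minimal Python sketch below illustrates the kind of matching logic these tools apply. It assumes your records have been exported to a CSV file; the file name and the "doi"/"title" column names are placeholders for your own export.

    # Minimal sketch: flag likely duplicate records by DOI, falling back to a
    # normalized title match. Adjust the file and column names to your export.
    import csv
    import re

    def normalize(text):
        """Lowercase and strip punctuation so near-identical titles compare equal."""
        return re.sub(r"[^a-z0-9]+", " ", (text or "").lower()).strip()

    def deduplicate(records):
        seen = set()
        unique, duplicates = [], []
        for rec in records:
            key = (rec.get("doi") or "").lower().strip() or normalize(rec.get("title"))
            if key and key in seen:
                duplicates.append(rec)   # keep for manual verification, as review software does
            else:
                seen.add(key)
                unique.append(rec)
        return unique, duplicates

    with open("search_results.csv", newline="", encoding="utf-8") as f:
        unique, duplicates = deduplicate(list(csv.DictReader(f)))
    print(f"{len(unique)} unique records, {len(duplicates)} flagged as possible duplicates")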

Note: it is very likely that not all duplicates will be identified during this process - duplicate records may also be found during the critical appraisal and data extraction phase(s). The PRISMA flowchart includes space to document duplicates identified at later stages, too. 

Step 1b: Title & Abstract Screening

During the initial review, 2 reviewers screen the title and/or abstract of all unique records retrieved through the comprehensive search. This is a quick scan (only seconds spent on each reference) to determine whether a reference is relevant (and should be included) or is not relevant (and should be excluded) based on your predefined eligibility criteria.

It is best practice to also have a "maybe" or "can't tell" option for reviewers, as it will not be possible to tell whether every article meets your eligibility criteria based on the title and abstract alone. References identified as "maybe" should be treated as "included" at this stage and move on to the full-text review.

The review team will also need to track interrater reliability and agree on a method for resolving conflicts ahead of time.

At this stage, the team may not need to report justifications for the exclusion of articles in the (PRISMA) flowchart. However, it is good practice to keep track of reasons for exclusion at all stages of review.
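
For teams that track decisions in a spreadsheet or their own scripts rather than review software, the sketch below shows one way to combine two reviewers' votes at this stage. The vote labels are assumptions, and whether a "maybe" paired with an "exclude" counts as a conflict is a protocol choice your team should make ahead of time; here it is routed to conflict resolution.

    # Minimal sketch: combine two reviewers' title/abstract votes for one record.
    # The vote labels ("include", "exclude", "maybe") are assumptions; adapt them
    # to your own screening form.

    def combine(vote_a, vote_b):
        """Return the screening outcome for one record given two reviewers' votes."""
        votes = {vote_a, vote_b}
        if votes == {"exclude"}:
            return "exclude"
        if votes <= {"include", "maybe"}:
            # "maybe" is treated as included at this stage and moves to full-text review
            return "include (full-text review)"
        # one vote to exclude, one to include/maybe: resolve per your prespecified method
        return "conflict (consensus or tie-breaker)"

    print(combine("include", "maybe"))     # include (full-text review)
    print(combine("include", "exclude"))   # conflict (consensus or tie-breaker)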

Step 2a: Find Full Text

It is possible that the full text will not be accessible for all of your references; be sure to report this in your PRISMA flowchart.

Some material will be Open Access, meaning the entire resource (including the full text) is accessible regardless of affiliation. However, the full text of many references is behind a paywall. Institutions like Virginia Tech purchase access to many databases, but no institution has access to all of them.


Find Full Text (automated)

For material that your institution has access to, try using EndNote to quickly find the full text of included articles by following these steps:

  1. Upload the RIS file of only included and "maybe" records (note: this RIS file can be downloaded easily from review software like Covidence)
  2. In the newly created folder containing your included and maybe records, select a reference and use the hotkey ctrl + a to select all
  3. Right click and select "Find Full Text PDFs" (or a variation of this)

Note: if you haven't already, you'll need to first link EndNote to your institution.


Find Full Text (manually), Inter-Library Loan (ILL)

Unfortunately, most full text documents will not be retrievable through the automated tool in EndNote. Your team will need to manually retrieve the full text of the remaining included references by searching the web. Tools like the LibKey Nomad browser extension make it much easier to access material that is available to you.

Sometimes the full text will be easy to find and access; other times, you will need to request it through the Inter-Library Loan (ILL) system by logging into your VT account and filling out a brief form.


Finding Translations for publications not in your team's language(s)

Often, teams will exclude articles that are not published in the language(s) of those on the team. Unfortunately, this tends to reproduce existing biases favoring English-language publications. A 2019 article by Jackson, et al., concludes that "Google Translate is a viable, accurate tool for translating non–English-language trials for the purpose of conducting systematic reviews."

Of course, accuracy would likely vary if this study were repeated with articles from different disciplines, languages, or publication types, but it is a great start and better than simply excluding other languages altogether.

Step 2b: Full Text Review

Only articles included, or categorized as "maybe," during the title and abstract screening move on to the full-text review. At this stage, 2 reviewers review the full text of each reference to ensure it aligns with the inclusion criteria and does not meet any of your exclusion criteria. As with the title & abstract screening, it is important to resolve any conflicts via a prespecified method during the full text review.

At this stage, you'll be required to report the reasons for excluding references. If the full text and supplemental material do not include enough information to determine eligibility, it is best practice to contact the corresponding author(s) directly.

Any references included at this stage will undergo citation searching and move on to the critical appraisal and data extraction phase.

Interrater Reliability to measure disagreement

The rate of agreement between reviewers can be calculated using an interrater reliability (IRR) score. It is important to calculate and report this score so that readers of your review understand the risk of bias in determining what is relevant (or not). 

The score should be calculated for each stage (e.g., title and abstract screening; full-text review) independently.

During the pilot, the IRR should help inform whether to adjust the review protocol and/or forms for a particular stage. For example, a low IRR is an indicator of a problem with the protocol, the form, or the reviewers' understanding of them.


Calculating IRR

Covidence automatically calculates the IRR for title and abstract screening and full-text reviews, including Cohen's Kappa. See their knowledge base for more information.

If you want to calculate IRR yourself, there are two approaches:

Percentage Agreement

IRR = # of references for which reviewers agreed / total # of references reviewed

 

Cohen's Kappa

IRR = (p_o – p_e) / (1 – p_e)

p_o: Relative observed agreement among raters

p_e: Hypothetical probability of chance agreement

 

From: Statology - What is Inter-rater Reliability? (https://www.statology.org/inter-rater-reliability/)
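
If you want to script these calculations yourself, the short Python sketch below implements both formulas; the example decision lists are invented for illustration.

    # Minimal sketch: percentage agreement and Cohen's kappa for two reviewers who
    # screened the same records, in the same order. Decisions are simple strings.
    from collections import Counter

    def percent_agreement(a, b):
        """Share of records on which the two reviewers made the same decision (p_o)."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def cohens_kappa(a, b):
        """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
        n = len(a)
        p_o = percent_agreement(a, b)
        counts_a, counts_b = Counter(a), Counter(b)
        p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(a) | set(b))
        return (p_o - p_e) / (1 - p_e)

    reviewer_1 = ["include", "exclude", "exclude", "include", "exclude"]
    reviewer_2 = ["include", "exclude", "include", "include", "exclude"]
    print(percent_agreement(reviewer_1, reviewer_2))          # 0.8
    print(round(cohens_kappa(reviewer_1, reviewer_2), 2))     # 0.62

If your team already uses scikit-learn, its cohen_kappa_score function provides an existing implementation of the same statistic.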


Resolve disagreement

There are two basic methods for handling disagreement between the 2 (or more) reviewers. 

Note: regardless of how your team plans to resolve conflicts during these stages, you must thoroughly document the process and transparently report how disagreements were resolved (PRISMA Items 8, 9, 11, 14, and 15).

 

Coming to Consensus

Using a consensus-based process means the team will gather as a group to discuss each disagreement and come to a consensus on it. Sometimes disagreements will stem from unclear eligibility criteria, so the solution may be to clarify the criteria; this kind of disagreement may arise while reviewing multiple independent references, in which case the clarification resolves the conflict across all of them. In other cases, a single reference will be under dispute, and a unique decision may be required.

Regardless of how many references are impacted by a decision, be sure to document and report how you come to consensus and describe any deviations from the protocol.

 

Using a Tie-Breaker

The tie-breaker approach relies on one (or possibly more) individual(s) who are not among the 2 reviewers responsible for reviewing all references at a given stage (e.g., title and abstract screening, critical appraisal). The tie-breaker is typically someone more experienced in the field, often the PI.

In this approach, the tie-breaker simply makes a decision on their own without input from the rest of the team. Sometimes the tie-breaker will choose between responses provided by the 2 reviewers - sometimes the tie-breaker will provide a unique response that differs from the 2 reviewers. 

Be sure to document and report how these decisions were made and describe any deviations from the protocol.

Methodological Guidance

Cochrane Handbook - Part 2: Core Methods

Chapter 4: Searching and Selecting Studies provides guidance for both the search and the screening/review stages.

  • 4.6 Selecting Studies
    • 4.6.1 Studies (not reports) as unit of interest
    • 4.6.2 Identifying multiple reports from the same study
    • 4.6.3 A typical process for selecting studies
    • 4.6.4 Implementation of the selection process
    • 4.6.5 Selecting 'excluded studies' 
    • 4.6.6 Software support for selecting studies

SYREAF Protocol

Step 3: Identifying eligible papers

Conducting systematic reviews of intervention questions II: Relevance screening, data extraction, assessing risk of bias, presenting the results and interpreting the findings. Sargeant JM, O’Connor AM. Zoonoses Public Health. 2014 Jun;61 Suppl 1:39-51. doi: 10.1111/zph.12124. PMID: 24905995

Campbell - MECCIR

C39. Making inclusion decisions in duplicate (protocol & review / final manuscript)

C40. Including studies without useable data (protocol & review / final manuscript)

C41. Documenting decisions about records identified (protocol & review / final manuscript)

C42. Collating multiple reports [from same study] (protocol & review / final manuscript)

Reporting in Protocol and Final Manuscript

In the Protocol | PRISMA-P

Data Management (Item 11a)

...mechanism(s) that will be used to manage records and data throughout the review...


Selection Process (Item 11b)

...process that will be used for selecting studies (such as two independent reviewers) through each phase of the review (screening, eligibility, and inclusion in meta-analysis)

In the Final Manuscript | PRISMA

For the screening and review stages, PRISMA requires specific items to be addressed in both the methods and results section.

Selection Process (Item 8; report in methods)

Essential Items
  • Report how many reviewers screened each record (title/abstract) and each report retrieved, whether multiple reviewers worked independently (that is, were unaware of each other’s decisions) at each stage of screening or not (for example, records screened by one reviewer and exclusions verified by another), and any processes used to resolve disagreements between screeners (for example, referral to a third reviewer or by consensus).
  • Report any processes used to obtain or confirm relevant information from study investigators.
  • If abstracts or articles required translation into another language to determine their eligibility, report how these were translated (for example, by asking a native speaker or by using software programs)

Study Selection (Item 16; report in results)

Essential Items
  • Report, ideally using a flow diagram, the number of: records identified; records excluded before screening (for example, because they were duplicates or deemed ineligible by machine classifiers); records screened; records excluded after screening titles or titles and abstracts; reports retrieved for detailed evaluation; potentially eligible reports that were not retrievable; retrieved reports that did not meet inclusion criteria and the primary reasons for exclusion (such as ineligible study design, ineligible population); and the number of studies and reports included in the review. If applicable, authors should also report the number of ongoing studies and associated reports identified (Item 16a)
  • If the review is an update of a previous review, report results of the search and selection process for the current review and specify the number of studies included in the previous review. An additional box could be added to the flow diagram indicating the number of studies included in the previous review (Item 16a)
  • If applicable, indicate in the PRISMA flow diagram how many records were excluded by a human and how many by automation tools (Item 16a)
  • Cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded (Item 16b)

The PRISMA Flowchart

One of the standard methods for reporting the screening and relevance review stages is a flowchart, such as the PRISMA flow diagram. In the PRISMA 2020 update, variations of the flow diagram were added to accommodate the reporting of different approaches.

Below, we annotate the first PRISMA flowchart option, as it is the most often used. Note that material identified via other search methods tends not to go through the same screening and review process and would thus be added to the final corpus, joining it only at the end.

annotated PRISMA 2020 flowchart


In addition to the flow diagram, it is best practice to maintain a more detailed record (in a citation manager, spreadsheet, etc.) so that specific articles can be located within the corpus from initial retrieval through the screening process. These records can be shared as supplemental material upon publication. At minimum, you should archive this documentation in a location your team controls so that you can address questions from readers in the future.
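
As an illustration of how such a record can feed the flow diagram, the sketch below tallies PRISMA counts from a hypothetical one-row-per-record screening log; the file name, column names, and status labels are assumptions to adapt to however your team tracks decisions.

    # Minimal sketch: tally PRISMA flow-diagram counts from a one-row-per-record
    # screening log. The file name, column names ("status", "exclusion_reason"),
    # and status labels are hypothetical; adapt them to your own tracking system.
    import csv
    from collections import Counter

    with open("screening_log.csv", newline="", encoding="utf-8") as f:
        records = list(csv.DictReader(f))

    status_counts = Counter(r["status"] for r in records)
    print("Records identified:           ", len(records))
    print("Duplicates removed:           ", status_counts["duplicate"])
    print("Excluded at title/abstract:   ", status_counts["excluded_title_abstract"])
    print("Reports not retrieved:        ", status_counts["not_retrieved"])
    print("Excluded at full-text review: ", status_counts["excluded_full_text"])
    print("Included in review:           ", status_counts["included"])

    # Reasons for exclusion at full text (needed for PRISMA Item 16b)
    reasons = Counter(r["exclusion_reason"] for r in records
                      if r["status"] == "excluded_full_text")
    for reason, n in reasons.most_common():
        print(f"  {reason}: {n}")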

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License unless otherwise noted on a specific visual or other content item.