Once you have gathered all unique references from your search, it's time to systematically determine what is relevant via (1) title and abstract screening, and (2) full-text review.
Using two reviewers per article at both stages and deciding ahead of time how disagreements will be resolved are two mechanisms for reducing the risk of bias in this phase of the systematic review.
There are two primary stages during eligibility screening: (1) title & abstract screening, and (2) full-text review. Prior to the initial screening, you must remove duplicate references; before full-text review, you must retrieve the full text. Each of these items is covered in the following tabs.
This graphic, adapted from the PRISMA 2020 flowchart and Figure 6.1 from the Collaboration for Environmental Evidence (CEE), depicts the eligibility screening process for systematic reviews or systematic maps.
At the end of this process, you will have a corpus, or body of literature, that is relevant to your review scope - this corpus moves on to the next steps in the review. You will also have a list of excluded references with reasons for exclusion, documentation of efforts to seek further information, a list of references for which decisions were still unclear, and interrater reliability score(s) (covered in the last tab in this box) for both the title and abstract and full-text reviews.
Before screening, remove duplicate records. A properly comprehensive search retrieves many duplicates because it covers several sources. Failing to remove duplicates before screening wastes time and effort reviewing the same material more than once.
Use (a) citation managers to remove duplicates manually, and/or (b) review managers to remove duplicates automatically.
(a) Citation managers can identify duplicate records - although removing duplicates requires a reviewer to manually determine whether each match is a true duplicate or not.
(b) Review software, like Covidence, will automatically identify and isolate duplicate records in a 'duplicates' folder upon importing. These duplicate records are removed from the main corpus that will undergo review, but are retained in the project so that reviewers can manually check them for accuracy. If a unique record is erroneously removed, such software allows the record to be moved back to the main corpus.
Note: it is very likely that not all duplicates will be identified during this process - duplicate records may also be found during the critical appraisal and data extraction phase(s). The PRISMA flowchart includes space to document duplicates identified at later stages, too.
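The duplicate-identification step described above can be sketched in code. This is a minimal, illustrative example only - it assumes each record is a dictionary with "doi" and "title" fields, and real review software uses much fuzzier matching (authors, year, journal, page numbers). The function names are hypothetical, not from any specific tool:

```python
# Minimal sketch of automated duplicate detection (illustrative only).
# Assumes each record is a dict with optional "doi" and "title" fields;
# real tools like Covidence use fuzzier matching on more fields.

def normalize_title(title):
    """Lowercase and strip punctuation/whitespace so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    """Split records into (unique, duplicates), keyed on DOI when present, else title."""
    seen = set()
    unique, duplicates = [], []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec.get("title", ""))
        if key in seen:
            duplicates.append(rec)  # retained for manual checking, not discarded
        else:
            seen.add(key)
            unique.append(rec)
    return unique, duplicates
```

Note that, like the review software described above, the sketch retains the duplicates rather than discarding them, so a reviewer can verify that each flagged record is a true duplicate.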
During the initial review, 2 reviewers screen the title and/or abstract of all unique records retrieved through the comprehensive search. This is a quick scan (only seconds spent on each reference) to determine whether a reference is relevant (and should be included) or is not relevant (and should be excluded) based on your predefined eligibility criteria.
It is best practice to also have a "maybe" or "can't tell" option for reviewers, as it will not be possible to tell if every article meets your eligibility criteria based on title and abstract alone. References identified as "maybe" should be treated as "included" at this stage and move on to the full-text review.
The review team will also need to track interrater reliability and agree on a method for resolving conflicts ahead of time.
At this stage, the team may not need to report any justification for the exclusion of articles in the (PRISMA) flowchart. However, it is good practice to keep track of reasons for exclusion at all stages of review.
It is possible that the full text is not accessible for all of your references. Be sure to report this in your PRISMA flowchart.
Some material will be Open Access, which means all of the resource (including full text) is accessible regardless of affiliation. However, the full text for many references is behind a paywall. Institutions like Virginia Tech purchase access to many databases, but no institution has access to all databases.
For material that your institution has access to, try using EndNote to quickly find the full text of included articles by following these steps:
Note: if you haven't already, you'll need to first link EndNote to your institution.
Unfortunately, most full text documents will not be retrievable through the automated tool in EndNote. Your team will need to manually retrieve the full text for all included references by searching the web. Tools like the LibKey Nomad browser extension make accessing material that is available to you much easier.
Sometimes, the full-text will be easy to find and access - sometimes, you will need to request the full-text through the Inter-Library Loan (ILL) system by logging into your VT account and filling out a brief form.
Only articles marked as "include" or "maybe" during the title and abstract screening move on to the full-text review. At this stage, 2 reviewers review the full text of each reference to ensure it aligns with the inclusion criteria and does not meet any of your exclusion criteria. As with the title & abstract screening, it is important to resolve any conflicts via a prespecified method during the full-text review.
At this stage, you'll be required to report the reasons for excluding references. If the full text and supplemental material do not include enough information to determine eligibility, it is best practice to contact the corresponding author(s) directly.
The rate of agreement between reviewers can be calculated using an interrater reliability (IRR) score. It is important to calculate and report this score so that readers of your review understand the risk of bias in determining what is relevant (or not).
The score should be calculated for each stage (e.g., title and abstract screening; full-text review) independently.
During the pilot, the IRR should help inform whether to adjust the review protocol and/or forms for a particular stage. For example, a low IRR indicates a problem with the protocol, the form, or the reviewers' understanding of either.
If you want to calculate IRR yourself, there are two common approaches:
(1) Percent agreement: IRR = # of references for which reviewers agreed / total # of references reviewed
(2) Cohen's kappa: IRR = (po – pe) / (1 – pe), where:
po: Relative observed agreement among raters
pe: Hypothetical probability of chance agreement
From: Statology - What is Inter-rater Reliability? (https://www.statology.org/inter-rater-reliability/)
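The two formulas above can be sketched in Python. The function names here are illustrative rather than from a statistics package; in practice you could also use an established implementation such as scikit-learn's cohen_kappa_score:

```python
# Sketch of both IRR calculations for two reviewers' include/exclude decisions.
# Function names are illustrative, not from any specific package.
from collections import Counter

def percent_agreement(r1, r2):
    """Simple IRR: fraction of references on which the two reviewers agreed."""
    agreed = sum(a == b for a, b in zip(r1, r2))
    return agreed / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: (po - pe) / (1 - pe), correcting for chance agreement."""
    n = len(r1)
    po = percent_agreement(r1, r2)                      # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    categories = set(r1) | set(r2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in categories)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical screening decisions for 5 references:
r1 = ["include", "exclude", "include", "exclude", "include"]
r2 = ["include", "exclude", "exclude", "exclude", "include"]
print(percent_agreement(r1, r2))  # 0.8 (agreed on 4 of 5)
```

Because kappa corrects for the agreement expected by chance alone, it is lower than the raw percent agreement for the same decisions (here about 0.62 versus 0.8), which is why it is often preferred for reporting.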
There are two basic methods for handling disagreement between the 2 (or more) reviewers.
Note: regardless of how your team plans to resolve conflicts during these stages, you must thoroughly document the process and transparently report how disagreements were resolved (PRISMA Items 8, 9, 11, 14, and 15).
Using a consensus-based process means the team gathers as a group to discuss each disagreement and reach consensus. Sometimes disagreements stem from unclear eligibility criteria, so the solution may be to clarify the criteria; because this kind of disagreement can arise across many independent references, clarifying the criteria resolves the conflict for all of them at once. In other cases, a single reference will be under dispute, in which case a unique decision may be required.
Regardless of how many references are impacted by a decision, be sure to document and report how you come to consensus and describe any deviations from the protocol.
The tie-breaker approach relies on one (or more) individual(s) who are not among the 2 reviewers responsible for reviewing all references at a stage (e.g., title and abstract screening, critical appraisal). The tie-breaker is typically someone more experienced in the field, often the PI.
In this approach, the tie-breaker simply makes a decision on their own without input from the rest of the team. Sometimes the tie-breaker will choose between responses provided by the 2 reviewers - sometimes the tie-breaker will provide a unique response that differs from the 2 reviewers.
Be sure to document and report how these decisions were made and describe any deviations from the protocol.
Chapter 4: Searching and Selecting Studies provides guidance for both the search (link) and the screening/review stages.
Sargeant JM, O'Connor AM. Conducting systematic reviews of intervention questions II: Relevance screening, data extraction, assessing risk of bias, presenting the results and interpreting the findings. Zoonoses Public Health. 2014 Jun;61 Suppl 1:39-51. doi:10.1111/zph.12124. PMID: 24905995.
C39. Making inclusion decisions in duplicate (protocol & review / final manuscript)
C40. Including studies without useable data (protocol & review / final manuscript)
C41. Documenting decisions about records identified (protocol & review / final manuscript)
C42. Collating multiple reports [from same study] (protocol & review / final manuscript)
...mechanism(s) that will be used to manage records and data throughout the review...
...process that will be used for selecting studies (such as two independent reviewers) through each phase of the review (screening, eligibility, and inclusion in meta-analysis)
For the screening and review stages, PRISMA requires specific items to be addressed in both the methods and results section.
One of the standard methods for reporting the screening and review for relevance stages is in a flowchart, such as the PRISMA flow diagram. In the PRISMA 2020 update, variations of the flow diagram were added to accommodate reporting of different approaches:
Below, we annotate the first PRISMA flowchart option, as it is the most commonly used. Note that material identified via other search methods tends not to go through the same screening and review process, and is thus added to the final corpus, joining only at the end.
In addition to the flow diagram, it is best practice to maintain a more detailed record using a citation manager, spreadsheet, etc., to easily locate specific articles within the corpus from initial retrieval through the screening process. These records can be shared as supplemental material upon publication. At minimum, you should archive this documentation in a personal location so that your team can address questions from readers in the future.