
Research Impact Metrics

A guide for those wanting to use research impact metrics for evaluation, analytics, and reviews, e.g., promotion & tenure.

Journal-level Metrics

Definition

Clarivate Analytics states that the Journal Impact Factor (JIF) is "a measure of the frequency with which the 'average article' in a journal has been cited in a particular year or period. The annual JCR impact factor is a ratio between citations and recent citable items published."

Calculation

Journal X's 2005 impact factor =

Citations in 2005 to all articles published by Journal X in 2003–2004

divided by

Number of articles deemed to be “citable” by Clarivate Analytics that were published in Journal X in 2003–2004

Source: https://doi.org/10.1371/journal.pmed.0030291 
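The arithmetic is a simple ratio; the sketch below uses hypothetical counts (in practice, the citation and item counts come from Clarivate's Web of Science data):

```python
# Hypothetical counts for Journal X (illustrative only; real counts come from Web of Science)
citations_2005 = 450   # citations in 2005 to articles published in 2003-2004
citable_items = 150    # items from 2003-2004 deemed "citable" by Clarivate Analytics

jif_2005 = citations_2005 / citable_items
print(f"Journal X 2005 JIF: {jif_2005:.1f}")  # Journal X 2005 JIF: 3.0
```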

Data sources

The JIF is a proprietary metric calculated by Clarivate Analytics from journals indexed in its Web of Science database; the resulting JIFs are published in Journal Citation Reports (JCR).

Platforms / Databases

Journal Citation Reports

For non-Virginia Tech users, see https://clarivate.com/webofsciencegroup/solutions/journal-citation-reports/

Also integrated into other platforms by institutions and data providers.

At Virginia Tech, JIF is integrated into Elements when publications are added to faculty members' profiles. 

Appropriate uses

Useful to compare the influence and prestige of journals within a single discipline

Can also be useful for library collection development decisions.

Limitations and cautionary uses

Most journals indexed in JCR are in the STEM fields; as a result, many academics, especially in the arts and humanities, are disadvantaged when the JIF is used as an indicator of performance or impact in their field.

In addition, for academics in the arts and humanities, and to some extent the social sciences, the journal article is not the primary research output.

More than 95% of journals indexed in JCR are in English, disadvantaging scholars who publish in other languages (especially in the arts and humanities, e.g., linguistics). 

According to Clarivate Analytics, "the larger the number of previously published articles, the more often a journal will be cited," which means that emerging, newer journals are disadvantaged. 

This metric is not normalized, which means it cannot be compared across disciplines. Citation practices differ by field: citation frequency and density are highest in the life sciences and lowest in the humanities.

Definition

SNIP (Source Normalized Impact per Paper) measures the average citation impact of the publications of a journal. Unlike the well-known Journal Impact Factor, SNIP corrects for differences in citation practices between scientific fields, thereby allowing for more accurate between-field comparisons of citation impact.

Calculation

SNIP = RIP / DCP

Raw Impact per Paper (RIP)

  • RIP is the average number of times the journal's publications in the three preceding years were cited in the year of analysis. It reflects the average citation impact of the journal, without correcting for differences in citation practices between scientific fields. 

Database Citation Potential (DCP)

  • The average DCP value across all journals in the database equals one. In essence, it is:

    the total number of active references in the journal's subject field
          divided by
    the number of publications in the journal's subject field,

    i.e., the average number of active references per publication in that field.

  • Essentially, the longer the reference list of a citing publication, the lower the value of a citation originating from that publication. 

For example, Journal X SNIP value for 2017:

(1000 citations received in 2017 divided by 100 publications between 2014-2016) (RIP)

divided by

 5 as the calculated average number of active references listed in publications belonging to the Journal X's subject field (DCP)

So, it looks like: (1000 / 100) / 5 = 2. A SNIP value of 2 means Journal X is performing twice as well as expected; a SNIP value of 1 means it is performing as expected; a SNIP value of 0.5 means it is performing half as well as expected. 
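The worked example above can be sketched in a few lines of Python (the figures are the illustrative ones from the example, not real journal data):

```python
# Figures from the worked example: Journal X, 2017
citations_2017 = 1000          # citations received in 2017
publications_2014_2016 = 100   # publications in 2014-2016
dcp = 5                        # avg. active references per publication in the field

rip = citations_2017 / publications_2014_2016   # Raw Impact per Paper
snip = rip / dcp                                # Source Normalized Impact per Paper
print(f"RIP = {rip}, SNIP = {snip}")  # RIP = 10.0, SNIP = 2.0
```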

Citations in the DCP calculation are normalized in order to correct for differences in citation practices between scientific fields. A detailed explanation is offered in the CWTS scientific paper.

Data sources

Available at CWTS Journal Indicators, a non-proprietary database maintained by CWTS at Leiden University; it uses Scopus data to calculate SNIP as well as P (the number of publications), IPP (the impact per publication, similar to the Impact Factor), and the percentage of self-citations. 

Only publications that are classified as article, conference paper, or review in Scopus are considered. 

Special types of sources (e.g., trade journals, scientific magazines, and scientific journals with a strong national focus) are not included in the SNIP calculation, which means that over 13,000 journals in the Scopus database were excluded from SNIP calculations.

Also see Journal Indicators Methodology section on how the indicators are calculated, stability intervals, and guidelines for use / interpretation. 

Platforms / Databases

CWTS Journal Indicators

Also integrated into other platforms by institutions and data providers.

At Virginia Tech, SNIP is integrated into Elements when publications are added to faculty members' profiles. 

Appropriate uses

Useful to compare the influence and citation impact of journals across disciplines

Can also be useful for library collection development decisions.

Limitations and cautionary uses

Does not distinguish between document types, so journals that publish a substantial number of review articles (which tend to be cited substantially more frequently) have higher SNIP values. 

Does not correct for journal self citations, but the percentage of self citations is reported as a separate indicator on CWTS Journal Indicators. 

Less reliable for smaller journals that have a limited number of publications.

Not very representative of the citation impact of individual publications, because it does not account for the skewness of citation distributions (e.g., a small number of publications receiving a high number of citations). 

Sensitive to outliers; important to take into consideration the stability interval (published alongside SNIP in CWTS Journal Indicators). The wider the stability interval, the less reliable the SNIP value. 

For more information, visit the CWTS Journal Indicators Methodology.

Can Apply To: Journal articles, typically in peer-reviewed publications
Metric Definition: The percentage of manuscripts accepted for publication, compared to all manuscripts submitted.
Metric Calculation: The percentage is calculated by dividing the number of manuscripts accepted for publication in a given year by the number of manuscripts submitted in that same year.
Data Sources: Journal editors and publishers
Appropriate Use Cases: The acceptance rate for a journal depends on the relative demand for publishing in that journal, the peer review processes in place, the mix of invited and unsolicited submissions, and the time to publication, among other factors. As such, it may be a proxy for perceived prestige and demand as compared to availability.
Limitations: Many factors unrelated to quality can affect the acceptance rate of a particular journal. Sugimoto et al. (2013) found statistically significant differences in article acceptance rates related to discipline, country affiliation of the editor, and number of reviewers per article. Acceptance rates were negatively correlated with citation-based indicators and positively correlated with journal age. Open access journals had statistically significantly higher acceptance rates than subscription-only journals.
Inappropriate Use Cases: The acceptance rate should not be used as a measure of the quality of a particular manuscript. Manuscript rejection may result from other factors, such as a mismatch between the journal's focus, audience, or format and that of the manuscript. Lower acceptance rates should not be assumed to result from higher standards in peer review, according to Haensly et al. (2008). Acceptance rate should not be used as a comparative metric across fields or disciplines, according to Haensly et al. (2008) and Sugimoto et al. (2013).
Available Metric Sources: Journal editors, journal websites, Cabell's Directories of Publishing Opportunities, and the Modern Language Association Directory of Periodicals
Transparency: The data underlying acceptance rates are proprietary. Although some journals make their acceptance rate publicly available, many do not.
Website: n/a
Timeframe: Varies

This table is taken directly from the Metrics Toolkit, CC BY.  
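The calculation described above is a simple ratio; here is a minimal sketch using hypothetical submission and acceptance counts:

```python
# Hypothetical yearly counts, as would be reported by a journal's editors
submitted = 800   # manuscripts submitted in the year
accepted = 120    # manuscripts accepted in that same year

acceptance_rate = accepted / submitted * 100
print(f"Acceptance rate: {acceptance_rate:.0f}%")  # Acceptance rate: 15%
```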
