Research Impact Metrics
- Home
- Publication Counts
- Journal Metrics
- Author Metrics & h-index
- Bibliometrics / Citations
- Altmetrics
- Usage Statistics
- Responsible Research Assessment
- Qualitative Evaluation
- Researcher Profiles
- Individual Impact & Engagement
- Research Impact & Intelligence Department
Research Impact Librarian
Journal-level Metrics
- Journal Impact Factor
- Source Normalized Impact per Paper (SNIP)
- Journal Acceptance Rate
- Cabell's Classification Index (CCI)
- Scimago Journal Rank
Definition | Clarivate Analytics states that the Journal Impact Factor (JIF) is "a measure of the frequency with which the 'average article' in a journal has been cited in a particular year or period. The annual JCR impact factor is a ratio between citations and recent citable items published." |
Calculation |
Journal X's 2005 impact factor = (citations in 2005 to all articles published by Journal X in 2003–2004) divided by (number of articles deemed to be "citable" by Clarivate Analytics that were published in Journal X in 2003–2004). Source: https://doi.org/10.1371/journal.pmed.0030291 |
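The ratio above can be sketched as a short calculation. This is a minimal illustration only; the citation and article counts are made-up numbers, not real journal data, and the actual counts used by Clarivate are proprietary.

```python
# Illustrative JIF calculation (hypothetical numbers, not real journal data).
# JIF for year Y = citations in Y to items published in Y-1 and Y-2,
# divided by the number of "citable" items published in Y-1 and Y-2.

def journal_impact_factor(citations_to_prior_two_years: int,
                          citable_items_prior_two_years: int) -> float:
    """Return the Journal Impact Factor as a simple ratio."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Journal X in 2005: say 450 citations in 2005 to its 2003-2004 articles,
# and 300 citable items published in 2003-2004.
jif_2005 = journal_impact_factor(450, 300)
print(jif_2005)  # 1.5
```

In practice the hard part is not the division but deciding which items count as "citable" in the denominator, a determination Clarivate makes and which affects the resulting JIF.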
Data sources |
The JIF is a proprietary metric calculated by Clarivate Analytics and sourced from journals indexed in their database, Web of Science, which subsequently indexes the corresponding JIFs for those journals in Journal Citation Reports (JCR). |
Platforms / Tools |
For non-Virginia Tech users, see https://clarivate.com/webofsciencegroup/solutions/journal-citation-reports/. Also integrated into other platforms by institutions and data providers. At Virginia Tech, JIF is integrated into Elements when publications are added to faculty members' profiles. |
Appropriate Use Cases |
Useful to compare the influence and prestige of journals within a single discipline. Can also be useful for library collection development decisions. |
Limitations and Inappropriate Use Cases |
Most journals indexed in JCR are in the STEM fields; many academics, especially in the arts and humanities, are therefore disadvantaged when the JIF is used as an indicator of performance or impact in their field. In addition, the primary research output for academics in the arts and humanities and, to some extent, the social sciences is not the journal article. More than 95% of journals indexed in JCR are in English, disadvantaging scholars who publish in other languages (especially in the arts and humanities, e.g., linguistics). According to Clarivate Analytics, "the larger the number of previously published articles, the more often a journal will be cited," which means that newer, emerging journals are disadvantaged. The JIF is not normalized, so it cannot be compared across disciplines: citation practices differ by field, with the highest citation frequency and density in the life sciences and the lowest in the humanities. |
Definition | SNIP measures the average citation impact of the publications of a journal. Unlike the well-known Journal Impact Factor, SNIP corrects for differences in citation practices between scientific fields, thereby allowing for more accurate between-field comparisons of citation impact. |
Calculation |
SNIP = RIP / DCP, where RIP is the Raw Impact per Paper and DCP is the Database Citation Potential.
For example, Journal X's SNIP value for 2017: RIP = 1,000 citations received in 2017 divided by 100 publications from 2014–2016 = 10. DCP = 5, the calculated average number of active references listed in publications belonging to Journal X's subject field. So SNIP = (1000 / 100) / 5 = 2, meaning Journal X is performing twice as well as expected. A SNIP value of 1 means a journal is performing as expected; a SNIP value of 0.5 means it is performing half as well as expected. Citations in the DCP calculation are normalized in order to correct for differences in citation practices between scientific fields. A detailed explanation is offered in the CWTS scientific paper.
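The worked example above can be expressed as a tiny calculation. The numbers (1,000 citations, 100 publications, DCP of 5) come from the illustrative example, not from any real journal, and the real DCP normalization done by CWTS is far more involved than a single given constant.

```python
# Illustrative SNIP calculation using the worked example above
# (1,000 citations, 100 publications, DCP of 5 -- hypothetical numbers).

def snip(citations: int, publications: int, dcp: float) -> float:
    """SNIP = RIP / DCP, where RIP = citations per publication."""
    rip = citations / publications   # Raw Impact per Paper
    return rip / dcp                 # normalize by Database Citation Potential

value = snip(citations=1000, publications=100, dcp=5)
print(value)  # 2.0 -> performing twice as well as expected
```

A result above 1 indicates citation impact above the field's expected level; below 1, under it.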
Data sources |
Available at CWTS Journal Indicators, a non-proprietary database maintained by CWTS at Leiden; uses Scopus data to calculate SNIP as well as P (the number of publications), IPP (the impact per publication, similar to the Impact Factor), and the percentage of self-citations. Only publications that are classified as article, conference paper, or review in Scopus are considered. Special types of sources (e.g., trade journals, scientific magazines, scientific journals with a strong national focus) are not included in the SNIP calculation, which means that over 13,000 journals in the Scopus database were excluded from SNIP calculations. Also see the Journal Indicators Methodology section on how the indicators are calculated, stability intervals, and guidelines for use and interpretation. |
Platforms / Tools |
Available at CWTS Journal Indicators. Also integrated into other platforms by institutions and data providers. At Virginia Tech, SNIP is integrated into Elements when publications are added to faculty members' profiles. |
Appropriate Use Cases |
Useful to compare the influence and citation impact of journals across disciplines. Can also be useful for library collection development decisions. |
Limitations and Inappropriate Use Cases |
Does not distinguish between document types, so journals that publish a substantial number of review articles (which tend to be cited substantially more frequently) have higher SNIP values. Does not correct for journal self-citations, although the percentage of self-citations is reported as a separate indicator on CWTS Journal Indicators. Less reliable for smaller journals that have a limited number of publications. Not very representative of the citation impact of individual publications, because it does not account for the skewness of citation distributions (e.g., a small number of publications receiving a high number of citations). Sensitive to outliers, so it is important to take into consideration the stability interval (published alongside SNIP in CWTS Journal Indicators): the wider the stability interval, the less reliable the SNIP value. For more information, visit the CWTS Journal Indicators Methodology. |
Can Apply To | Journal articles, typically in peer-reviewed publications |
Metric Definition | The percentage of manuscripts accepted for publication, compared to all manuscripts submitted. |
Metric Calculation | The percentage is calculated by dividing the number of manuscripts accepted for publication in a given year by the number of manuscripts submitted in that same year. |
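The calculation row above amounts to a single percentage. A minimal sketch, using made-up submission counts for illustration:

```python
# Illustrative acceptance-rate calculation (hypothetical submission counts).

def acceptance_rate(accepted: int, submitted: int) -> float:
    """Percentage of manuscripts accepted out of all submitted
    in the same year."""
    return 100 * accepted / submitted

# E.g., 120 manuscripts accepted out of 800 submitted in a given year.
print(acceptance_rate(120, 800))  # 15.0 (percent)
```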
Data Sources | Journal editors and publishers |
Appropriate Use Cases | The acceptance rate for a journal depends on the relative demand for publishing in that journal, the peer review processes in place, the mix of invited and unsolicited submissions, and time to publication, among other factors. As such, it may be a proxy for perceived prestige and demand relative to availability. |
Limitations | Many factors unrelated to quality can affect the acceptance rate of any particular journal. Sugimoto et al. (2013) found statistically significant differences in article acceptance rates related to discipline, country affiliation of the editor, and number of reviewers per article. Acceptance rates were negatively correlated with citation-based indicators and positively correlated with journal age. Open access journals had statistically significantly higher acceptance rates than subscription-only journals. |
Inappropriate Use Cases | The acceptance rate should not be used as a measure of the quality of a particular manuscript. Manuscript rejection may result from other factors, such as a mismatch between the journal's focus, audience, or format and that of the manuscript. Lower acceptance rates should not be assumed to be the result of higher standards in peer review, according to Haensly et al. (2008). Acceptance rate should not be used as a comparative metric across fields or disciplines, according to Haensly et al. (2008) and Sugimoto et al. (2013). |
Available Metric Sources | Journal editors, Journal websites, Cabell’s Directories of Publishing Opportunities, and the Modern Language Association Directory of Periodicals |
Transparency | The data underlying acceptance rates are proprietary. Although some journals make their acceptance rate publicly available, many do not. |
Website | n/a |
Timeframe | Varies |
This table is taken directly from the Metrics Toolkit, CC BY.
Definition |
According to Cabell's, the Cabell's Classification Index (CCI) measures a journal's relevance and influence over time relative to other journals within a discipline. |
Calculation |
The CCI is calculated using the average citation rate across three years, standardized within a discipline or topic. A journal can have multiple CCIs if it encompasses multiple disciplines or multiple topics within those disciplines. |
Data sources |
It uses Scopus as its data source where available, to measure influence and quality in a subject area. |
Platforms / Tools |
It is only available on Cabell's Journalytics. |
Appropriate Use Cases |
It is useful to compare the influence and prestige of journals within a single discipline. Can also be useful for library collection development decisions. |
Limitations and Inappropriate Use Cases |
Imprecise categorization can mislead authors; certain journals are classified in disciplines far from what their authors would consider their own. As this is a journal-level metric, it is inappropriate to evaluate individual articles based on this metric alone. Emerging journals will likely have lower CCIs, but that does not mean they are lower quality or have lower standards for publishing. Because Scopus is the data source for the CCI, not all journals indexed in Cabell's will have CCIs. In addition, the literature in Scopus is mainly English-language, Global North, STEM-focused, and journal-based, although Scopus has better coverage of the social sciences and engineering than Web of Science. |
Definition |
The SCImago Journal Rank (SJR) is intended to be a measure of a journal's prestige. |
Calculation |
The SJR (or, more specifically, the SJR2, which is the newer version of the SJR) is calculated by taking the weighted number of citations in a given year to citable publications published in the journal within the 3 preceding years, divided by the total number of citable publications published in the journal within the 3 preceding years. Citations from publications published in more prestigious journals will receive greater weight than those from less prestigious journals, with "prestige" value being dependent on field. The SJR additionally takes into account journal "closeness" via co-citation networks, with citations from "closer" journals receiving greater weight. |
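The full SJR2 computation is iterative and PageRank-like: prestige weights are solved over the whole citation network. As a rough sketch of the final ratio only, under the assumption that per-citation prestige/closeness weights are already given (all values here are made-up for illustration):

```python
# Simplified, hypothetical sketch of the final SJR ratio. The real SJR2
# computes prestige weights iteratively over the entire citation network;
# here each citation's weight is simply supplied as a made-up input.

def sjr_ratio(weighted_citations: list[float], citable_pubs_3yr: int) -> float:
    """Weighted citations in a given year to the journal's citable
    publications from the 3 preceding years, divided by the number of
    those citable publications."""
    return sum(weighted_citations) / citable_pubs_3yr

# Citations carry weights reflecting the citing journal's prestige and its
# "closeness" to the cited journal via co-citation (illustrative values:
# prestigious/close sources > 1, peripheral sources < 1).
weights = [1.8, 1.8, 0.6, 1.0, 0.9]  # five citations, each pre-weighted
print(sjr_ratio(weights, 4))  # 1.525
```

The weighting is the whole point of the metric: the same five citations would all count as 1.0 in an unweighted ratio like the JIF.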
Data sources |
Scopus is the data source for SJR. |
Platforms / Tools |
Scopus (select 'Sources' from the top-right corner to search) and the SCImago Journal and Country Rank portal display the SJR. |
Appropriate Use Cases |
Comparing journals, specifically within fields, to determine their prestige and influence. |
Limitations and Inappropriate Use Cases |
May be skewed by citation outliers (e.g., a few highly cited articles). May disadvantage interdisciplinary journals, because citations from within a journal's co-citation network are weighted more heavily. It correlates with the Journal Impact Factor, making it largely redundant with the JIF. The calculation is highly complex and difficult to replicate, especially because of the large dataset required. |
- Last Updated: Feb 10, 2025 3:01 PM
- URL: https://guides.lib.vt.edu/research-metrics