Digitization increasingly makes it possible to develop and apply article-level scientific impact metrics, rather than relying on journal-level measures of influence on the research community.
A Blog Article by Pablo Markin.
While journal impact factor metrics have been, and continue to be, used to assess the quality of scholars' publications, the primarily digital format in which most scholarly articles now appear, together with the article-level data that can be retrieved via the Internet, makes it possible to devise article-level impact measures. A relatively recent example is the Relative Citation Ratio (RCR), developed at the United States National Institutes of Health (NIH) and published in PLoS Biology, for the domain of biomedical research. Supporters of this article-level metric argue that it can increase the visibility of high-quality publications regardless of the impact factor ranking of the journals in which they appear, as the chart below illustrates. Consequently, it can also help emerging scientific journals, such as those in developing countries, improve their reputation even when they operate with limited financial support.
Readily available computing power allows the RCR to be calculated as the ratio between the target article's citation rate and the citation rate expected for its field, estimated from the network of articles co-cited alongside it, which arguably controls for field-specific citation practices while enabling cross-field comparability of the metric. A recent Open Access (OA) article that examined the performance of the RCR against expert opinion found no significant differences between the two, which indicates that article-level metrics can serve as fine-grained and relatively adequate measures of the academic quality of published articles, in contrast to journal-level metrics that may fail to capture the variable quality of the articles a scholarly journal publishes. Though the algorithms behind the RCR are more complex than those of traditional impact metrics, both the procedures and the underlying data are made freely accessible to the general public. While the developers and investigators of the RCR are careful to qualify the discriminating power of this metric for the assessment of article-level impact, it is an important step toward deploying multiple alternative influence measures in the field of science.
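The ratio described above can be illustrated with a minimal sketch. This is not the official iCite algorithm: it assumes, for simplicity, that the field citation rate is the mean citation rate of the papers co-cited with the target article, and it omits the benchmarking regression the NIH method applies; all function and variable names are illustrative.

```python
# Simplified sketch of the Relative Citation Ratio (RCR) idea.
# Assumption: the "field citation rate" is approximated by the mean
# citation rate of the target article's co-citation network.

def citation_rate(citations: int, years_since_publication: int) -> float:
    """Citations per year since publication."""
    return citations / max(years_since_publication, 1)

def relative_citation_ratio(article_rate: float,
                            co_citation_rates: list[float]) -> float:
    """Target article's citation rate divided by the mean rate of its
    co-citation network (a stand-in for the field citation rate)."""
    field_rate = sum(co_citation_rates) / len(co_citation_rates)
    return article_rate / field_rate

# Example: an article cited 40 times over 4 years, in a field where
# co-cited papers average 5 citations per year.
acr = citation_rate(40, 4)                        # 10.0 citations/year
rcr = relative_citation_ratio(acr, [4.0, 5.0, 6.0])
print(rcr)  # 2.0 — cited twice as often as its field's expectation
```

Because the denominator is derived from each article's own co-citation neighbourhood rather than a journal-wide average, an RCR of 1.0 means "cited at the rate typical for its field", which is what makes values comparable across fields.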
Not incidentally, PLoS, as an OA journal publisher, has championed this impact metric, since it can significantly raise the profile of new scientific journals, such as OA ones, that publish high-quality articles while still working to build up their impact factor. More significantly, funding bodies that support publishing in the OA format, for instance by covering article processing charges (APCs), have been adopting the RCR to assess the impact of the research they fund. As an alternative metric, the RCR creates a level playing field in which the reputation of individual journals holds no sway over the evaluation of the quality of scientific research output.
This also explains why large journal publishers increasingly offer alternative impact metrics, as these allow a more sophisticated assessment of the quality of published research.
Featured Image Credits: BCM112 hashtag network visualisation 24 05 2017 Week Twelve, May 24, 2017 | © Courtesy of Chris Moore.