Journal Impact Factor. For some, it is one of the most important indicators of an article’s scientific credibility. For others, it is an aberration that measures nothing but “prestige”. One thing is certain: the IF has become a major headache for the scientific community, which has no idea how to deal with it. The problem is that despite the widespread conviction that the IF does not measure the value of a scientific article, it still shapes researchers’ reputations and careers.
This growing problem was reflected in the recently published “San Francisco Declaration on Research Assessment”, signed by 75 science organizations and 150 scientists, who expressed their opposition to judging scientists by the impact factor of the journals they publish in, rather than by the work they actually do. DORA states that:
“There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties. The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment.”
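For readers who have not seen the metric spelled out, the conventional two-year Journal Impact Factor referred to above can be sketched as follows (this is the standard textbook definition, not a formula quoted from DORA itself):

```latex
\mathrm{IF}_{y} \;=\; \frac{C_{y}(y-1) + C_{y}(y-2)}{P_{y-1} + P_{y-2}}
```

where \(C_{y}(y-k)\) is the number of citations received in year \(y\) to items the journal published in year \(y-k\), and \(P_{y-k}\) is the number of “citable items” (typically articles and reviews) the journal published in year \(y-k\). Note that this is an average over the whole journal, which is precisely why it says little about any individual article.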
Signatories are calling for:
- the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
- the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and
- the need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).
This is, of course, only one of the many voices in the discussion sweeping through the scientific community. It is worth mentioning the research conducted by George A. Lozano, Vincent Larivière and Yves Gingras. In their paper, “The weakening relationship between the Impact Factor and papers’ citations in the digital age”, they show that since the 1990s, when the Internet spread rapidly through the scientific community, the relationship between IF and citations began to weaken. According to Vincent Larivière:
“In 1990, 45% of the top 5% most cited articles were published in the top 5% highest impact factor journals. In 2009, this rate was only 36%. This means that the most cited articles are published less exclusively in high impact factor journals.”
Despite the many critical voices and the evidence against it, the impact factor is still doing well and remains one of the most influential criteria in establishing the prestige and career of a scientist. This situation has also had a very negative effect on the dissemination and development of Open Access publishing. Although OA may help scholars reach a wider audience and attract more citations (I wrote about this here and here), they usually prefer to publish (or try to publish) in the traditional way, in journals with Impact Factors, since they are very often assessed with reference to the IF. The situation seems to be changing with the OA policies introduced at the governmental level. However, this leads to a situation in which mandatory Open Access clashes with the IF requirement.
The underlying problem facing the scientific community is how to measure the value of scholars’ and researchers’ work. This issue must be solved by scientists themselves, and its solution will shape the future development of science.
Thomson Reuters is expected to release the 2013 edition of the Journal Citation Reports (JCR) in mid-June.