A recent qualitative study by Spezi et al. (2018), published Open Access in the otherwise closed-access Journal of Documentation, argues that, despite the prevalent policy of checking manuscripts for academic soundness or rigour only, Open Access mega-journals apply more stringent criteria in their article review practices, such as novelty or originality, which converge with those employed by traditional journals in their peer review routines.
A Blog Article by Pablo Markin.
To arrive at their findings, Spezi and colleagues (2018) conducted semi-structured interviews with 31 editorial and management representatives of 16 publishing houses, covering both Open Access mega-journals and other outlets. A content analysis of the interview transcripts allowed them to compare responses from commercial, society and non-profit publishers and to assess the differences and performance of pre-publication and post-publication peer review models.
A major insight into the inner workings of mega-journals at which Spezi et al. (2018) arrive is that, whereas Open Access mega-journals are premised on serving as gateways for publications that meet minimum soundness criteria, their actual peer review practices deviate significantly from this stripped-down model. More specifically, the traditional peer review workflow can generally be expected to involve initial screening procedures, such as plagiarism checks and clarity-of-expression assessments, before other aspects of article manuscripts are taken into account. Thus, peer review checklists usually include originality or novelty, scientific importance, article scope, scholarly relevance and academic soundness among their criteria.
In this respect, Open Access journals whose scholarly review processes are defined by soundness or rigour checks only, such as PeerJ, could be expected to be compatible with the principles behind the post-publication review approach. This approach holds that, as long as a scientific paper meets basic requirements, once it is published it is up to the relevant scholarly community to decide whether the article has novelty, significance or relevance. However, this has not been the case: according to Spezi et al. (2018), widely accepted peer review practices favour the pre-publication model and tend to prevail at most of the investigated Open Access mega-journals.
In other words, as editors decide which submissions are worthy of publication, they are inclined to go beyond the soundness-only model. Moreover, where referee reports are produced at Open Access mega-journals, their manuscript evaluation practices have been found by Spezi and others (2018) to be similar to those of their conventional counterparts. At the same time, given the lean organizational structures of these often large-scale Open Access journals, they may not necessarily apply their review practices in a consistent manner.
Consequently, whereas Open Access mega-journals may be perceived as controversial due to their review procedures, this study shows that their editorial and referee practices tend to be indirectly guided by the evaluation factors that their more traditional counterparts evince. This has also been found to be behind the limited application of post-publication review practices, despite the presence of technological means for them, e.g., comments and article-level impact or citation metrics.
The minimal initial reviews that these practices entail are perceived as at variance with conventional quality assurance procedures, also due to uncertainty about the validity, and the relatively underdeveloped state, of post-publication assessment criteria.
Written by Pablo Markin
Edited by Kevin Holicka
Featured Image Credits: Office Desk Configuration, Honolulu, Hawaii, United States, January 16, 2008 | © Courtesy of Vicky Hamilton/Flickr.