
Reputation is money in academic publishing, or why Jeffrey Beall is wrong


My answer to the recent article by Jeffrey Beall.

Jeffrey Beall, the librarian at the University of Colorado Denver who maintains the list of “predatory” open access publishers and journals, recently wrote his second (as far as I know) overt attack on the open access movement. Previously, Beall accused the movement of being anti-corporatist (which is obviously partially true, but also false, since the open access movement comprises people from different backgrounds and of different political beliefs). Now he has changed his reasoning, but what has not changed is his negative attitude towards open access, which has led him to present a selective and biased argument.

To understand all the problems raised by Beall, we first have to examine the role of contemporary academic publishing. It serves mostly as a selection mechanism in a crowded field of research. Researchers need publishing output to get funding, promotions, jobs, and tenure. This triggers a lot of pathologies, but above all it makes the system incredibly competitive and fragmented.

Every researcher wants to publish in the best available journal, especially favouring the ones that can boost career options and further reputation. Almost every university, funding committee, or ministry of science has rules in place that make publishing in some journals a better investment than publishing in others. It must be stressed that these rules have been created and are controlled by universities and funding bodies, not by publishers. They are based on journal reputation, which is usually represented by quantitative measures, with Thomson Reuters’ Impact Factor being the most important, though not the only one.

The elite club

These rules make us all play the very same old game: for a journal editor or a publisher it pays off to publish top authors, to gain or maintain a good reputation, and at the same time, for an author, it pays off to publish in good journals. The main problem with this system is the ultra-selective, astronomically expensive journals, which are considered an ultimate authority and which keep selectivity at an artificially high level, so as not to lose the discreet charm of elitism (have a look here for further reading). And, as with every quasi-monopolist, the biggest problem is that they are not infallible: as evidence suggests, they publish pseudo-science and bogus articles from time to time, which does not change the fact that people (funders, tenure committees and the media) trust them. Every serious journal publisher is trying to get into this elite club, arduously collecting points in reputation rankings, as authors are obviously less eager to publish elsewhere.

Academia has strong regulatory mechanisms to fall back on. A publication in a “predatory” journal won’t pay off for an author, as the title of the publishing venue is the be-all and end-all for the majority of academic committees and competition among researchers is growing.

Let’s start 100 bogus journals today. What will it change?

Let’s get back to Beall’s article, which starts by describing the different types of open access. The gold path, which Beall equates with the model based on Article Processing Charges (APCs), is the main problem for him. However, as far as I am concerned, gold open access simply means that an article is openly available in a journal, on the publisher’s website, as opposed to a repository. This model can be based on different sources of funding; it may or may not require authors to pay to be published. You can have a look at the DOAJ database to quickly see how many of the open access, peer-reviewed journals indexed there charge APCs.
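For the curious, DOAJ also exposes a public search API, so this check can be scripted. The following is a minimal sketch, not a tested recipe: the endpoint shape and the `bibjson.apc.has_apc` field name are assumptions based on DOAJ’s API documentation, so verify both at https://doaj.org/api before relying on them.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical sketch: count DOAJ-indexed journals that do and do not charge APCs.
# Both the endpoint shape and the bibjson.apc.has_apc field are assumptions;
# consult https://doaj.org/api for the current search API and record schema.
BASE = "https://doaj.org/api/search/journals/"

def count_journals(query: str) -> int:
    """Return the total number of journals matching an Elasticsearch-style query."""
    resp = requests.get(BASE + query, params={"pageSize": 1}, timeout=30)
    resp.raise_for_status()
    return resp.json()["total"]

if __name__ == "__main__":
    with_apc = count_journals("bibjson.apc.has_apc:true")
    without_apc = count_journals("bibjson.apc.has_apc:false")
    print(f"DOAJ journals charging APCs: {with_apc}")
    print(f"DOAJ journals with no APCs:  {without_apc}")
```

Whatever the exact numbers turn out to be, the point stands: a large share of DOAJ-indexed journals charge authors nothing at all.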

Beall’s premise that many bogus journals funded by APCs will publish just about anything, irrespective of its scientific value, is true. However, there are also plenty of reputable, high-profile, open access journals that charge APCs. And although there are probably fewer of these than of the extremely poor ones, they are much more important to the academic community.

According to Bo-Christer Björk, one of the most prominent open access researchers, there are more than 10,000 very low quality open access journals, which publish everything or almost everything they receive in submissions. Jeffrey Beall has specialised in flagging these journals, but he appears to have missed the fact that many of them publish almost no content. Probably some of the existing content is as fake as the journals themselves, generated by their “publishers” to make them appear more serious. Authors do not want to publish there, and this is the reason why these journals are not a real problem.

Why are there so many journals of this kind? Because nowadays you do not need many financial resources to start a bogus academic journal. It is easy to create an amateurish website, choose a random title, generate some editorial text with several misspelled words, and so on. I think I could, on my own and without any help, start 100 new journals this week that Beall would have to add to his list. But would it be a threat to the academic world? I do not think so. I think that the Integrated Journal of British was made by desperados and for desperados: the investment was very low, since hosting is in fact the only cost of this “journal”, alongside some extremely unqualified work. If two or three desperate authors from nowhere pay several hundred dollars in APCs, the profit margin is fair and the risk low. But I do not expect that the owners of such journals will become millionaires. Life is not that easy. And it is not a coincidence that almost all journals on Beall’s list are based in low-income countries.

APC is not corruption

Some of my colleagues at De Gruyter Open are editors of relatively new or very new open access journals, and they know that getting the first submissions requires a lot of promotional work and renowned names on the editorial teams. And if the first articles are not of the highest quality (preferably authored by known authors), the journals will not be able to survive in a competitive market.

Running a profitable journal requires getting an Impact Factor, or at least getting indexed by recognized abstracting and indexing services. This is not an easy task and can only be achieved by publishing more and more articles that consequently get cited in already established venues. Publishing pseudo-science will drive any inexperienced publisher out of business (only established, reputable journals can survive publishing bogus papers from time to time). The only feasible way of acquiring submissions from acknowledged researchers is to pay attention to quality control and stringent peer review of each published article. It is also worth mentioning that serious publishers introduce APCs to new journals only after they gain some recognition, because it is hard to find real researchers who want to pay to publish in unknown venues.

In the long term, it is also not worth publishing bad papers just to collect APCs. It simply doesn’t pay off, since reputation is money in this business. And even if we were to consider that APCs corrupt peer review, the traditional venues are not free from corruption either. Peter Suber pointed out some time ago that a lot of prestigious journals charge page fees, colour fees, and the like, which together often add up to the equivalent of a modest APC in an open access serial.

Is green open access about to blow up the system?

About green open access Beall writes:

A third variety of open-access publishing, often labeled as green open access, is based in academic libraries and is built on an oversimplification of scholarly publishing. In the green open-access model, authors upload postprints (the author’s last version of a paper that is submitted to a subscription publisher after peer review) to digital repositories, which make the content freely available. Many academic libraries now have such repositories for their faculty members and students; the green open-access movement is seeking to convert these repositories into scholarly publishing operations. The long-term goal of green open access is to accustom authors to uploading postprints to repositories in the hope that one day authors will skip scholarly publishers altogether. Despite sometimes onerous mandates, however, many authors are reluctant to submit their postprints to repositories. Moreover, the green open-access model mostly eliminates all the value added that scholarly publishers provide, such as copyediting and long-term digital preservation.

The low quality of the work often published under the gold and green open-access models provides startling evidence of the value of high-quality scholarly publishing.

The role of green open access is in fact totally different. Authors who use this means of research communication usually do not want to abolish journal publishing. Their actions are very much a product of the current publishing landscape, since this route is usually chosen by authors who want to publish in well-established journals that do not offer a gold open access option. Virtually all journals allow authors to submit their works to repositories (usually after an embargo period). This gives an author additional visibility and is generally accepted by publishers, because they still have a monopoly on selling an article to readers in the first months, when it is most valuable and most in demand. Thus, a substantial part of green open access articles comes from good quality, peer-reviewed, conventional journals. Repositories do not produce low quality science. They do include pre-prints (the version of an article before peer review), but one can easily distinguish them.

Green open access has been here for a while and it does not seem to harm publishers, nor does it eliminate any of the services they provide. It just creates an alternative (and usually delayed) circulation of papers. The main limitation of green open access is that publishers will not accept self-archiving of post-prints without an embargo period, because it would make their business unprofitable. And authors who want to fully enjoy the benefits of open access usually do not like embargoes much. So the main drawback of green open access is that it is not the best solution for any party.

It is true that there is a “revolutionary” faction of the open access movement in favour of totally abolishing conventional publishing by means of green open access policies that eliminate embargo periods. This would probably make the conventional publishing model unprofitable and would force all publishers to shift toward the alternative options within gold open access publishing. But at present it seems that all open access policies respect the interests of publishers and do not cause any important changes in the academic journal environment.

And what about the facts?

What is worrying is that Beall distorts the facts, and I wonder what the reason is behind his negligent attitude toward open access at large. He goes on to say: “The open-access movement is a coalition that aims to bring down the traditional scholarly publishing industry and replace it with voluntarism and server space subsidized by academic libraries and other nonprofits.” But academic publishing at the moment is paid for by academic libraries, and that is not going to change. The model based on Article Processing Charges (if it succeeds) will change nothing except the point at which the money changes hands: instead of paying to read, universities will pay to publish. And that’s it. In both open access and the traditional model, money goes from the university to the publisher, and the publisher pays for all services, including the work necessary to make the paper accessible and discoverable.

When Beall writes that “Open access actually silences researchers in developing and middle-income countries, who often cannot afford the author fees required to publish in gold open-access journals”, it seems like another example of his bad faith. Virtually every credible open access publisher has a fee-waiver policy, which very often automatically waives author fees for researchers based in low-income countries. And this is aside from the fact that these authors may also choose open access journals that do not charge authors for publication, or use the green route. There is nothing in the idea of open access that silences anybody.

The part about Creative Commons licenses might also be misleading. According to Beall:


Most open-access journals compel authors to sign away intellectual property rights upon publication, requiring that their content be released under the terms of a very loose Creative Commons license. Under this license, others can republish your work—even for profit—without asking for permission. They can create translations and adaptations, and they can reprint your work wherever they want, including in places that might offend you.

Well, indeed most journals indexed in DOAJ employ the Creative Commons Attribution (CC BY) license, which allows others to republish or translate a work. But it still requires attribution of the original author and offers protection against plagiarism. There are also several other, more restrictive types of Creative Commons licenses. De Gruyter Open uses the Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license, which allows readers to republish a work only for non-commercial purposes and does not allow translations or adaptations.

Finally, when Beall suggests that open access publishers may be the main force behind the current debate about the limitations of peer review, it sounds to me like a conspiracy theory. Those who complain most about peer review are authors, because they very often lose time that is important for their careers as a result of rejections they consider unfair. This in turn ties in with the fact, mentioned above, that a lot of journals are over-selective in order to maintain their prestige. Authors want to be published quickly, but in a famous journal; this is the main cause of tension around peer review. On the other hand, managing peer review is one of the key services that open access publishers offer to authors, so publishers would be reluctant to do away with it.

Do we need more education?

The only important point made by Beall in his text is about political activists trying to make use of the bogus journals.

Antinuclear activists, for example, are using predatory publishers to spread half-truths and false information about the effects of nuclear radiation. The pseudo-science gets published in journals that, to the general public, appear authentic, and the research is branded as science. Moreover, once political activists publish articles in open-access journals, they often seek coverage in the media, which sometimes publishes or broadcasts stories that promote the pseudo-scientific ideas of the political activists.

It is, by the way, interesting that Jeffrey Beall can judge what is a half-truth about the effects of nuclear radiation. I cannot, since I have a degree in neither physics nor medicine, and I will not try to write about things I know little or nothing about. Back to the point: this might be a problem, and I am curious how often the popular media have repeated false information published in a very low quality journal, which has probably not been peer reviewed. If it occurs frequently, it is a real challenge for the academic community to educate journalists to be more critical about science and pseudo-science.

Is open access a threat to us?

At the very end I would like to add one more thing about myself. I hold a PhD in sociology, which, I believe, allows me to understand a fair majority of academic papers in this subject area, and some from the general field of humanities and social sciences. It also gives me an understanding of the nuances of statistical analysis. I use these skills daily to read academic papers, both as part of my work at De Gruyter Open and beyond. Despite the fact that I do not live in the so-called Third World, I do not have regular access to subscription journals. I think that about 95% of the papers I read are open access. When I find an interesting paper on a publisher’s website, it is seldom published in gold open access. Usually it is paywalled, but I can find its free version anyway with Google Scholar. I also use Academia.edu and Arxiv.org to search for papers (on Arxiv.org there are plenty of quantitative studies on open access and academic publishing), and I have to say that some of the non-peer-reviewed articles I find there are of poor quality, but they are just a small percentage. Generally, my work is much easier, and I think also more effective, courtesy of open access. So it is hard for me to understand why someone pays so much attention to gibberish papers that probably nobody reads, instead of writing about all the important open access articles available online.

Image credit: Dick Daniels licensed under CC-BY-SA 3.0


10 Comments

  1. Interesting article! However, I am not sure whether it is possible at all to measure the quality of a research paper in a quantitative way. It might be true that social pressure makes fraudsters send more papers to high-IF journals, but the evidence is not enough for me to say so for sure. But, anyway, it is interesting.

      1. Yes, I did. The only measure you discuss that seems suitable for judging the quality of papers across various fields of science is, in my opinion, the decline effect. But a lot of papers present findings that have never been evaluated by other researchers, so it is hard to use this measure for every article.

        E.g., statistical power is not a measure of a paper’s quality. There are important papers in my field that present weak correlations. Usually the importance of a paper depends on its subject (what is correlated) and the exactitude of the methods used.

        The retraction rate has known limitations – not every gibberish paper is retracted, so a high number of retractions is more evidence of social control among a journal’s readers than of the journal’s low quality.

        With regard to the “small but significant correlation between journal rank and future citation”: maybe it is small, but it is still one of the dominant factors influencing citations. Have a look here:

        http://arxiv.org/abs/1412.4754

        So I think you might be right that high-rank journals publish more bad science, but I am not sure how to design a study that could verify this hypothesis using a standardized measure.

        1. What? Statistical power is not an obvious measure of methodological quality? Say my colleague happens to do the same experiments as I do here in my lab, but he decided to reach 80% power, while I got a significant result at 20% power and published it in Nature right away. He finds something different and publishes it in the Andorran Quarterly Journal of Entomology (no offense to Andorra). Then his study isn’t better done than mine? If it later turns out my results were false positives and his were replicable, that wasn’t due to his work being better than mine? (A simulation sketch of this scenario appears after this comment.)
          If this case is not a very obvious instance of different methodological quality, I’d really like to learn from you what would constitute such a difference! I, for one, would hire the other guy, rather than myself :-)

          Also, the quality of the crystallographic model is not an actual measure of quality? What does it measure then and why did the experts of the field call it ‘quality’? I, for one, would hire the guy/gal who published the models with the highest quality measure, rather than the guy who did sloppy work, but published it in Nature, because their boss slept with the editor.

          Overestimating the association of a gene with a trait in studies designed with insufficient sample size is not a measure of quality? Isn’t a study that is designed such that it matches actual effect size and is replicable, clearly better than one that isn’t replicable due to insufficient sample size? If this isn’t a quality difference, what is? I, for one, would hire the guy/gal who knows the appropriate sample size for their study, rather than the person who detected, say, a spectacular association of geneX with smoking, that then turned out to be very weak.

          Whether or not all criteria of evidence-based medicine are achieved is also not a measure of quality? If I design a sloppy medical experiment that misses most criteria, then the quality of this paper is the same as a carefully crafted one that meets all criteria with flying colors? I, for one, would hire the person who publishes papers that meet these criteria, as it indicates that they know what they are doing.

          You write:

          “E.g. the statistical power is not a measure of a paper’s quality. There are important papers in my field that present weak correlations. Usually importance of the paper depends on its subject (what is correlated) and exactitude of methods used.”
          Are we talking about importance or quality? Do you use importance and quality synonymously? In that case, I can tell the quality of a paper already before it’s published: if it cures cancer, it is very, very good. If it discovers life on Mars, it is a stellar paper. Cold fusion? Excellent work!
          Rather, I am of the opinion that importance is completely irrelevant to a paper’s quality. At the time of publication, nobody knew that Einstein’s relativity would be important for GPS. Confusing quality with importance may be one of the most pernicious confusions in science.

          I also noticed that you wrote: “I am not sure whether it is possible at all to measure the quality of a research paper in a quantitative way.”, but I never mentioned *quality*. I wrote that the papers published in ‘top’ journals are more *unreliable*. There are many reasons why articles may be unreliable, of which quality is only one; fraud may be another. The quality of ‘top’ journals is lower, I think the data are quite clear on that – unless one uses a definition of quality that makes it synonymous with importance. What is also known (published after our study) is that the incidence of fraud is significantly higher in ‘top’ journals. It is not a necessary consequence, but it is also not unexpected, that the journals with more fraud and lower quality are also forced to retract a higher fraction of their articles – which is also what one finds. Mind you: the *absolute* number of retractions is much, much higher in lower-ranking journals (because there are many more of them). Only the *relative* retraction rate is higher in ‘top’ journals, indicating that most seriously flawed articles get retracted at some point; there are just disproportionately more of them in ‘top’ journals – relatively speaking. So there is good evidence that plenty of retractions are happening in lower-ranking journals, to a larger extent (in absolute numbers) than in top journals. There are also anecdotal observations that ‘top’ journals appear to be extraordinarily reluctant to retract. There is little to no direct evidence (other than absolute readership, essentially) that there is any difference in what you call “social control”. Do you know of any such studies? Without such evidence, a purported difference in “social control” is just an ad-hoc attempt to dodge inconvenient implications.

          With regard to your citation of http://arxiv.org/abs/1412.4754, I have never heard of the venues they talk about in this paper and can’t say whether there is any evidence that these venues publish higher or lower quality work. I also think some of these venues are conferences, which might develop their very own reputation dynamics compared to journal publishing. Different fields may very well have entirely different dynamics and outcomes if the same methods were applied there. So what you say may well hold for the dataset that they have, but for the science field, where the most prestigious venues are Science or Nature, etc. (and not ACM SIGKDD, IEEE TKDE, or ACM WSDM), the venue factor is vanishingly small, on the order of one citation per year on average, IIRC.
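A minimal simulation sketch of the low-power scenario discussed in the comment above. It assumes a two-sample t-test, a true effect of d = 0.5 whenever an effect exists, and, purely for illustration, that only 10% of tested hypotheses are real; under these assumptions roughly 10 subjects per group gives ~20% power and 64 per group gives ~80%. All numbers are illustrative, not taken from any of the studies cited here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def positive_predictive_value(n_per_group: int, true_effect: float = 0.5,
                              prior_true: float = 0.10,
                              n_sims: int = 10_000) -> float:
    """Fraction of p < 0.05 results that reflect a real effect."""
    true_pos = false_pos = 0
    for _ in range(n_sims):
        effect_is_real = rng.random() < prior_true
        d = true_effect if effect_is_real else 0.0
        a = rng.normal(0.0, 1.0, n_per_group)  # control group
        b = rng.normal(d, 1.0, n_per_group)    # treatment group
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            if effect_is_real:
                true_pos += 1
            else:
                false_pos += 1
    return true_pos / (true_pos + false_pos)

# At d = 0.5, ~10 per group gives ~20% power; ~64 per group gives ~80%.
print("PPV at ~20% power:", round(positive_predictive_value(10), 2))  # ~0.3
print("PPV at ~80% power:", round(positive_predictive_value(64), 2))  # ~0.65
```

Under these toy assumptions, only about three in ten significant low-power results reflect a real effect, versus roughly two in three at 80% power – which is the commenter’s point that power is a quality-relevant design choice.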

  2. Thanks for this mostly factual correction of Jeffrey Beall’s misinformation on open access.

    However, it is not (entirely) true that “The main limitation of green open access is that publishers will not accept self-archiving of post-prints without an embargo period, because it would make their business unprofitable”. Actually, there are quite a few publishers who allow unembargoed self-archiving of (non-OA) post-prints, albeit not in the final published format – though to be fair, some allow even that. Apparently this does not drive them out of business. For example, SAGE (http://www.sagepub.com/oa/funding.cp) and Emerald (http://www.emeraldgrouppublishing.com/openaccess.htm) have such policies. It seems that publishers are aware of the fact that authors increasingly compare such policies before submitting a paper. I find the SHERPA RoMEO tool very useful in that regard (http://www.sherpa.ac.uk/romeo/).

    Another point of criticism: the CC-BY-NC-ND license apparently used by De Gruyter Open is not deemed an open license by many copyright experts, as it restricts a lot of possibilities for re-use (especially when you want to combine a piece published under this license with other CC-licensed material; see https://wiki.creativecommons.org/Frequently_Asked_Questions#When_is_my_use_considered_an_adaptation.3F).

    1. You are right, although both policies you have mentioned impose some restrictions. SAGE and Emerald allow authors to publish post-prints only in their institutional repository or on a personal website. I also do not know of any research on authors’ attitudes toward limitations of this kind, nor on their possible impact on publishers’ output. But yes, these are examples of liberal green open access policies.

      And with regard to CC-BY-NC-ND, I know that some people do not consider it “open”. However, some research has shown that authors prefer more restrictive licenses:

      https://openscience.com/do-academic-authors-prefer-to-publish-their-work-under-more-or-less-restrictive-creative-commons-licenses/

      1. Thanks for your reply. The study you mention indeed shows that the scientific community (at least a non-representative sample from it) is undecided about whether to publish under more restrictive or more liberal CC licenses. Prior consent (for re-use, be it commercial or non-commercial) is likely to be an issue. It would have been very instructive to see the results broken down by age or number of years as a publishing author. My hypothesis, based on discussions with colleagues, would be that the longer you have worked in the pre-OA publishing system, the more you favor that system – I would love to see that one refuted ;-)
        Text and data mining is an important and underestimated topic in relation to CC licensing. With an estimated worldwide output of ~2 million articles per year, you might want your library to be able to make a good preselection for you…

  3. Pingback: Open Science
