
Solving the Positive Results Bias

July 30, 2012

One of the biggest problems facing science is that it’s done by us mere humans. We’re highly fallible and, as a result, science is vulnerable to our long list of biases. To some extent the scientific method, as a collective activity, has gradually evolved to shield itself against these individual-level biases. For instance, generating and testing hypotheses through a standardised set of methodological procedures allows us to bypass reliance on folk wisdom and human intuition. This is most evident in scientific achievements that subvert common beliefs and generate completely counter-intuitive explanations.

Still, the scientific process has plenty of room for improvement, as scientists come equipped with problematic dispositions such as the confirmation bias: a tendency, operating mostly at the subconscious level, for an individual to confirm their expectations and the hypotheses they test. Writ large, the confirmation bias has one clear consequence: an excess of reported positive results. It’s not just researchers who are to blame; editors and pharmaceutical companies are also implicated in this pressure for interesting, profitable and positive results at the expense of the much maligned negative.

[Figure: replication graph]

Many tools (e.g. funnel plots) and publications (e.g. the Journal of Negative Results) exist that attempt to solve the positive results bias. The fact of the matter is that a large number of published research findings are false. This is especially relevant for the softer sciences: papers in fields such as psychology and economics are approximately five times more likely to report a positive result than papers in, say, space science (Fanelli, 2010). Ioannidis (2005) offered six corollaries about the probability that a research finding is indeed true:

  • Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
  • Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
  • Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
  • Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
  • Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
  • Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. 
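The corollaries above fall out of a simple formula in Ioannidis (2005): the post-study probability that a claimed positive finding is true (the positive predictive value, PPV) depends on the pre-study odds R that the tested relationship is real, the significance level α, and the study’s power 1 − β. A minimal sketch (the function name and the example numbers are my own; this is the bias-free version of the formula):

```python
def ppv(R, alpha=0.05, power=0.8):
    """Positive predictive value of a claimed finding (Ioannidis, 2005).

    R     -- pre-study odds that the tested relationship is true
    alpha -- type I error rate (significance threshold)
    power -- 1 - beta, the probability of detecting a true effect
    """
    beta = 1 - power
    return ((1 - beta) * R) / (R - beta * R + alpha)

# Corollary 1 in action: smaller studies tend to have lower power,
# so the same field produces less trustworthy positive results.
print(ppv(R=0.25, power=0.8))  # well-powered study, PPV ≈ 0.80
print(ppv(R=0.25, power=0.2))  # small, underpowered study, PPV ≈ 0.50
```

Corollaries 3–6 enter through the same formula once you add a bias term, which Ioannidis models as the proportion of analyses that would not have been positive but are reported as such anyway.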

An easily implementable solution would be to decouple the methodology from the results: the researcher simply publishes their background literature and methodology, where it is scrutinised and churned over in a manner similar to post-publication peer review. If the peer-review stage initially focuses on just the methodology, then you remove any temptation to publish, or for that matter submit, on the basis of what results your study produced. It also puts a greater focus on independent confirmation of results (as the methodology is there for anyone to test before the results are published) and offers protection from people ripping off your work (e.g. some researchers don’t seem to be too fond of citing ideas they found on blogs).

There’s an obvious problem: who in the hell would want to just publish their methodology? Part of the effort would be to move some of the glory from generating interesting results onto generating interesting ideas to test. Still, in the initial implementation, I’m sure there would be plenty of young academics who would find this approach useful: it would allow them to develop important methodological skills and help foster an approach to how science should be done. Also, and perhaps more importantly, it offers protection for ideas. I’m sure there are numerous instances where, even for well-established academics, there simply isn’t enough time, or money, to test a particular hypothesis brewing in their mind. It would be great if you could just dedicate time to coming up with a really interesting idea that someone else, with complementary skills and resources, could potentially test. Below is a conceptual diagram I came up with to provide a basic outline:

[Figure: conceptual diagram of the proposed two-stage publication process]

At stage one, the method and hypotheses get published and undergo post-publication peer review: here, the author receives feedback which allows them to revise their initial approach. To do this you would need a pretty sophisticated commenting system (see here and here). Following an initial round of peer review, we reach stage two of the process: using the outlined methodology, researchers go out and independently test the hypotheses. Independent testing of results is important in situations such as Daryl Bem’s supposed demonstration of precognitive abilities. In this case, we had one positive result (see green line), but it later came to light, following replications of the original study, that the results came out negative (see red lines). In short, had the journal adopted my approach of decoupling methodology from results, then one positive result wouldn’t have led to a ridiculous amount of controversy.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Reference

Fanelli D (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE, 5(4). PMID: 20383332

Ioannidis JP (2005). Why most published research findings are false. PLoS Medicine, 2(8). PMID: 16060722

This entry was posted on July 30, 2012 by Kamil Mizera.

8 thoughts on “Solving the Positive Results Bias”

    1. James Post author

      Hi Nigel, I just had a peek at your journal via Amazon and it looks interesting. Still, from what I can tell, it is tackling the idea through trying to create an incentive to publish negative results (which is slightly different from my idea — see above or my comment below).

      Thanks for your comment.

  1. RMS

    I just don’t see how open access will solve the positive results bias, which is very much real. It may, in fact, even make it worse. Why? Because most open access journals are, or will be (it appears, after recent news), Gold OA, which means article processing charges (APCs) will apply. So who will pay money, up to several thousands of dollars/euros/pounds, to publish negative results? I imagine people will wait until they have very good, and positive, results before publishing. Or they will make their data look more “rosy” to ensure their publication is worth the money spent. So, unless OA journals become free, or more free OA journals based on negative results come around, the problem will remain or worsen.

    1. James Post author

      First of all, maybe we should dispense with the notion that you actually read my post (where do I even mention open access?). The central premise of my post was that we should decouple methodology from results. What does this mean? Well, irrespective of whether said publication was green, gold or cobalt, it means that we would publish the hypotheses and methodology first. That is, a paper in my hypothetical journal or online database would initially not have any results. The results part of the process would be left up to future researchers to come along and independently test (this may or may not include the original author of the methodology).

      In short, the question you should have asked is: who will pay money, up to several thousands of dollars/euros/pounds, to publish just a methodology and no results? Well, even though you didn’t ask it, that is a good question for which I don’t really have a satisfactory answer. I’ve got a completely different conception of the role I see journals playing. I might do a follow up post on this.

      Don’t get me wrong: I actually agree with a lot of what you said. The reality of the situation is that there exists a pressure from journals and researchers to publish positive results. OA might very well make this problem worse if it remains tightly bound to how journals have previously been organised. I guess, in short, my post is about changing the organisational structure of journals. OA does provide a unique opportunity here as there are more opportunities for start-ups.

      Thanks for your comment. ;-)
