This is something I’m going to try to do every Friday: take a topic on OA and then try to encourage some discussion via the usual outlets (Twitter, Facebook and our own comments section below).
Today’s discussion concerns whether or not the new Creative Commons licence should get rid of proprietary licences. I mentioned my position in the previous post, but it’s probably best if you answer the question below before reading my opinions (I don’t want to be seen as having influenced you… assuming I have such sway). To provide some context, I’ll outline the four options available for voting in the poll:
Non-Commercial: Licensed works are free to use, share and remix with attribution, but commercial use of the original work is not permitted.
No-Derivatives: Licensed works are free to use and share with attribution, but derivative works of the original are not permitted.
Non-Commercial and No-Derivatives: Does not permit any commercial use or derivatives of the original work.
No Proprietary Licences: Permits all uses of the original work, as long as it is attributed to the original author (note: attribution features in all six licences, which is why I’ve mentioned it).
Note that I didn’t list all of the possible Creative Commons licences, because I wanted to focus on the proprietary licences rather than CC usage more generally. If you’re in dire need of being asked such a question, then you should probably head over to the Directory of Open Access Books (DOAB) and fill in their survey.
With the Creative Commons 4.0 licence just on the horizon, there have been some questions raised about the use of proprietary licences. Perhaps the most comprehensive trashing comes from the freeculture.org article Stop the inclusion of proprietary licenses in Creative Commons 4.0. Their main gripe with CC is the use of NonCommercial (NC) and NoDerivatives (ND):
Neither of them provide better protection against misappropriation than free culture licenses. The ND clause survives on the idea that rightsholders would not otherwise be able to protect their reputation or preserve the integrity of their work, but all these fears about allowing derivatives are either permitted by fair use anyway or already protected by free licenses. The NC clause is vague and survives entirely on two even more misinformed ideas. First is rightsholders’ fear of giving up their copy monopolies on commercial use, but what would be considered commercial use is necessarily ambiguous. Is distributing the file on a website which profits from ads a commercial use? Where is the line drawn between commercial and non-commercial use? In the end, it really isn’t. It does not increase the potential profit from work and it does not provide any better protection than Copyleft does (using the ShareAlike clause on its own, which is a free culture license).
The second idea is the misconception that NC is anti-property or anti-privatization. This comes from the name NonCommercial which implies a Good Thing (non-profit), but its function is counter-intuitive and completely antithetical to free culture (it retains a commercial monopoly on the work). That is what it comes down to. The NC clause is actually the closest to traditional “all rights reserved” copyright because it treats creative and intellectual expressions as private property. Maintaining commercial monopolies on cultural works only enables middlemen to continue enforcing outdated business models and the restrictions they depend on. We can only evolve beyond that if we abandon commercial monopolies, eliminating the possibility of middlemen amassing control over vast pools of our culture.
Most importantly, though, is that both clauses do not actually contribute to a shared commons. They oppose it. The fact that the ND clause prevents cultural participants from building upon works should be a clear reason to eliminate it from the Creative Commons license set. The ND clause is already the least popular, and discouraging remixing is obviously contrary to a free culture. The NonCommercial clause, on the other hand, is even more problematic because it is not so obvious in its proprietary nature. While it has always been a popular clause, its use has been in slow and steady decline.
Practically, the NC clause only functions to cause problems for collaborative and remixed projects. It prevents them from being able to fund themselves and locks them into a proprietary license forever. For example, if Wikipedia were under a NC license, it would be impossible to sell printed or CD copies of Wikipedia and reach communities without internet access because every single editor of Wikipedia would need to give permission for their work to be sold. The project would need to survive off of donations (which Wikipedia has proven possible), but this is much more difficult and completely unreasonable for almost all projects, especially for physical copies. Retaining support for NC and ND in CC 4.0 would give them much more weight, making it extremely difficult to retire them later, and continue to feed the fears that nurture a permission culture.
My own thoughts on these two licences are mixed. I find the concept of No Derivatives problematic because its only purpose seems to be stifling innovation. I’m slightly less convinced by the black-and-white arguments made against the Non-Commercial clause. As a friend pointed out, he finds the NC clause useful precisely because it protects him from having others make a financial profit from his work. Yes, there are problems here, as outlined above, but I think in the case of NC we should strive for clarification rather than necessarily getting rid of the clause. Ultimately, I’m quite happy to have diversity in licences, precisely because it allows someone to tailor a licence to suit their specific needs. Still, much work needs to be done on making the licences clear, so that users know exactly what they are getting themselves into. The case to be made is for educating users about particular licences, rather than lobbying for certain elements to be removed.
I agree with one of the sentiments of the talk: that important content need not be restricted to books. Having been a serious blogger for at least six years now, I would claim that some of my most important content is to be found online. Still, I don’t have many academic articles to my name, and there certainly aren’t any books on the horizon. But even for more seasoned academics, the reality is starting to get somewhat blurred in terms of the quality of output.
As for the other aspect of the talk, I agree there is considerable promise in blurring the lines between the Internet and books. That said, for all the advances facing us, I’m not convinced by the great blurring that is apparently upon us. For one thing, I find interactivity to be a toy: even though it looks cool, a lot of it doesn’t really add any value to the material I’m reading. Even in the cases referred to in the video, we are still faced with items that appear to be more of a distraction and, like 3D TV, somewhat of a fad. I suspect in years to come there will still be a considerable preference for the simplicity offered by text on a page. Just look at the wonderful Readability. In short, I think there’s a reason why eBooks have been phenomenally successful: they’ve taken on the characteristics of traditional books.
Vestigial behaviours and practices associated with reading aren’t going to die out any time soon. This includes having certain reading materials structured in a certain way. If you look at the way in which reading technology has evolved over the last few years, then it has not been strictly a one-way street toward making content more adaptable for use on the Internet (the video makes a similar point). Instead, the technology has adapted to become more like the traditional book. This makes intuitive sense: it’s easier for technology to become more accessible by changing it to suit existing practices and behaviours, than it is for us to change books to suit existing technology. Cultural products that have survived for a long time are products that work well.
Old is good and adaptable; it’s unlikely to be disappearing any time soon.
I personally find the current use of copyright to be abhorrent. Having studied cultural evolution for the past five or so years, I’ve come to see that some of our greatest cultural endeavours and products stem from, to use Matt Ridley’s apt phrasing, ideas having sex. This culture of remixing has its roots in the very early stages of humanity, and it essentially fuelled our rise to global dominance (whether you consider that a good or a bad thing is another question). So, on that note, I urge you to watch Kirby Ferguson’s video on Embrace the Remix. In short: nothing is original, and even our most celebrated creators borrow, steal and transform. Everything is a remix.
Scientific publishers are backing an initiative to encourage authors of high-profile research papers to get their results replicated by independent labs. Validation studies will earn authors a certificate and a second publication, and will save other researchers from basing their work on faulty results.
John Ioannidis, an epidemiologist at Stanford University in California, is on the initiative’s scientific advisory board. He expects only authors of high-profile papers to submit their work to extra scrutiny, and says that the project could help the scientific community to recognize experimental design flaws. “A pilot like this could tell us what we could do better,” he adds.
Besides companies like Scientific Exchange, there’s also a huge opportunity here for publishers to offer this service as part of the process of publishing a scientific paper. One of the messages I’ve tried to get across in this blog is the need for academic publishers to think outside the box if they are going to survive. As it currently stands, publishers primarily act as a middleman between authors and peer review, so why can’t they go one step further and offer the opportunity for other labs to independently test the results?
The link above is to a very useful and comprehensive step-by-step guide on how to start an open access journal. To give you a taster:
CrossRef, the registration organisation for DOIs on scholarly or research material, have various levels of fees. The reason for this is, once again, that they need ways to force publishers to keep their links up-to-date and to deposit material. Financial sanctions have proved the most effective way of doing this.
However, for the journal that is attempting to evade the fee-paying structures of commercial OA enterprises, this is little consolation. Never fear. The Open Access Scholarly Publishers Association has a deal with CrossRef for scholar-publisher members (that’s you, as an individual) that means that the OASPA will allow you to get a DOI prefix and assign up to 50 DOIs inclusive of their membership fee, which is a much more reasonable 75 euros. In my case, because I hadn’t started the journal at that point, I was signed up as a non-voting member of OASPA, but this certainly helped.
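To make the prefix-and-suffix relationship concrete, here’s a minimal sketch (in Python) of minting DOIs under an assigned prefix. The prefix 10.12345 and the journal abbreviation below are hypothetical placeholders, not real assignments:

```python
# Minimal sketch of minting DOIs under an assigned prefix.
# "10.12345" and "jlm" are hypothetical placeholders: substitute
# whatever prefix CrossRef assigns you and whatever suffix scheme
# you settle on.

PREFIX = "10.12345"       # assigned by CrossRef (via OASPA membership)
JOURNAL_ABBREV = "jlm"    # your own choice of suffix scheme

def mint_doi(year: int, article_number: int) -> str:
    """Build a DOI string: the prefix is fixed, the suffix is yours."""
    return f"{PREFIX}/{JOURNAL_ABBREV}.{year}.{article_number:03d}"

# The first three articles of a 2012 volume:
for n in range(1, 4):
    print(mint_doi(2012, n))
# 10.12345/jlm.2012.001
# 10.12345/jlm.2012.002
# 10.12345/jlm.2012.003
```

The important point is that only the prefix comes from CrossRef; the suffix scheme is entirely yours, so it pays to pick something you can keep consistent once the DOIs start being deposited and cited.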
Timescale-wise, my application to OASPA took much longer than usual (I am told) because CrossRef were in the process of updating their member agreement. I signed up on the 16th April and was ready to go by the 7th July. So budget for around three months.
I highly recommend reading through all five of his posts when you’ve got a spare moment. It’s really worth it irrespective of whether you plan on starting a journal or not.
Big News of the Day: Most of you in Blogland are quite excited about Wiley’s announcement that they’re adopting the Creative Commons Attribution (CC-BY) licence for eight of their journals. It’s a good move and probably worthy of a pat on the back, but it was nice to see the brilliantly named blog, The Imaginary Journal of Poetic Economics, add some perspective on the issue (see here):
Next steps I would recommend to Wiley:
a commitment to publishing journals in a format suitable for data and/or text-mining and that will facilitate re-use of portions of content (for example, a CC-BY license on a locked-down PDF removes legal barriers to re-use, but not technical barriers)
a strengthened commitment to support for author self-archiving to allow authors more choice (not all authors have funding support for open access article processing fees)
prepare to compete for high quality publishing services at reasonable prices – consider a range of possible future competitors that includes PeerJ with prices starting at $99 for a lifetime of publishing
One of the things I’m most excited about at the moment is Hypothes.is. The idea is to create a distributed, open-source platform that allows for the collaborative evaluation of information. It does this by taking a tried and tested model, community peer review, and using it as the basis for a very dynamic commenting system. Dan Whaley, one of the minds behind Hypothes.is, recently gave a talk at the ievobio conference in Ottawa, where he discussed peer review and also offered the first public glimpse of the hypothes.is prototype (at around 57 mins):
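To give a flavour of the kind of object such a system has to manage, here’s a purely illustrative sketch in Python. This is emphatically not Hypothes.is’s actual data model, just a toy version of an annotation anchored to a passage of a document and carrying a simple community-review signal:

```python
from dataclasses import dataclass

# Toy model of an anchored annotation with a community-review signal.
# This is NOT Hypothes.is's actual schema; it is purely illustrative.

@dataclass
class Annotation:
    target_uri: str     # the document being annotated
    quoted_text: str    # the passage the note is anchored to
    body: str           # the annotation itself
    author: str
    upvotes: int = 0
    downvotes: int = 0

    def score(self) -> int:
        """Crude community-review signal: net votes."""
        return self.upvotes - self.downvotes

note = Annotation(
    target_uri="http://example.org/paper.html",
    quoted_text="we observed a significant effect",
    body="The reported p-value doesn't match the test statistic in Table 2.",
    author="reviewer42",
    upvotes=12,
    downvotes=1,
)
print(note.score())  # 11
```

The hard parts in practice are presumably the anchoring (reliably finding that passage again) and how the reviewing community is weighted, which is exactly where the peer-review model comes in.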
A decade or so ago you’d be forgiven for thinking that the monograph was in terminal decline. Just take the now 13-year-old words of Stanley Chodorow, who, in his work The Pace of Scholarship, the Scholarly Career, and the Monograph, claimed that the specialization of the academic monograph signalled that “its evolutionary track is at an end. It is heading for extinction”. Such strong words likely echoed the sentiments of many at the time. Still, even before we were talking about online access, journals found themselves becoming increasingly dominant: both the growing number of journal titles and their steep price increases were gradually taking a larger slice of university library budgets. Indeed, for Chodorow, the only saving grace was the potential cost-reducing power of digitization:
If we are going to revive the monograph, we need to find a way to reduce its cost, so that individual scholars and libraries can acquire it. Today, it is obvious that only the electronic medium can do this. We will save the monograph if we provide a way to publish it on-line.
Well, now that we’re in the midst of mass digitization, it raises the question: is the academic monograph on the verge of being saved? There is cause to think so: rapid dissemination and cheaper publication costs are good reasons for celebration. Yet, despite all these advances, I’m still inclined to view them as bricolage rather than the saviour of the monograph: that is, they are necessary pre-conditions, but by no means the crucial tipping point. These are what you would refer to as disruptive technologies: each preserves “the output the market desires, but reshuffles the underlying value chain in such a way that old players are sidelined and new ones emerge.”
What the monograph needs is a disruptive innovation. Here, the purchasing preferences of the market undergo change through improving a product or service in a manner that was not expected by current market leaders. Kent Anderson uses several good examples to make his point:
The mass-produced Model T changed the purchasing preferences of millions, despite being based on old technology. The disruptive innovation wasn’t the car — it was the assembly line. For music, the disruptive innovation wasn’t the MP3 or digital, but the iPod. In both cases, and many more, the market changed forever because the innovation made the market better. The e-book existed for years before the true disruptive innovation arrived — the Amazon Kindle, with its built-in Whispernet connection at no cost, low-cost books, and so forth. The market responded quickly.
In my previous post, I briefly mentioned Ethan Perlstein’s two posts on publishing in the era of open science (part one and part two). I just wanted to quickly highlight something that I haven’t paid much attention to, but perhaps should have, regarding what happens after publication. In his post, Perlstein mentions the ratio of HTML to PDF downloads, and how he used this as a proxy for the quality of readership (i.e. expert, academic readers versus non-expert readers):
Some of you may be miffed by my apparent conflation of site traffic and readership. Without more sophisticated analytics, I confess that it’s difficult to gauge the background of readers (scientists vs. non-scientists), or how much of the paper they’re actually reading (abstract vs. full text). However, the ratio of HTML views to PDF downloads (HTML/PDF) may be informative here. After the initial surge, HTML/PDF was 1 in 20 and remained there until the second surge, after which it fell to 1 in 30. If we assume that PDF downloads are a proxy for “expert” readership, then the second surge diluted quality readership. Conversely, the third surge lifted the ratio to 1 in 15, with as many as 20% of readers, many of whom were presumably academics, on Day 29 choosing to download a PDF version of the paper.
It certainly rings true for me that I only download the PDFs of papers I intend to reference and make use of on more than one occasion. Still, my intentions sometimes verge on the completely unrealistic, and I’ll download a large number of PDFs that don’t survive past a cursory glance. I guess, for me at least, there is a stronger motivation driving my PDF downloading: the cluttered design of many journals’ HTML pages simply drives me insane. Rather than any in-built preference for reading PDFs, the need to get away from the HTML page is more than enough motivation to download the document. Still, with better online reading and annotation tools just around the corner, I wonder how long this variable will hold as a viable proxy.
In short, I certainly think more research needs to be done into readership, with one aim being to develop metrics that accurately capture expert and non-expert audiences.
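As a back-of-the-envelope illustration of the proxy Perlstein describes, here’s a minimal sketch of how the HTML-to-PDF ratio would be computed from access logs. The daily counts are invented for the example; they are not Perlstein’s actual figures:

```python
# Toy illustration of the HTML-to-PDF ratio as a readership proxy.
# The counts below are invented for the example; they are not
# Perlstein's actual figures.

daily_views = [
    # (day, html_views, pdf_downloads)
    (1, 400, 20),    # roughly 1 PDF per 20 HTML views
    (15, 600, 20),   # ratio dilutes to ~1 in 30
    (29, 300, 20),   # ratio tightens to ~1 in 15
]

for day, html, pdf in daily_views:
    print(f"Day {day}: 1 PDF download per {html / pdf:.0f} HTML views")
```

The point is simply that the proxy is cheap to compute from logs most journal platforms already keep; whether it genuinely tracks expert readership is, as above, another matter.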