Wednesday, December 30, 2015

The OA Interviews: Toma Susi, physicist, University of Vienna

Since the birth of the open access movement in 2002, demands for greater openness and transparency in the research process have both grown and broadened. 

Today there are calls not just for OA to research papers, but (amongst other things) to the underlying data, to peer review reports, and to lab notebooks. We have also seen a new term emerge to encompass these different trends: open science.

In response to these developments, earlier this year the Research Ideas & Outcomes (RIO) Journal was launched. 

RIO’s mission is to open up the entire research cycle — by publishing project proposals, data, methods, workflows, software, project reports and research articles. These will all be made freely available on a single collaborative platform. 

And to complete the picture, RIO uses a transparent, open and public peer-review process. The goal: to “catalyse change in research communication by publishing ideas, proposals and outcomes in order to increase transparency, trust and efficiency of the whole research ecosystem.”

Importantly, RIO is not intended for scientists alone. It is seeking content from all areas of academic research, including science, technology, humanities and the social sciences.

Unsurprisingly perhaps, the first grant proposal made openly available on RIO (on 17th December) was published by a physicist — Finnish-born Toma Susi, who is based at the University of Vienna in Austria.

Susi’s proposal — which has already received funding from the Austrian Science Fund (FWF) — is for a project called “Heteroatom quantum corrals and nanoplasmonics in graphene” (HeQuCoG). This is focused on the controlled manipulation of matter on the scale of atoms.

More specifically, the aim is “to create atomically precise structures consisting of silicon and phosphorus atoms embedded in the lattice of graphene using a combination of ion implantation, first principles modelling and electron microscopy.”

The research has no specific application in mind but, as Susi points out, if “we are able to control the composition of matter on the atomic scale with such precision, there are bound to be eventual uses for the technology.”

Below Susi answers some questions I put to him about his proposal, and his experience of publishing on RIO.

The interview begins …

RP: Can you start by saying what is new and different about the open access journal RIO, and why that is appealing to you?

TS: Personally, the whole idea of publishing all stages of the research cycle was something even I had not considered could or should be done. However, if one thinks about it objectively, in terms of an optimal way to advance science, it does make perfect sense. At the same time, as a working scientist, I can see how challenging a change of mind-set this will be… which makes me want to do what I can to support the effort. 

Thursday, December 17, 2015

The open access movement slips into closed mode

In October 2003, at a conference held by the Max Planck Society (MPG) and the European Cultural Heritage Online (ECHO) project, a document was drafted that came to be known as the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities.

More than 120 cultural and political organisations from around the world attended, and the names of the signatories are openly available here.

Today the Berlin Declaration is held to be one of the keystone events of the open access movement — offering as it did a definition of open access, and calling as it did on all researchers to publish their work in accordance with the open principles outlined in the Declaration.

“In order to realize the vision of a global and accessible representation of knowledge,” the Declaration added, “the future Web has to be sustainable, interactive, and transparent.”

The word transparent is surely important here, and indeed the open access movement (not surprisingly) prides itself on openness and transparency. But as with anything that is precious, there is always the danger that openness and transparency can give way to secrecy and opaqueness.

By invitation only

There have been annual follow-up conferences to monitor implementation of the Berlin Declaration since 2003, and these have been held in various parts of the world — in March 2005, for instance, I attended Berlin 3, which that year took place in Southampton (and for which I wrote a report). The majority of these conferences, however, have been held in Germany, with the last two seeing a return to Berlin. This year’s event (Berlin 12) was held on December 8th and 9th at the Seminaris CampusHotel Berlin.

Of course, open access conferences and gatherings are two a penny today. But given its historical importance, the annual Berlin conference is viewed as a significant event in the OA calendar. It was particularly striking, therefore, that this year (unlike most OA conferences, and so far as I am aware all previous Berlin conferences) Berlin 12 was “by invitation only”.

Also unlike other open access conferences, there was no live streaming of Berlin 12, and no press passes were available. And although a Twitter hashtag was available for the conference, it generated very little in the way of tweets, with most in any case coming from people who were not actually present at the conference, including a tweet from a Max Planck librarian complaining that no MPG librarians had been invited to the conference.

Why it was decided to make Berlin 12 a closed event is not clear. We do, however, know who gave presentations, as the agenda is online. This indicates that there were 14 presentations, 6 of which were given by German presenters (and 4 of these by Max Planck people) — a surprising ratio given that the subsequent press release described Berlin 12 as an international conference. There also appears to have been a shortage of women presenters (see here, here, and here).

But who were the 90 delegates who attended the conference? That we do not know. When I emailed the organisers to ask for a copy of the delegate list my question initially fell on deaf ears. After a number of failed attempts, I contacted the Conference Chair Ulrich Pöschl.

Pöschl replied, “In analogy to most if not all of the many scholarly conferences and workshops I have attended, we are not planning a public release of the participants’ list. As usual, the participants of the meeting received a list of the pre-registered participants’ names and affiliations, and there is nothing secret about it. However, I see no basis for releasing the conference participants’ list to non-participants, as we have not asked the participants if they would agree to distributing or publicly listing their names (which is not trivial under German data protection laws; e.g., on the web pages of my institute, I can list my co-workers only if they explicitly agree to it).”

This contrasts, it has to be said, with Berlin 10 (held in South Africa), where the delegate list was made freely available online, and is still there. Moreover, the Berlin 10 delegate list can be sorted by country, by institution and by name. There is also a wealth of information about the conference on the home page here.

We could add that publishing the delegate list for open access conferences appears to be pretty standard practice — see here and here for instance.

However, is Pöschl right to say that there is a specific German problem when it comes to publishing delegate lists? I don’t know, but I note that the delegate list for the annual conference for the Marine Ingredients Organisation (IFFO) (which was held in Berlin in September) can be downloaded here.


Transparency aside, what was the outcome of the Berlin 12 meeting? When I asked Pöschl he explained, “As specified in the official news release from the conference, the advice and statements of the participants will be incorporated in the formulation of an ‘Expression of Interest’ that outlines the goal of transforming subscription journals to open access publishing and shall be released in early 2016”.

This points to the fact that the central theme of the conference was the transformation of subscription journals to Open Access, as outlined in a recent white paper by the Max Planck Digital Library. Essentially, the proposal is to “flip” all scholarly journals from a subscription model to an open access one — an approach that some have described as “magical thinking” and/or impractical (see, for instance, here, here and here).

The Expression of Interest will presumably be accompanied by a roadmap outlining how the proposal can be realised. Who will draft this roadmap and who will decide what it contains is not entirely clear. The conference press release says, “The key to this lies in the hands of the scientific institutions and their sponsors”, and as Pöschl told me, the advice and comments of delegates to Berlin 12 will be taken into account in producing the Expression of Interest. If that is right, should we not know exactly who the 90 delegates attending the conference were?

All in all, we must wonder why there was a need for all the secrecy that appears to have surrounded Berlin 12. And given this secrecy, should we not be concerned that the open access movement could become a kind of secret society, in which a small, self-selected group of unknown people makes decisions and proposals intended to impact the entire global scholarly communication system?

Either way, what happened to the openness and transparency inherent in the Berlin Declaration?

In the spirit of that transparency I invite all those who attended Berlin 12 to attach their names below (using the comment functionality) and, if they feel so inspired, to share their thoughts on whether open access conferences ought to be held in camera in the way Berlin 12 appears to have been.

Or is it wrong and/or naïve to think that open access implies openness and transparency in the decision making and processes involved in making open access a reality, as well as of research outputs?

Tuesday, December 01, 2015

Open Access, Almost-OA, OA Policies, and Institutional Repositories

Many words have been spilt over the relative merits of green and gold open access (OA). It is not my plan to rehearse these again right now. Rather, I want to explore four aspects of green OA. 

First, I want to discuss how many of the documents indexed in “open” repositories are in fact freely available, rather than on “dark deposit” or otherwise inaccessible. 

Second, I want to look at the so-called eprint request Button, a tool developed to allow readers to obtain copies of items held on dark deposit in repositories. 

Third, I want to look at some aspects of OA policies and the likely success of so-called IDOA policies.

Finally I want to speculate on possible futures for institutional repositories. 

However, I am splitting the text into two. The first two topics are covered in the attached pdf file; the latter two will be covered in a follow-up piece I plan to publish at a later date.

To read the first part (a 16-page pdf) please click the link here.

Monday, November 16, 2015

The OA Interviews: ScienceOpen’s Alexander Grossmann

In his time, the founder and president of ScienceOpen, Alexander Grossmann, has sat on both sides of the scholarly publishing table. He started out as a researcher and lecturer, working variously at the Jülich Research Centre, the Max Planck Institute in Munich and the University of Tübingen.

Then in 2001 he reinvented himself as a publisher, working first at Wiley-Blackwell, and subsequently as managing director at Springer-Verlag GmbH in Vienna, and a vice president at De Gruyter.

An important moment for Grossmann came in 2008, when Springer acquired the open-access publisher BioMed Central from serial entrepreneur Vitek Tracz. Listening to a presentation on the purchase given at a management meeting by the company’s CEO Derk Haank, Grossmann immediately saw the logic of the move, and the imperatives of open access.

However, it was soon apparent to him that the publishing industry at large is not in a hurry to reinvent itself for an OA world, and certainly not if it means having to take hard decisions that could threaten the high profit levels that it has become accustomed to earning from journal publishing.

Speaking to me two years ago Grossmann put it this way: “[T]here is no publishing house which is either able or willing to consider the rigorous change in their business models which would be required to actively pursue an open access publishing concept.” 

And this remains his view today.

In 2013, therefore, Grossmann partnered with Boston-based entrepreneur and software developer Tibor Tscheke to found a for-profit OA venture called ScienceOpen. At the same time he took a post as professor of publishing management at the Leipzig University of Applied Sciences.

A Q&A with Alexander can be downloaded as a pdf file here.

Sunday, September 20, 2015

The Open Access Interviews: F1000 Founder Vitek Tracz

Vitek Tracz is a hero of the open access movement, and it is not hard to see why. Fifteen years ago he founded the world’s first for-profit OA publisher, BioMed Central (BMC), and pioneered pay-to-publish gold OA. Instead of charging readers a downstream subscription fee, BMC levies an upfront article-processing charge, or APC. By doing so it is able to cover its costs at the time of publication, and so make the papers it publishes freely available on the Internet. [See the comment below the Q&A for clarification of this.]

Many said Tracz’s approach would not work. But despite initial scepticism BMC eventually convinced other publishers that it had a sustainable business model, and so encouraged them to put their toes in the OA waters too. As such, OA advocates believe BMC was vital to the success of open access. As Peter Murray-Rust put it in 2010, “Without Vitek and BMC we would not have open access”.

Today Tracz has a new, more radical, mission, which he is pursuing with F1000.

As always, I have written an introduction to the Q&A below with Vitek Tracz; as sometimes happens, the introduction turned out to be longer than readers might expect, or wish to read.

I have, therefore, put the introduction into a PDF file, which can be accessed by clicking on this link.

Those interested only in the Q&A need simply read on below. 

The Q&A begins ….

RP: As I understand it, F1000 now consists of three main services — F1000Research, F1000Prime, and F1000Workspace. In addition, I believe there is something called F1000 Specialists. Can you say something briefly about each of these services, and when they were launched?

VT: The newly launched F1000 is an integrated site combining three services: F1000Prime, F1000Research and F1000Workspace. These services are built and supported through the active collaboration and participation of the largest high-level group of experts (over 11,000 and growing) from across biology and medicine, the F1000 Faculty. This consists of experienced leaders (Faculty Members) and talented young researchers (Associate Faculty Members, appointed by Faculty Members), in about equal numbers.

We started what is now called F1000Prime 13 years ago; it has become the largest and most comprehensive article-level quality assessment of biomedical literature: the F1000 Faculty identify those articles they find interesting in their daily work, rate them at one of three levels of quality (all positive; the goal is to find the best articles) and write a short text explaining why the chosen article is interesting to them.

F1000Research, launched over 2 years ago, is an open science publishing platform that offers a completely new way of publishing research in biology and medicine: it uses immediate publication followed by transparent peer review, requires the underlying data to be shared, and encourages the publication of all research findings. It also now offers a platform to freely share scientific posters and slides.

Recently, we launched F1000Workspace, a comprehensive set of tools to help researchers write articles and grants, discover literature, manage references and reference libraries, and collaborate and prepare for publication.

The F1000 Specialists are not an external service; they are a growing group of young active supporters of our services who work with us in key institutions to support new users of our services and bring feedback that then contributes to future development decisions.

Tuesday, September 08, 2015

Predatory Publishing: A Modest Proposal

What many now refer to as predatory publishing first came to my attention 7 years ago, when I interviewed a publisher who — I had been told — was bombarding researchers with invitations to submit papers to, and sit on the editorial boards of, the hundreds of new OA journals it was launching. 

Since then I have undertaken a number of other such interviews, and with each interview the allegations have tended to become more worrying — e.g. that the publisher is levying article-processing charges but not actually sending papers out for review, that it is publishing junk science, that it is claiming to be a member of a publishing organisation when in reality it is not a member, that it is deliberately choosing journal titles that are the same, or very similar, to those of prestigious journals (or even directly cloning titles) in order to fool researchers into submitting papers to it etc. etc.

As the allegations became more serious I found myself repeatedly telling OA advocates that unless something was done to address the situation the movement would be confronted with a serious problem. But far too little has been done, and so the number of predatory publishers has continued to grow, and the cries of alarm are becoming more widespread.

Initially, the OA movement responded by saying that it was not a real problem because most so-called predatory journals had few if any papers in them, so there could be very few researchers affected.

Nevertheless, the number of publishers listed by Jeffrey Beall as “potential, possible, or probable predatory scholarly open-access publishers” has grown year by year. Since 2011 Beall’s list has increased from just 18 publishers to 693. One has to ask: why would there have been a 3,750% increase in this number if only a handful of people ever use the journals?
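
As a quick sanity check on that figure, the relative increase works out as follows. (This is a minimal illustrative calculation of my own, not anything taken from Beall’s reporting.)

```python
# Beall's list: 18 publishers in 2011, 693 at the time of writing.
old_count, new_count = 18, 693

# Relative increase, expressed as a percentage of the 2011 baseline.
increase_pct = (new_count - old_count) / old_count * 100
print(f"{increase_pct:.0f}%")  # prints "3750%"
```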

When it became harder to sweep the problem aside, OA advocates shifted ground, and began to argue that while there may be an issue it was only a problem for researchers in the developing world.

But is that response not simply another way of trying to suggest that there isn’t really a problem? Either way, why would the problem be any less important if the only victims were researchers based in the developing world?

In any case, I do not believe it to be an accurate characterisation. When a recent ABC Background Briefing examined the activities of one suspect publisher’s operations in Australia it concluded that there was a real problem down under. And Australia can hardly be described as a developing country.

Call me a sceptic

My own personal experience likewise suggests that the problem is somewhat more widespread and worrying than is generally acknowledged. I am regularly contacted by researchers who have fallen foul of dubious OA publishers. Yes, some of these researchers are based in the developing world, but a good number are based in the developed world, and some are even based in prestigious North American universities.

So call me a sceptic over claims that predatory publishing is not a serious issue, or that it is only impacting on those based in the developing world.

I’d also have to say that when I contact universities where those who have asked me for help are based, or big publishers whose journal titles have been used as bait to gull researchers into submitting to a predatory journal, I don’t get the feeling that there is much willingness to help the victims, to tackle the problem, or even to confront it.

For their part, OA advocates often counter that subscription publishers are also predatory, so why does Beall not include them in his list as well? While this may be true, it is not particularly helpful, or relevant, in the context of seeking a solution to the problem of predatory OA journals.

So we are left with a growing problem but little effort being put into resolving it.

What we do have is a white list run by the Directory of Open Access Journals (DOAJ), and a blacklist run by a single individual (Jeffrey Beall).

One problem with the white list approach is that it can too easily become an exclusive club (excluding, say, journals based in the developing world). Moreover, the management of DOAJ has not been trouble free. Last year, for instance, it had to remove over 650 journals from its database after it decided it needed to tighten up its selection criteria and ask publishers to re-apply for inclusion. This was necessary because it had become clear that predatory journals were finding their way into the database. But as predatory journal buster John Bohannon has pointed out, the real problem is that DOAJ doesn’t have sufficient resources to be very effective. DOAJ is, he says, “fighting an uphill battle to identify all of literature’s ‘fake journals’.”

As a lone individual, the challenge for Beall is that much greater. It is no surprise therefore that he and his blacklist are frequently (and often bitterly) criticised for including publishers without sufficient evidence that they are indeed predatory. In any case, add OA advocates, Beall is “anti-OA”, and so his list should be completely ignored. Of course, it is always much easier to criticise someone who is trying to solve a problem than to do something about it yourself.

So what is the solution? Personally, I think the problem needs to be approached from a different direction.

What is surely relevant here is that in order to practise their trade predatory publishers depend on the co-operation of researchers, not least because they have to persuade a sufficient number to sit on their editorial boards in order to have any credibility. Without an editorial board a journal will struggle to attract many submissions.

This suggests that if a journal is predatory then all those researchers sitting on its editorial and advisory boards are to some extent also predatory, or at least they are conspiring in the publisher’s predatory behaviour. After all, if members of the editorial board of a journal that was engaging in predatory activity wanted to end or curtail that activity they could join together and resign, or threaten to resign.

Yes, I know some researchers have their names listed on journal editorial boards without their permission, or perhaps even knowledge. But the majority do so because it looks good on their CV. And in accepting an invitation to be associated with a journal most ask far too few questions about the publisher, and do far too little research into its activities, before saying yes. ABC found over 200 Australian researchers sitting on the editorial boards of just one predatory publisher. I am confident that most if not all of these agreed to sit on the boards.

So my question is this: Do these researchers not have some responsibility for any predatory behaviour the publisher engages in? Personally, I think the answer is yes!

What to do?

So what to do? Here I have a modest proposal. I don’t know whether it is practical or feasible, but I make the proposal anyway, if only to try and get people to think more seriously about solutions rather than excuses.

Why does the OA movement not create a database containing all the names of researchers who sit on the editorial and/or advisory boards of the publishers on Beall’s list, along with the names of the journals with which they are associated? Such a database could perhaps serve a number of purposes:

·         It could be used as a way of cross checking the appropriateness of a publisher/journal being listed on Beall’s site. It would at least surely focus minds, and hopefully encourage editorial boards to demonstrate (if they can) that their publisher/journal has been inappropriately placed on Beall’s list, or do something about it, if only by resigning. To help trigger this process researchers listed in the database could be contacted and told that their name was in it.

·         The database could help those thinking of submitting to a journal listed in it to more easily find and contact members of its editorial board, and before submitting ask them to personally vouch for the quality of the review process. If things then went wrong the submitting researcher could take the issue up with those board members s/he had contacted. There is nothing quite like personal recommendation, and the personal responsibility that accompanies it.

·         Researchers could also search on the database before agreeing to sit on an editorial board as part of a due diligence process. If the publisher/journal is listed in the database they could contact board members and ask them to personally vouch for the quality of the journal.

·         Researchers could search the database for their own names in order to establish whether they have been listed on an editorial board without their permission or knowledge.

·         Such a database could also quickly reveal how many journals on Beall’s list a particular researcher was associated with.

·         If editorial board members’ institutions were included in the database regular Top 10 lists could be published showing the institutions that had the greatest number of board members of journals in Beall’s list. Would that not also focus minds?

·         And if countries were included Top 10 lists of those could be published too.

I am sure people would also come up with other uses for such a database.
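
To make the suggestion concrete, such a crowdsourced database could be as simple as a table of (researcher, institution, country, journal, publisher) records, against which all of the uses above become straightforward queries. The sketch below is purely illustrative; every name, journal and publisher in it is hypothetical, invented for the example.

```python
from collections import Counter

# Hypothetical crowdsourced records: none of these names are real.
# Each entry: (researcher, institution, country, journal, publisher)
board_members = [
    ("A. Example", "Univ X", "AU", "Intl J of Everything", "Acme Publishing"),
    ("B. Sample",  "Univ X", "AU", "Global J of Things",   "Acme Publishing"),
    ("C. Test",    "Univ Y", "US", "Intl J of Everything", "Acme Publishing"),
]

def journals_for(researcher):
    """Which listed journals is a given researcher associated with?"""
    return [journal for (r, _, _, journal, _) in board_members if r == researcher]

# A 'Top institutions' list: institutions ranked by number of board memberships.
top_institutions = Counter(inst for (_, inst, _, _, _) in board_members).most_common()

print(journals_for("A. Example"))  # prints "['Intl J of Everything']"
print(top_institutions)            # prints "[('Univ X', 2), ('Univ Y', 1)]"
```

The same handful of lines would answer the other questions too (a researcher checking whether their own name appears, or a count by country), which is partly why the task seems so well suited to crowdsourcing.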

As I say, I don’t know how practical my proposal is, or whether anyone would be willing to take it on — but it is worth noting that ABC has already produced a list of board members of the journals of one publisher (although without the name of the relevant journal attached). This suggests that it is feasible. In fact, creating such a database would be a great candidate for a crowdsourcing project.  

Above all, such an initiative would make an important point: responsibility for predatory behaviour needs to be pushed back to the research community.

As Cameron Neylon points out, we need to move beyond the point of seeing researchers as “hapless victims”.  They are active agents in scholarly communication, and when the publishing practices of journals with which they are associated turn out to be inadequate or deceptive researchers ought to take responsibility, not just point the finger at rogue publishers.

In any case, it is surely past time for the research community to step up and grasp this nettle.

On a more general note, creating public databases of researchers on the editorial and advisory boards of journals (both those considered predatory and those not considered so) would make the point that agreeing to be associated with a journal comes with responsibilities, that it is not just a way of padding a CV. 

Saturday, August 15, 2015

When email marketing campaigns go awry: Q&A with Austin Jelcick of Cyagen Biosciences

Earlier this week I received an unsolicited email message from a company called Cyagen Biosciences inviting me to cite its “animal model services” in my scientific publications. By doing so, I was told, I could earn a financial reward of $100 or more. And since the amount would be based on the Impact Factor (IF) of the journal in question, the figure could be as high as $3,000 — were I, for instance, to cite Cyagen in Science (IF of 30). 

The email surprised me for a number of reasons, not least because I am a journalist/blogger not a scientist. As such, I have never published a research paper in my life, and have no plans to do so. Moreover, I have only the vaguest idea of what an “animal model service” is, let alone how I would cite a company selling such a service in a scientific paper.

But mostly I was surprised that — at a time when thousands of researchers are calling for the abandonment of the Impact Factor — any company would want to tie its reputation to what is widely viewed as a sinking ship.

Curious as to why I had received such a message I searched on the Web for the company’s name, only to find that the link from Google to Cyagen’s home page delivered an error message.

Eventually locating an email address, I contacted the company and asked if it could confirm that the message I had received had been sent on its behalf (it appeared to have come from a direct marketing company called Vertical Response).

The next day I received a reply from Cyagen product manager Austin Jelcick, who explained that I had received the message “as part of our marketing campaign which is currently seeking to raise awareness within the scientific community for our citation rewards program.”

As I was associated with “several blogs and articles related to open access journals and publishing”, he added, it was assumed I would be interested in “our newly launched campaign to actively reward scientists for citing us in their materials and methods section while simultaneously encouraging them to submit into higher impact journals for increased awareness of both their study and our services offered.”

He added: “we felt that it would be beneficial to the researcher to receive a sort of ‘store credit’ for doing something they already must do as part of the publication process.”

Now intrigued, I invited Jelcick to do an email Q&A so that he could explain in more detail who the company was and why it had launched this campaign.

Very surprised by the offer

While I was swapping questions and answers with Jelcick by email the company’s campaign was starting to attract a good deal of commentary on the Web.

Yesterday, for instance, high profile physician and science writer Ben Goldacre published a blog post entitled, “So this company Cyagen is paying authors for citations in academic papers”.

Goldacre concluded, “Perhaps my gut reaction — that this feels dubious — is too puritanical. But I am certainly very surprised by the offer.”

Goldacre’s intervention also sparked a post over on Retraction Watch entitled, “Researchers, need $100? Just mention Cyagen in your paper!”

By now there was also a steady stream of comments from scientists on Twitter, expressing everything from puzzlement to outrage — see this for instance.

By late yesterday Cyagen clearly felt the need to make a public statement, which it did by means of a Q&A on Facebook, explaining: “Please find below some of the questions which were asked of us and our response which should help clear up the misunderstanding which has occurred about this promotion.”

The post went on to list seven questions and answers. What the company did not explain, however, is that these had been extracted from the interview I was still in the process of doing with Jelcick. That is, Cyagen did not cite me!

What has become clear is that the company believes that its email invitation has been misunderstood. Linking to the Facebook post from a comment on Goldacre’s blog, Jelcick went so far as to complain that Cyagen has become a victim of “some gross miscommunication”.

Richard Van Noorden appears to agree, saying on Twitter that the story has been “gleefully badly reported”. He explained: “you can’t get $100 by citing them. You get a discount voucher for their products”. He nevertheless suggests that Cyagen should withdraw the offer “pronto”.

It would seem that the mistake Cyagen made was to link its promotion to the much-maligned Impact Factor, which has become a red rag to many scientists. (See also the first comment below).

Anyway, below is the full list of 17 questions and answers that make up the interview I did with Jelcick. Some of the answers are a little repetitive, but given the confusion surrounding Cyagen’s email I have chosen not to edit them.

See what you think.

Wednesday, August 12, 2015

Open peer review at Collabra: Q&A with UC Press Director Alison Mudditt

Earlier this year University of California Press (UC Press) launched a new open access mega journal called Collabra. Initially focusing on three broad disciplinary areas (life and biomedical sciences, ecology and environmental science, and social and behavioural sciences), the journal will expand into other disciplines at a later date.

One of the distinctive features of Collabra is that its authors can choose to have the peer review reports signed by the reviewers and published alongside their papers, making them freely available for all to read — a process usually referred to as open peer review.

This contrasts with the traditional approach, in which the reviewers’ names are generally not disclosed to the authors, the authors’ names are not disclosed to the reviewers, and the reviewers’ reports are not made public (commonly referred to as “double-blind” peer review).

Since Collabra is offering open peer review on a voluntary basis it remains unclear how many papers will be published in this way, but the signs are encouraging: the authors of the first paper published by Collabra opted for open peer review, as have the majority of authors whose papers are currently being processed by the publisher. Moreover, no one has yet refused to be involved because open peer review is an option, and no one has expressed a concern about it.

Collabra’s first paper — Implicit Preferences for Straight People over Lesbian Women and Gay Men Weakened from 2006 to 2013 — was published on 23rd July, and the reviewers’ reports can be found here.

So how does open peer review work in practice and what issues does it raise? To find out I emailed some questions to UC Press Director Alison Mudditt, whose answers are published below.

RP: Presumably both the author and all the reviewers have to agree to open peer review before Collabra can publish the reviews? What percentage of the papers it publishes does Collabra expect will have the reviews published alongside?

AM: Authors choose open peer review as an option upon submission, so it is always their decision and as such they have already agreed in advance. Reviewers are made aware that authors have chosen this option and could opt to decline the review if they are unwilling to have their review comments made publicly available.

As a secondary option, whether or not open review has been chosen by the author, reviewers can sign their reviews. So it is possible to have reviewer comments be open, but the identity of the reviewer remain anonymous. Or, for that matter, have closed review, but reviewers sign their reviews. This is all described here.

With only one published article it is hard to project what the percentage will be, but at this point the majority of authors—for the papers currently being processed in our system—have opted for open review.

We are not targeting certain percentages, but rather want to put new options in front of people, especially given the numerous critiques of traditional closed peer review systems. This will not be for everyone, but we believe there’s much to be learned from experimentation with new models.

RP: Will Collabra make any effort to seek out reviewers who are comfortable with open peer review?

AM: The academic editors are selecting reviewers, and their top consideration will of course be the reviewer’s expertise for any given paper.

We make all the options and elements of Collabra clear when inviting external editors to be involved. Some editors are particularly interested in the open review option, and other editors have not commented on it.

No one has refused to be involved because it is an option or expressed a concern about this option.

RP: I assume that not all the correspondence is shared when Collabra publishes the reviews, and perhaps they might be edited in some way first (at least sometimes)? If so, what considerations/editorial rules are applied before making reviews public?

AM: Currently, the “open review file” is constituted by the reviewers’ comments on the reviewer form, the editor’s comments to the author based on the reviewers’ comments, and the author’s response—all as captured in our editorial system.

It is clear on the review form that there is an area for confidential comments to the editor that will not be shown to the author or included in the openly available comments. But, for the remainder of the form, it is made clear that comments may be seen by the author and used without editing.

What is not currently shown is any earlier version of the paper and any comments or tracked changes on that. We will continue to monitor this policy and will consider other options, if it seems that useful or important elements are being omitted by not including earlier versions/changes.

And, obviously, if any discussion occurs outside of the editorial system between a reviewer and an editor, that will not be captured.

As regards editorial rules and considerations for any edits or omissions, we would discuss that with the editors as they came up. It is hard to say in advance what that might be (other than any information which is confidential and not even being revealed in the paper), so we’ll deal with that on a case by case basis.

Naturally, we would opt to be transparent about this happening should it occur beyond normal confidentiality considerations. For now we will see how it goes, with it being made clear on the form that comments may be used as written.

RP: Having started down this road (and so given concentrated thought to the matter), what would Collabra say were the pros and cons of open peer review?

AM: Speaking on behalf of UC Press (I’m not sure it’s appropriate to speak as “Collabra” in this context), we think that the inner workings of the peer review process are, purely and simply, interesting for any reader, but in particular for people who would like to see more transparency in this process.

There is clearly an argument to be made that making things open (rather than, for example, the double blind process) will help to reduce biases, problematic opinions, or hierarchical sensitivities that can affect the review process.

Equally importantly, open review starts to demonstrate the value added by the review process and to recognize the contributions of reviewers to scholarship.

Finally, we all know that traditional peer review has not put a stop to whole disciplines being rocked by scandals of fabricated data and unquestioned results, and it’s possible that open peer review will actually help to improve the scholarly record.

On a related note, one of our other aims with Collabra is to get rid of the phrase “peer review lite”, which has plagued the type of review that Collabra (and other OA titles) employs.

We characterize our review criterion as being “selective for credibility only”—checking for the scientific, methodological, and ethical rigor of a paper, and removing, as much as humanly possible, more subjective reviewing criteria for novelty or anticipated impact. Open reviews will support this mission—to show that there is nothing “lite” about this kind of review (and in fact, sometimes quite the opposite).

It’s too early for us to be able to identify specific problems with open peer review for Collabra, although we are aware of studies suggesting that it may be harder to get reviewers and it may lengthen the review time. Our limited experience so far does not support either of these concerns.

The other cons of open peer review (as opposed to double-blind review) are clearly to do with concerns about bias, the highly variable nature of peer review, and the additional costs it could impose on an already overtaxed system.

For example, a reviewer might be worried about openly and critically reviewing a more senior author and believe there could be a negative effect on her own career.

Our hope is that a more open system will improve the integrity of the peer review process, but the reality is that any system will be subject to the biases of human nature—we just think that this is more likely to be surfaced through greater transparency.

RP: Does Collabra think that there are occasions when open peer review is inappropriate? If so, when and why?

AM: Anything of a confidential nature raised in peer review that does not make it into the published article should be carefully removed from any reviewer comments before they are made public.

That said, we (UC Press) are not really the drivers of how open peer review will evolve in Collabra or elsewhere. Since Collabra works only with external editors, editorial policies should emerge that are firmly based on the standards of each research community that publishes in Collabra.

If a community-driven majority standard emerged which stated that, in certain situations, open peer review was inappropriate, then we would respect such a decision.

RP: Are there any other learning points that have emerged as Collabra has sought to implement open peer review?

AM: It’s too early in the launch of Collabra to really be able to comment, although we have been pleasantly surprised at authors’ and reviewers’ willingness to consider the option of open peer review. That seems to be a great start for this concept.

An earlier Q&A with Alison Mudditt can be read here.

Wednesday, July 22, 2015

Emerald Group Publishing tests ZEN, increases prices: what does it mean?

When in July 2012 Research Councils UK (RCUK) announced its new open access (OA) policy it attracted considerable criticism.
Initially this criticism was directed at RCUK’s stated preference for gold OA, which universities feared would have significant cost implications for them. In response, RCUK offered to provide additional funding to pay for gold OA, and agreed that green OA can be used instead of gold (although RCUK continues to stress that it “prefers” gold).

At the same time, however, the funder doubled the permissible embargo period for green OA to 12 months for STM journals and 24 months for HSS journals. This sparked a second round of criticism, with OA advocates complaining that RCUK had succumbed to publisher lobbying. The lengthened embargoes, they argued, would encourage those publishers without an embargo to introduce one, and those who already had an embargo to lengthen it.

There was logic in the criticism: one rational response that profit-hungry publishers could be expected to make to the adjusted RCUK policy would be to dissuade authors from embracing green OA (by imposing a long embargo before papers could be made freely available), while encouraging them to take the money RCUK had put on the table and pay to publish their papers gold OA instead (thereby providing publishers with additional revenue).

It was therefore no great surprise when, in April 2013, Emerald Group Publishing — which until then had not had a green embargo — introduced one. Nor was it a surprise that it settled on the maximum period allowed by RCUK: 24 months.

It was likewise no surprise that Emerald’s move also attracted criticism, not just from OA advocates but (in May of that year) from members of the House of Commons Business, Innovation and Skills (BIS) Committee, which was at the time conducting an inquiry into open access.

When taking evidence from the then Minister of State for Universities and Science David Willetts, for instance, the MP for Northampton South Brian Binley said “We have received recent reports of a major British publisher revising its open access policy to require embargoes of 24 months, where previously it had required immediate unembargoed deposit in a repository.” Binley went on to ask if Willetts could therefore please have someone contact the publisher and investigate the matter.

At the time I also contacted Emerald. I wanted to know the precise details of its new policy and to establish who would be impacted by it. This proved a little difficult, but it turned out that Emerald had introduced a “deposit without embargo if you wish, but not if you must” policy — an approach pioneered by Elsevier in 2011, but which it recently abandoned.

While the wording of the Emerald policy may have changed a little since it was introduced, at the time of writing it appeared to be the same in substance: authors are told that they can post the pre-print or post-print version of any article they have submitted to an Emerald journal onto their personal website or institutional repository “with no payment or embargo period” — unless the author is subject to an OA mandate, in which case a 24 month embargo applies.

Monday, June 22, 2015

HEFCE, Elsevier, the “copy request” button, and the future of open access

At the 2001 meeting that launched the Budapest Open Access Initiative (BOAI) the newly-fledged OA movement outlined two strategies for making the scholarly literature freely available. Later dubbed green OA and gold OA, these are now the two primary means of providing open access, and both types have been mandated by research funders in the UK. For instance, in 2013 Research Councils UK (RCUK) introduced an OA policy that favours gold open access, and in 2014 the Higher Education Funding Council for England (HEFCE) announced what is essentially a green OA policy, which will come into force next year. So how does the future for open access look?

Just to remind ourselves: with gold OA, researchers publish their papers in an open access journal and the publisher makes them freely available on the Internet as a natural part of the publication process. With green OA, researchers continue to publish in subscription journals, but then self-archive a version of their work in an open repository — either a central repository like PubMed Central, or an institutional repository. Meanwhile, the official version of the paper (the version of record) remains behind a subscription paywall on the publisher’s site.

BOAI did not specify that OA journals should levy an article-processing charge (APC), but while OA advocates point out that most OA journals do not charge a fee, the reality (unless something changes) is that the pay-to-play model is set to dominate OA publishing.

Importantly, this means that although BOAI attendees assumed OA publishing would be less costly than the traditional subscription model, use of the APC will make scholarly publishing more expensive, certainly during the transition to open access (which could last indefinitely).

And to the chagrin of OA advocates, much of the revenue generated by APCs is currently being sucked up by traditional publishers like Elsevier and Wiley, especially through the use of hybrid OA.

In reviewing the figures for 2013-2014, for instance, Wellcome’s Robert Kiley reported that Elsevier and Wiley “represent some 40% of our total APC spend, and are responsible for 35% of all Trust-funded papers published under the APC model.” (74% of the papers concerned were published as hybrid OA).

The story is similar at RCUK. As the Times Higher noted in April: “Publishers Elsevier and Wiley have each received about £2 million in article processing charges from 55 institutions as a result of RCUK’s open access policy.” In total RCUK paid out £10m, which is in addition to the subscription fees universities are already paying.

In effect, it would seem, traditional publishers are in the process of appropriating gold OA, and doing so in a way that will not only ensure they maintain their current profit levels, but that will likely increase them. And the profits of scholarly publishers, OA advocates argue, are already obscenely high.

Almost OA

But green OA advocates maintain that this is not inevitable, and have long argued that if implemented wisely, and strategically, open access can squeeze out the excessive costs of scholarly publishing, and so reduce publisher profits. However, they insist, this will only happen if researchers self-archive their subscription papers rather than opt for pay-to-publish. If researchers do this, they say, publishers will have to compete with repositories for access provision, and so will be compelled to downsize their operations. This in turn will put downward pressure on costs (and thus any publishing fees). Only at the point where these costs have fallen, argue green OA advocates, should researchers consider paying to publish.