Tuesday, August 29, 2017

The Open Access Interviews: Rusty Speidel, The Center for Open Science

The Center for Open Science (COS) has announced today that six new preprint services have launched using COS’ preprints platform, taking the number of such services to 14. 

The announcement comes at a time when we are seeing a rising tide of preprint servers being launched, both by for-profit and non-profit organisations – a development all the more remarkable given scholarly publishers’ historic opposition to preprint servers. Indeed, so antagonistic to such services have publishers been that until recently they were often able to stop them in their tracks. 

In 1999, for instance, fierce opposition to the E-BIOMED proposal mooted by the then director of the US National Institutes of Health, Harold Varmus, caused it to be stillborn.

Publisher opposition also managed to bring to a halt an earlier initiative spearheaded by NIH administrator Errett Albritton. In the early 1960s, Albritton set up a series of Information Exchange Groups in different research areas to allow members to share “memos” (documents) with one another. Many of these memos were preprints of papers later published in journals.

Albritton’s project was greeted with angry complaints and editorials from publishers – including one from Nature decrying what it called the “inaccessibility, impermanence, illiteracy, uneven equality [quality?], and lack of considered judgment” of the documents being shared via “Dr Albritton’s print shop”. The death knell came in late 1966 when 13 biochemistry journals barred submissions of papers that had been shared as IEG memos.

Seen in this light, the physics preprint server arXiv, created in 1991 and now hugely popular, would appear to be an outlier.

The year the tide turned


But over the last five years or so, something significant seems to have changed. And the year the tide really turned was surely 2013. In February of that year, for instance, for-profit PeerJ launched a preprint service called PeerJ Preprints.

And later that year, non-profit Cold Spring Harbor Laboratory (CSHL) launched a preprint server for the biological sciences called bioRxiv. Rather than opposing bioRxiv, a number of biology journals responded by changing their policies on preprints, indicating that they no longer regard preprints as “prior publication” subject to the Ingelfinger rule (which states that findings previously published elsewhere, in other media or in other journals, cannot be accepted). Elsewhere, a growing number of funders are changing their policies on preprints and now encourage their use.

This has allowed bioRxiv to go from strength to strength. As of today, over 14,000 papers have been accepted by the preprint server, and growth appears to be exponential: the number of monthly submissions grew from more than 810 this March to more than 1,000 in July.

But perhaps the most interesting development of 2013 was the founding of the non-profit Center for Open Science. With funding from, amongst others, the philanthropic organisations The Laura and John Arnold Foundation and The Alfred P. Sloan Foundation, COS is building a range of services designed “to increase openness, integrity, and reproducibility of scientific research”.

At the heart of the COS project is the Open Science Framework (OSF). Speaking to me last year, COS executive director Brian Nosek explained that OSF consists of two main components – a back-end application framework and a front-end view (interface). “The back-end framework is an open-source, general set of tools and services that can be used to support virtually any service supporting the research lifecycle.”

One of the services offered is OSF Preprints. This allows for the creation of customised interfaces for specific disciplines, research topics, or other areas of common interest. As such, a community can post and share preprints via an interface configured to match its brand, editorial focus, licensing requirements, and taxonomy.
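To give a flavour of what this looks like in practice, here is a minimal sketch of how public preprint metadata might be read from the OSF back end. It assumes the JSON:API-style endpoint at api.osf.io/v2/preprints/ and the usual “data”/“attributes” response shape; both are assumptions that should be checked against the current OSF API documentation rather than taken as definitive.

```python
# Minimal sketch: list titles of recent public preprints from the OSF API.
# Assumption: the JSON:API endpoint at https://api.osf.io/v2/preprints/
# and its "data" / "attributes" / "title" response shape.
import requests

API_URL = "https://api.osf.io/v2/preprints/"


def list_recent_preprints(page_size=10):
    """Fetch one page of public preprints and return their titles."""
    response = requests.get(API_URL, params={"page[size]": page_size}, timeout=30)
    response.raise_for_status()
    payload = response.json()
    return [item["attributes"].get("title", "") for item in payload.get("data", [])]


if __name__ == "__main__":
    for title in list_recent_preprints():
        print(title)
```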

The new preprint servers announced by COS today include INA-Rxiv, the preprint server of Indonesia; LISSA, an open scholarly platform for library and information science; MindRxiv, a service for research on mind and contemplative practices; NutriXiv, a preprint service for the nutritional sciences; paleorXiv, a digital archive for Palaeontology; and SportRxiv, an open archive for sport and exercise-related research.

OSF preprint servers currently host more than 3,600 preprints, and COS anticipates that the number of customised servers running on its preprint platform will have risen to 20 by the end of the year. In addition, OSF uses SHARE to aggregate and index over two million search results from preprint providers hosted on other platforms such as arXiv, bioRxiv, and PeerJ. This means that searches can be conducted on the combined content of many different services.
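As a rough illustration of what such a combined search might look like, the sketch below queries SHARE’s aggregated index for a keyword. The endpoint URL, query parameters, and response fields are assumptions based on an Elasticsearch-style search interface, so they should be verified against the SHARE documentation before use.

```python
# Minimal sketch: keyword search over SHARE's aggregated preprint index.
# Assumptions: an Elasticsearch-style endpoint and the standard
# "hits" / "_source" response shape; verify against SHARE's docs.
import requests

SHARE_SEARCH_URL = "https://share.osf.io/api/v2/search/creativeworks/_search"


def search_preprints(query, size=10):
    """Run a simple keyword search and return matching titles."""
    response = requests.get(
        SHARE_SEARCH_URL, params={"q": query, "size": size}, timeout=30
    )
    response.raise_for_status()
    hits = response.json().get("hits", {}).get("hits", [])
    return [hit.get("_source", {}).get("title", "") for hit in hits]


if __name__ == "__main__":
    for title in search_preprints("reproducibility"):
        print(title)
```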

As interest in preprints grows, the sharing of them on dedicated servers (this list – put together in January – is already out of date) looks set to become routine. Doubtless, some (perhaps more than some) of the new services will fail, or fade, over time, but preprints are surely now set to become an important part of the scholarly communication landscape and to provide a much-needed boost to the open access movement.

What better time to touch base with COS? Today, therefore, I am publishing a Q&A with COS’ Marketing Manager, Rusty Speidel.

The interview begins …


RP: COS has just announced that six new preprint services are being launched using the OSF Preprints platform. Is there anything new or different about any of these services, or is it just that more communities are beginning to adopt the preprint habit? How many disciplinary communities are now committed to using the Open Science Framework (OSF) to run a preprint service?

RS: The main story is an expansion of preprints into more scholarly communities. There are now 14 preprint services using OSF, and there are more scheduled to be released. We expect that there could be 20 or more by the end of 2017.

One notable technical advancement is support for custom taxonomies so each service can determine its own disciplinary hierarchy. That is a feature addition accompanying the main story: the increasing diversity of communities that are starting to adopt this open platform for their services, and the pace with which they are coming in. These signal real change in attitudes towards open science, lock-in, and reclaiming control over research outputs.
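To make the custom-taxonomy idea concrete, here is a purely hypothetical sketch (not COS code) of how a service’s disciplinary hierarchy might be represented and flattened into the subject paths offered to submitting authors.

```python
# Hypothetical example: a custom disciplinary taxonomy for a preprint
# service, represented as nested dicts and flattened into subject paths.
CUSTOM_TAXONOMY = {
    "Sport and Exercise Science": {
        "Biomechanics": {},
        "Physiology": {
            "Endurance": {},
            "Strength and Conditioning": {},
        },
        "Sport Psychology": {},
    },
}


def subject_paths(taxonomy, prefix=()):
    """Yield each subject as a path from the top-level discipline down."""
    for name, children in taxonomy.items():
        path = prefix + (name,)
        yield " > ".join(path)
        yield from subject_paths(children, path)


if __name__ == "__main__":
    for path in subject_paths(CUSTOM_TAXONOMY):
        print(path)
```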

RP: You will correct me if I am wrong, but I don’t think COS envisaged offering a dedicated preprint platform when it launched in 2013. Have you been surprised at the sudden interest in preprints? Why do you think they are now so fashionable?

RS: Preprints was an early priority for COS because it hits a sweet spot for opening scholarly communication. Preprints are widely used in a couple of disciplines, largely acceptable in the present system of scholarly communication, and provide two key benefits for open science – accelerating communication and open access.

In the broader strategy for COS, preprints is one interface to OSF at an important stage of the research lifecycle. The paper is written and scholars want others to read it. COS is working on more interfaces for OSF at other stages of the research lifecycle such as registries for the onset of research projects, repositories for archiving and discovering data, and meetings for sharing results in “real-time” before they are written for publication. 

By providing interfaces for different stages of the lifecycle on a single platform, we can connect and open the entire lifecycle of research.

Non-profit or for-profit?


RP: It is not just COS driving the current preprint surge; we are seeing a growing number of such services being launched, some offered by for-profit organisations, some by non-profit organisations like COS. One issue that inevitably arises here is the relative merits of for-profit and non-profit solutions. This is a particularly hot topic in the scholarly communication space in the wake of Elsevier’s announcement that it is acquiring bepress.

On one side of the debate, people argue that for-profit organisations are better at generating revenue streams, and so more sustainable in the long term. By contrast, they say, non-profit organisations have to constantly rush around in pursuit of new grants and funding opportunities. This inevitably means that their long-term sustainability is always precarious.

On the other hand, some argue that for-profit concerns in the scholarly communication space tend to overcharge their customers and so extort the research community. What are your views on this, and how confident can COS be, as a non-profit, that it will remain sustainable into the future?

RS: We believe that both for-profit and non-profit organizations can provide useful services for the research community. There is nothing wrong with making money, and sustainability is a key consideration when weighing for-profit versus non-profit solutions. COS is a non-profit for a very important strategic reason:

We are building infrastructure to connect the entire research lifecycle and open all of scholarly research. If we were a for-profit, we would face a strong incentive that competes with that aim: lock-in. The best monetization strategy for a services organization is making it super convenient for users to transition between one’s own services, and super inconvenient to get out of that ecosystem.

As a non-profit, we do not face the same lock-in pressures. We achieve our mission by getting researchers to share their data or make their papers open access, no matter what service they use. That’s why we connect OSF with figshare, Google Drive, GitHub, Dropbox, etc. Researchers should use the services that are best for them. We can stay focused on making it easy for researchers to use the services they like, and switch if others emerge that they like more.

Sustainability is an issue for non-profits and for-profits. Both must provide services that are effective enough to earn investment by whoever is paying. In this space, both could use the same monetization strategies – e.g., charging universities rather than researchers themselves.

The thing that is particularly challenging for both non-profits and for-profits is scaling their services. Once at scale, sustainability is much easier because the community is relying on the services and willing to invest in them. Prior to achieving scale, start-ups, whether for-profit or non-profit, must convince investors/philanthropies that the goal is important and the plan is achievable. Large companies have a decided advantage in that they can use resources from other revenue streams as that loss-leader investment.

Is COS sustainable? We don’t know yet. We are scaling the services, and so are still in the zone of needing funders and philanthropists to invest in our mission and plan. If we achieve that investment and execute on our mission, then we believe that a transition to a sustainable, distributed cost model for maintaining public goods will be very achievable.

If we fail, we have minimized the risk for our users by putting aside $250,000 of our initial funding to ensure the preservation of all the content hosted by our services. At present costs and projections, that provides for 50+ years of preservation support.

Distributed or centralized?


RP: Another issue that always arises with Internet-based services is whether a distributed or a centralised approach is best. I was not surprised, therefore, when in February a number of funders called for a Centralised Service for life science preprints and asked ASAPbio to put out a Request for Applications (RFA) to create such a service. COS was one of the organisations that responded to the RFA. However, in April the Chan Zuckerberg Initiative announced that it will be supporting bioRxiv. As a result, ASAPbio said there would be a four-month suspension of its RFA process (which will presumably end pretty soon). 

What are your views on the merits of creating a central service like the one proposed by funders, what do you expect will become of the ASAPbio RFA, and what is the likelihood that we will indeed see a central service created?

RS: Actually, ASAPbio already officially ended that RFA.

We view “centralized” as meaning a standardized platform on which multiple, discrete services can be built. Standards are definitely enhanced by a common set of understandings, goals, requirements, formats, priorities, and roadmap. What made our ASAPbio proposal unique was its emphasis on a federated solution: community-based management of that set of standards and of the roadmap of features that would determine how the preprint service operates.

We believe that a central, community-managed platform can exist to support this diversity while standardizing around common technical priorities and innovations like submission, moderation, reviews, storage, and access, to name a few examples. I think that’s why arXiv and bioRxiv, among other preprint services, supported our ASAPbio application.

Our aim is to let many flowers bloom, and support their discoverability and interoperability. Because our solution was not a central service as initially conceived in the RFA, we are hoping that we can still find funding to support this emerging community of preprints.

RP: Such discussions are, of course, inevitably controversial and raise challenging technical and political issues. What do you think ought to be the key considerations when creating a central preprint service?

RS: Effective expansion of preprints requires modern, open infrastructure to operate preprint services and facilitate deposit and discovery. It also requires community-based governance to test innovations, share good practices, and establish norms and standards across scholarly domains so that technical innovations benefit the entire community.

Also, this infrastructure and content must be public goods, to ensure that users will always have free deposit of, and access to, preprints. The long-term costs of managing a community-based service will be substantially lower than those of other approaches because the interests of many stakeholder communities are satisfied with shared tools.

New issues


RP: Discussions about central or distributed approaches aside, the newly energised preprint movement has drawn attention to a number of new issues, including when and how preprints ought to be screened and retracted (here and here). As the use of preprints continues to grow, what do you think are the key issues arising, and how readily resolvable are they?

RS: Yes, there are many interesting issues emerging. Most of these challenges are very good ones for the community to address because they present an opportunity for scholarly communication to become more aligned with how knowledge actually accumulates. The core challenge is that the present system of scholarly communication is static, but scholarship is dynamic.

What do I mean? In the “traditional” system, there is no scholarship until it is published. Then that article is a singular, permanent object with occasional exceptions (e.g., retraction). Actual knowledge building doesn’t work that way. Research is dynamic, understanding is ever-changing, errors are discovered, improvements are made, etc. etc. etc. Our system of imposing static objects (publications) on a dynamic system creates inefficiency. A simple example is corrections: Making a correction to a published article is painful, slow, and ineffective.

In the “new” system, scholarly communication embraces dynamism. Papers can change over time, and versioning will help keep track of those changes. Preprints is just a first step in that evolution. There are lots of challenges to solve, but they are exactly the challenges that scholarly communication experts should be spending their time solving. Getting scholarly communication aligned with the reality of scholarship will be a boon to knowledge building.

RP: OA advocates have become increasingly concerned that legacy publishers are co-opting the open access movement. This, they say, will stifle innovation, allow publishers to continue to overcharge for their services, and hold the research community to ransom. For these critics, the early attraction of preprint servers was that they held out the promise of disrupting the status quo, and so would allow scholarly communication to finally adapt to the networked world.

However, I sense some concern emerging that the preprint movement will itself be co-opted by legacy publishers and that this will see preprint servers become no more than convenient pools for legacy publishers to fish in when looking for papers to publish (at extortionate rates). The fear is that preprint servers will therefore do no more than prop up the legacy journal, and so once again hamper the innovation sorely needed, and help publishers continue to overcharge for their services.

What are your views on this? And what role do you see preprint servers playing in the scholarly communication landscape going forward? To what extent do you think they will disrupt the status quo?

RS: It should surprise no one that existing publishers are moving into the space. If I were CEO of a legacy publisher, I would definitely move into the preprints market and connect it with my organization’s other services. It is the future of scholarly communication, and failing to do so would likely mean my company’s slow demise. So, in that sense, yes, we perceive preprints as highly disruptive to the status quo. Essentially, preprints is an accelerant on the transition to OA business models.

We don’t see a problem with commercial services promoting preprints. A healthy marketplace will have options for users to stimulate competition and innovation. Our organizational mission and concern is to make that marketplace open so that services are competing on quality, not on ability to achieve lock-in.

Incremental revolutionaries


RP: One thing that has focused people’s minds here was the recent announcement that COS has entered into a partnership with the American Psychological Association (APA). The announcement comes in the wake of news earlier this year that the APA had been issuing takedown notices to researchers who had posted their papers online. Some also believe that the APA has always been inclined to drag its feet over open access, and even to resist it (e.g. here and here).

On the other hand, some greeted news of the partnership as evidence that the APA is finally committed to embracing OA wholeheartedly. Which view do you subscribe to? And to what extent does COS expect potential partners to be as committed to openness as it is before working with them? Or would it be more accurate to say that COS is pragmatic about such matters?

RS: APA, like other publishers, plays an important role in scholarly communication. Our mission is to increase openness, integrity, and reproducibility of research. To meet that mission, we will work with any stakeholders that are willing and able to take proactive steps toward more openness.

APA is not prepared to go all-in on OA as far as we know. They still operate with subscription-based publishing and we don’t have any indication that they are soon to change that. But, APA is positive and proactive about other elements of open practices including preprints and open data.

APA will be offering badges for open practices, is using the OSF as its data host, has chosen PsyArXiv as its default preprint server, and is working to enable easy interaction between its journal submission system and PsyArXiv. These are important steps.
  
Ultimately, we are incremental revolutionaries. We recognize that stakeholders in scholarly communication have different positions and concerns about the implications of open science. We work with the reality of stakeholders’ existing positions to initiate a shift toward greater openness.

Simultaneously, we are building free, open infrastructure that can accelerate disruption of the status quo and shift the incentives for publishers and others to transition business models toward a more service-based marketplace.

RP: Thank you for taking time to answer my questions.


Rusty Speidel develops all of COS’ branding, messaging, and communications strategies and platforms. His primary focus is on spreading the word about open science practices and increasing adoption of those practices across the entire research community, from researchers to publishing partners. This includes maintaining COS’ web, social, conference, and communications infrastructure and developing relationships with influencers around the world interested in promoting open science.
