Recently I was contacted by a student from a Russian university who is writing a dissertation on the influence of open access on modern scientific communication. She sent me six questions. The questions and my answers to them are below.
Why does society need science to be open?
Q: It’s a rather common opinion (at least among Russian researchers) that the research community has access to all the materials it needs, and non-scientists are not interested in this information as they can’t understand it or use it. Why does society need science to be open?
A: Yes, I think these are common views amongst researchers everywhere. Much has been said and written about why the world needs open science but for me, there are essentially two main reasons: transparency and efficiency.
Transparency has become important if only because science appears to be facing a major credibility crisis right now. There are a number of reasons for this, not least the so-called reproducibility crisis (also referred to as the replication crisis) that has become apparent in recent years. In addition, we have seen a rise in research misconduct and detrimental research practices (which of course is related to the replication crisis). There is also increasing suspicion of science and scientists within society. The latter is part and parcel of a global loss of faith in professionals, a phenomenon captured in an oft-cited statement by UK politician Michael Gove – who in 2016 declared that people “have had enough of experts”.
Coupled with the “fake news” phenomenon we are experiencing today, this is a dangerous development, as it suggests that emotions, prejudice and ideology may increasingly be displacing facts. Make no mistake: the new scepticism about professionals and distrust of scientists has real-world implications.
In fact, the seeds of the loss of faith in scientists were sown some time ago, as a result of things like the MMR vaccine controversy, the exaggerated claims that we have seen scientists and pharmaceutical companies make about the efficacy of drugs like Vioxx (scholarly publisher Elsevier was associated with this activity by producing fake journals, apparently intended to promote drugs), and conflicting claims over genetically modified food. Additionally (in the US in particular), we have seen a growing gap between the public and scientists over creationism versus evolution, and political rejection of scientists’ warnings about global warming/climate change.
Mitigating the scepticism
We must hope that open science and the greater transparency it affords can play an important role in mitigating this scepticism and distrust of scientists. If, for instance, all research papers, and the data generated during the research process, were freely available online, scientific results could be checked.
And if we are talking about the wider issue of open science (rather than just open access and open data), then I would point out that the growth of clinical trials registries and the pre-registration of studies will increase transparency too. By providing public access to information about trials and studies, the greater transparency that results should help reduce or eliminate unethical practices like HARKing and P-hacking. It would also go some way to addressing the problem of positive publication bias, in which negative or null results are today far less likely to be published than positive results. Amongst other things, this bias helps pharmaceutical companies to hype their drugs inappropriately.
We can also hope to see increasing interest in opening up the entire research process – by, for instance, the use of open notebook science techniques. Here I am talking about the kind of things that Jean-Claude Bradley pioneered a decade ago. This too brings greater transparency.
In terms of making science more efficient, if research papers and data were all freely available online (particularly null results), it would be easier for scientists to avoid wasting public money by unknowingly repeating experiments. Freely available data also allows for cross-pollination between disciplines and enables other scientists to find patterns in data that the producers of the data did not, if only because these other scientists will be looking at the data from another angle.
Finally, if research papers and data were all made freely available it would be possible to deploy machines to text and data mine (TDM) them. Amongst other things, this would allow computers to provide far more substantive aid to researchers and, some argue, it would see machines start to make new scientific discoveries on their own. All these things would clearly make research more efficient.
How one makes TDM commonplace is, of course, a very different matter, not least because of the continuing (and perhaps intractable) barriers that copyright imposes.
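To make the idea of TDM a little more concrete, the sketch below is a minimal, purely illustrative example of the sort of simple analysis that becomes possible once papers are freely available as machine-readable text. It scans a hypothetical local folder of open access papers and counts how often chosen terms co-occur in the same sentence. The folder name and the terms are assumptions, not drawn from any real corpus, and a serious mining pipeline would use proper NLP tooling and, crucially, lawful access to the full texts.

```python
# A minimal sketch of the kind of analysis text and data mining (TDM) enables,
# assuming a hypothetical local folder ("oa_papers") of open access papers
# saved as plain-text files. It counts how often illustrative terms co-occur
# within a single sentence, a toy stand-in for literature-scale mining.
import re
from pathlib import Path
from itertools import combinations
from collections import Counter

TERMS = ["vaccine", "efficacy", "adverse"]   # illustrative terms, not from any real study
PAPERS_DIR = Path("oa_papers")               # hypothetical directory of .txt full texts

cooccurrence = Counter()
for paper in PAPERS_DIR.glob("*.txt"):
    text = paper.read_text(encoding="utf-8", errors="ignore").lower()
    # Naive sentence split; real TDM pipelines use proper sentence segmentation.
    for sentence in re.split(r"[.!?]", text):
        present = [t for t in TERMS if t in sentence]
        for pair in combinations(sorted(present), 2):
            cooccurrence[pair] += 1

for (a, b), n in cooccurrence.most_common():
    print(f"{a} + {b}: {n} co-occurring sentences")
```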
On the issue of non-scientists having access to research: I think the growth of citizen science suggests that it is no longer true (if it ever was) that members of the public have no need to access research, or that they cannot understand it.
True, most citizen science today consists of little more than recruiting members of the public to do grunt work (counting butterflies, bugs or birds, or staring at images of galaxies on their computer), and then having them hand the results over to professional scientists in the lab. That is, the “real” science continues to be undertaken by professionals. I would hope, however, that we can move beyond this. Citizens can also do scientific work – even, it would seem, a 9-year-old.
Open licensing and bronze OA
Q: Databases like Web of Science (WoS) and Scopus now indicate if articles are open access or not, but publishers often open some materials for a short period of time and then close them again. It seems to be a bit misleading when articles drift from open to closed status, and it also creates uncertainty over the current state of OA in different disciplines. Does open access require the use of open licenses? Can we call “bronze” access “open”?
A: You draw attention to a couple of serious problems. The term bronze OA (where articles are “made free-to-read on the publisher website, without an explicit open license.”) stems from a paper published in PeerJ earlier this year. The issue of papers being made OA only temporarily (which is far more likely if a licence has not been attached to a work) was highlighted by Stevan Harnad as long ago as 2006 when he talked about what he called “peek-a-boo OA”. This reminds us that many open access issues are long-standing and hard to resolve!
But is bronze access really “open”, and does OA require the use of open licences? That depends on your point of view, and your definition of open access!
When those who attended the 2002 BOAI meeting adopted the term open access and set out to define it, they did not specify the use of a licence. In fact, if one looks closely at the actual definition of open access, it becomes apparent that two important words are missing – namely “immediate” and “permanent”.
Thus according to BOAI, open access implies “its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.”
While the BOAI section on open access publishing suggests that publishers “will use copyright and other tools to ensure permanent open access to all the articles they publish” this is not in the actual definition. Moreover, as I say, no specific license was named.
Today OA advocates argue that the BOAI definition implies use of the most liberal Creative Commons licence (CC BY). But not everyone agrees. Perhaps part of the problem here is that at the time of the BOAI meeting the CC licences had not been released.
But I think the larger problem is that no one thought to create an official OA organisation, or foundation, in order to police the use of the term open access and/or monitor how it is used. This is an issue I raised in 2006 (here).
These omissions have allowed publishers to claim works are open access when – in the view of OA advocates – they are not. Consider, for instance, what the publisher Springer Nature says of OA. Open access, it asserts, “refers to free, unrestricted online access to research outputs such as journal articles and books. OA content is open to all, with no access fees.”
Whatever one’s views of the BOAI definition, Springer Nature’s definition of OA would appear to be a watered-down version.
The problem of licensing is all the greater with green OA. Since subscription publishers routinely acquire exclusive rights in the papers they publish, they are able to impose embargoes and so delay the moment when works are made open access. Green OA, therefore, cannot provide immediate open access. Moreover, it means publishers are (generally) able to control where papers can be self-archived, what licence is attached to them, and so what can be done with them, and how they can be used.
So OA can mean different things to different people, and the high percentage of bronze OA means that many papers are vulnerable to being re-enclosed.
Disciplinary differences
Q: Are there any differences in how open access should develop in different disciplines and which? For example, arXiv is a good instrument for physics and mathematics, but it seems that researchers in the humanities are not that motivated to publish preprints or use repositories at all.
A: Yes, there are differences between disciplines, and I think OA does need to develop differently in different fields. These differences are often cultural. For instance, physicists were sharing preprints long before the arrival of the internet, so arXiv was a natural extension of that sharing habit and gave them a head start.
By contrast, in engineering concern over patents and other intellectual property appears to have made open access a particularly hard sell.
In the case of biology and biomedical research, the problem has been less a cultural one than, I think, a consequence of publishers successfully resisting early OA initiatives. So, for instance, they succeeded in having a pre-internet preprint service that was developed at the US National Institutes of Health (NIH) in the 1960s closed down. A second NIH initiative, intended to create a similar service for the life sciences 30 years later, was emasculated before launch: in 1999, the then NIH director Harold Varmus proposed what he called E-BIOMED. Following heated criticism and effective lobbying by publishers, however, the preprint component of E-BIOMED was abandoned, and a watered-down version of the service was launched as PubMed Central in 2000.
In some cases, we have seen a combination of cultural and practical considerations hamper open access. In the humanities, for instance, the nature of the research outputs produced by humanities scholars, and the primary format they use (the monograph), has made progress difficult. Books do not lend themselves to digital sharing as readily as scientific papers, both because of their length and because they are harder to read online. Additionally, the arguments made in the works of humanities scholars are often more personal in nature. Originality is often seen to lie in reimagining and reinterpreting known facts rather than producing a welter of new facts (certainly for historians).
For this reason, the most important issue open access raises for humanists is that of licensing. This has become all too evident as institutions and funders have begun to consolidate around the idea that OA works must have a CC BY licence attached to them. For humanists, this raises a number of issues, including a fear that if derivative use is permitted their work will be plagiarised, and concern that they could suffer reputational damage if it is translated poorly or incorrectly.
There are also concerns that authors’ arguments could be deliberately distorted. (For this reason, humanities scholars dislike not just CC BY, but any licence that permits derivative use). These issues are outlined in a recent discussion document published by the UK’s Royal Historical Society (here).
Finally, we could note that many journals in the humanities are published by learned societies that depend on the subscription income their journals earn to survive. And since funding is far more limited in the humanities, the opportunity to charge APCs is often minimal. See here for a further discussion of this.
Green OA
Q: Four years ago, green OA seemed to be a good alternative, but where does it stand today? For example, in the field of media communication (which I am studying as part of my dissertation) authors who publish in the top journals almost never post their papers in a repository (even after the embargo period expires). How can this problem be solved? Is there still any place and role for green OA?
A: It’s true that most researchers have shown little interest in self-archiving in institutional repositories (IRs), although, as you point out, physicists have long been happy to deposit their papers in arXiv. We could also note that most researchers routinely post their work in commercial repository services like ResearchGate and Academia.edu (often in breach of copyright). I think the reluctance to use IRs is partly because depositing in one has been presented to scholars as some kind of moral duty, with librarians pestering them to death with emails and social media messages urging them to do so.
But perhaps the real issue here is that librarians have signally failed to make institutional repositories either interesting or useful. One need only compare the benefits of depositing a scholarly work in a commercial repository service like Academia.edu – where it may attract a lot of attention from other researchers and allow the author to network in advantageous ways – with depositing in an institutional repository, where the work may wallow in obscurity and serve only to help universities step up the monitoring and Taylorism to which they increasingly subject researchers today.
How can the problem of low deposit rates be addressed? OA advocates argue that the solution is to force researchers to deposit by mandating them to do so. Unfortunately, one consequence of this is that funders and institutions have been introducing ever more draconian deposit mandates. Some worry about the implications of this for academic freedom (which is a growing concern today – see here, here, here), especially when the mandate requires works to be made available with a liberal licence attached. This mandatory approach is at its most oppressive today in the UK. The problem I see is that, rather than winning hearts and minds, this tends to alienate researchers from open access.
I have outlined my views on this issue here. Personally, I think researchers should be persuaded to embrace OA, not forced to do so.
In Baden-Württemberg in Germany, meanwhile, a number of professors have challenged a mandatory deposit requirement on the grounds that it infringes their academic freedom.
Institutional repositories and commercial repository services aside, we have recently seen renewed interest in preprint servers. This has seen the successful launch of bioRxiv and the emergence of a growing number of new servers using the Open Science Framework preprint platform. (See here for instance).
These servers are often set up in opposition to publishers, but in fact they would seem to be as vulnerable to being co-opted by legacy publishers as the wider open access movement has been. Many of these servers may become little more than submission tools for legacy journals, for instance.
On the other hand, if the new generation of preprint servers were to encourage a wave of scholar-led overlay journals like Discrete Analysis to be launched, one could imagine a different scenario playing out.
Back to your question: Is there still a place and role for green OA? I really don’t know. Much will depend on how events unfold (as I discuss in my answer to your last question), and the success of the new preprint servers would seem to be an important factor.
Another development to consider here is the move by Elsevier to acquire repository services like bepress and SSRN. This could see a revival of IRs and green OA and have a big impact on the preprint scene. But what that impact would be remains uncertain.
For instance, I was struck by some remarks made by Roger Schonfeld on The Scholarly Kitchen blog recently. He said: “In acquiring SSRN and bepress, Elsevier is also developing the opportunity to make an end run around publishers with which it competes. Through these services, it now has the ability to substitute the preprint for other publishers’ version of record in providing seamless access to an increasing share of the published literature.”
That IRs could become a competitive tool for publishers would be a strange turnaround given that they were conceived as tools to subvert publishers. Not only would it allow legacy publishers to control and dominate green OA in a more innovative way than they have done heretofore (by, for instance, imposing embargoes), it would also help them to further embed themselves into the new OA environment and so continue to charge unjustifiably high prices for the services they provide. This is a matter of public concern since the bill is ultimately picked up by the taxpayer.
APC model
Q: How can developing countries deal with the APC model of OA that is now emerging in Europe and seems likely to grow even faster in the next two years? For example, in Russia, only a very few top universities can pay $1,000 to $3,000 per article, and while we can use local journals these are usually published in Russian. This could see us effectively having to work outside the global scientific communication system. In an APC world, we would be able to read more research but could struggle to get our own work in front of a global audience. You mentioned this problem in a recent interview. Do you have any idea how to solve this problem? What should developing countries do?
A: Yes, the rise of pay-to-publish gold OA is a real problem, especially for less wealthy countries. As you know, there are growing calls in the global North to flip all subscription journals to gold OA, a strategy that is being spearheaded by the OA2020 initiative. If this “global flip” were to become a reality many in the developing world could expect to see paywalls replaced by publication walls.
Interestingly, academic negotiators are hoping to leverage the flip strategy to also force publishers to lower their prices – something publishers have shown little willingness to do. The plan may, therefore, already be hitting the rocks. In Germany, for instance, there is a long-running stand-off between the negotiating team known as Projekt Deal and Elsevier, and I have yet to see any real sign that the publisher plans to lower its prices.
I was, therefore, struck to read, in a report about a recent European meeting of OA negotiators, a quote from Gerard Meijer, director of the Fritz-Haber Institute (and a member of Projekt Deal), in which he said: “If we keep moving at this pace, we’ll never reach our goals.” The reporter added that this confirms “that a future plan might not include the academic publishers whatsoever.”
And in a post-meeting press release published by the German Rectors Conference (HRK) – which organised the meeting – HRK president Horst Hippler said: “We see that the transition to open access is too slow, and I am utterly upset and concerned about this. It was broadly echoed in the meeting, that the limits of partnership of academic institutions with these large publishers have been reached.”
That there were apparently no representatives for OA2020 at the meeting makes me wonder if a new rift is opening up in the open access movement – with some happy to continue negotiating OA Big Deals with legacy publishers, while others begin to look for more radical alternatives in which traditional publishers play no part. In any case, it seems likely that a fault line will develop in the OA movement around this issue.
So, what should developing countries do? Egypt’s Mahmoud Khalifa suggests two possible strategies. Less-wealthy countries could, for instance, focus on building up their own local low-cost journals (perhaps using platinum OA, where neither an access nor a publishing fee is charged). There would still be the language issue to consider, but perhaps this will be resolved over time as a result of improvements to machine translation technology?
The larger problem, however, would seem to be one of visibility. Since local journals are not normally indexed in WoS and Scopus (95% of the articles indexed in WoS in 2017 were in English, for instance), papers published in them will struggle to get the audience they deserve.
Khalifa’s second suggestion, therefore, is for developing countries to create their own national and regional indices. However, this would presumably require substantial funding and most funders and research institutions in the developing world seem more focussed on having their researchers publish in international journals than developing local solutions.
How much of a solution local indices would be is, in any case, not immediately clear to me. As you will know, in 2009 a Russian Science Citation Index (RSCI) was created. Wikipedia reports that only around 5% of the journals in RSCI are in global databases. But to what extent does RSCI increase the visibility of Russian journals? The Wikipedia entry suggests it does, but I don’t know.
Another possibility is that if a wave of new OA platinum journals emerged in the global North, presumably researchers in the developing world could publish in them at no cost, and perhaps these journals would stand a better chance of being indexed in international services like WoS and Scopus. It would not be an ideal solution, but it strikes me as a possibility.
In short, there are things the developing world could and should be doing, but unless the developed world takes account of the needs of less-wealthy countries as it sets about creating the new OA infrastructure it could end up making the deeply unsatisfactory situation that these countries find themselves in today far worse.
One gets a sense of how discriminatory the new environment could end up being if one considers Elsevier’s response to European demands for gold OA. Last year the publisher proposed what it calls region-specific OA. This would see those countries able and/or willing to pay for gold OA provided with superior access to research, while those unable or unwilling to pay would get delayed access at best – a proposal that seems to me to be fundamentally at odds with the vision outlined at BOAI.
Future developments
Q: How do you see the further development of OA around the world? How will the new scholarly communication infrastructure be arranged in 5-7, 10-15 years? Which new services/platforms need to appear to change the system/make it more convenient/innovative?
A: I really would not want to make any predictions about the future of scholarly communication. I believe we are in a very volatile historical moment and everything is currently up for grabs. I can even imagine us seeing a revival of interest in the subscription journal.
That said, I have suggested some possible futures above and I think a number of developments are worth watching. For instance, the preprint server resurgence I mentioned, used in conjunction with post-publication peer review (PPPR) and overlay journals, could see a far more affordable, fair, effective, and transparent system emerge, particularly if these journals were platinum OA.
So it could be that in the future most papers will start their life on a preprint server, be subjected to PPPR, and then perhaps be pulled into a scholar-led overlay journal like Discrete Analysis or Quantum.
On the other hand, the growing interest in Europe in negotiating OA Big Deals with legacy publishers (a topic I have discussed in some detail here) would seem likely to simply replace one overpriced system with another (albeit an open access one).
And as I noted, if legacy publishers were persuaded to flip all their subscription journals to OA (the “global flip”) the new system would pose a particular challenge for those in the developing world – for the reasons outlined above and in the Q&As I did on the topic recently (here). However, unless most countries signed up to a global flip it is hard to see how it could succeed.
At the same time, we can see a growing trend for research funders to build their own publishing platforms (currently mainly using the technology developed by F1000Research). This is the route being taken by The Wellcome Trust and The Gates Foundation, for instance. And the EU recently published a tender as the first step in commissioning a third party to build a Europe-wide platform.
We can perhaps assume that these new platforms would introduce a less-costly publishing system than one provided by commercial publishers, and they might allow the research community to take back ownership of scholarly communication. But it is unclear whether many researchers will find these services any more attractive than the institutional repositories they have spurned. We might also want to worry about the implications (and possible unintended consequences) of funders becoming their own publishers.
To my mind, the greatest danger the research community faces today comes from legacy publishers’ current moves to insert themselves directly into the research workflow process, by creating and acquiring new workflow tools. This could allow them to lock themselves not just into the new OA environment, but into the entire research process. This is a topic that Schonfeld takes a particular interest in. See his thoughts on the topic here for instance. Personally, I cannot imagine a scenario in which this would turn out to be a desirable development.
We should also note that even if the research community took back the publishing process – by means, for instance, of building out funder publishing platforms and/or launching scholar-led platinum OA journals – and all future research was made available on an open access basis, publishers would still effectively “own” the huge backfiles of research papers that they have amassed over time (or at least own them until the copyright in them expired).
This gives them a huge advantage, not just in continuing to sell access to research, but in creating and acquiring workflow solutions.
This suggests that governments really ought to be thinking about taking back ownership of these backfiles. Unfortunately, however, the current political climate makes it extremely unlikely that this will happen.
Platform capitalism?
Finally, we should consider the role that sites like Sci-Hub will play going forward. Right now, this is not clear. Logically, these sites would seem to pose a serious future threat to publishers (assuming publishers fail in their attempts to have them forcibly shut down, or to convince institutions to accept a technical fix).
On the other hand, although Sci-Hub often appears to be the only place where those in the developing world can access international research papers, since they cannot afford to buy access from publishers in any case, this activity may actually pose no threat to publishers’ revenues. Likewise, I do not think there is any evidence that such sites are triggering subscription cancellations in the global North, even though researchers in these countries appear to be regular users of Sci-Hub too.
On this last point, we could note that in its recent IPO prospectus (an IPO that was cancelled at the last minute) Springer Nature said: “We believe that our subscription customers access SciHub in parallel, but not as a replacement to, our traditional subscription services. To our knowledge the availability of our content through these aggregators has not resulted in subscription terminations.”
So, there are a number of different possible futures. However, if I were asked to wager, I would bet that the most likely scenario is that legacy publishers will continue to control and dominate scholarly communication (however they achieve this), and that the taxpayer will continue to pay through the nose for the services they provide.
I will end by referring you to a recent article by Philip Mirowski. While I don’t agree with everything he says, I think he is probably right to argue that many of the problems science faces today are a consequence of neoliberalism, and that open science is itself turning out to be a neoliberal phenomenon, one that could see the research process subject to its own form of platform capitalism.
This is not what those attending BOAI envisaged, but it could easily end up being the outcome of the movement that BOAI launched. Consequently, the world may get near universal open access, but in the process of bringing that about the research community will likely miss a big window of opportunity to rid itself of what critics like to refer to as the “ruthless capitalists” of scholarly publishing.
Comments
It seems like a cat-and-mouse game, where the cat is the legacy publishers and the mouse is OA advocates and academia... New trends and new solutions for achieving OA are captured by the same old publishers to keep the system as it is. Result: no problem is solved, only moved.
On the topic of public interest in reading scientific papers, one argument is missing.
Patients do read relevant clinical and preclinical research papers; in fact, many are desperate to. Unfortunately, in the medical literature the paywall is the rule, partly because doctors see no need for patients to stick their noses into material that only the elite of society can comprehend. Additionally, clinical researchers still debate whether preprints are the work of Satan.
Leonid, you are, of course, correct. In an interview I did with Editage in 2015, I described my own experience of trying to get information about a (minor) procedure a surgeon wanted to perform on me.
Thanks Richard, a fascinating post with good questions and answers! I think you make a good point about the 'global flip' and how problematic this would be for different developing countries/regions. It seems, at least from the South Asian countries I've worked in (probably different in Africa), that platinum/diamond (or very-low-APC) OA is already the norm there, and there is plenty of scope for these national journals to be developed further and provide an outlet for national research... if only universities in those countries could stop pressuring researchers to only publish in 'high impact factor' and 'international' journals.
Also interested to hear about the Russian Citation Index. I stumbled upon two other regional indexes recently - the Korean Citation Index (http://www.kci.go.kr/kciportal/landing/index.kci) and the Thai Citation Index (https://www.kmutt.ac.th/jif/public_html/announcement_58.php) - very interesting to see non-English speaking countries developing these alternative indexes. I wouldn't be surprised if there was also a Chinese version?
I have to agree that libraries have done a poor job with IRs. As it happens, I posted the following to the lib-licence list the other day, in response to a post about the Canadian Association of Research Libraries (CARL) releasing "Journal Subscription Cost Data", describing my attempts to get access to it from https://www.frdr.ca/repo/handle/doi:10.20383/101.033
"It's great that this information is being collected and made available, but the method of doing so shows why IRs still have such a long way to go to be useful to the academic community.
The DOI listed takes me to a metadata page where the first download option I can see is for the citation, rather than the dataset. There is no obvious link to the dataset on my screen, although I can see the title twice.
Once I scroll down, the first link I see (other than to Creative Commons) is helpfully labelled "Endpoint and path to dataset", a phrase used by no human, ever. If I click on this, it takes me to a Globus login page, in another tab. Do most people know what this is? My institution isn't on the list, so I'm not sure how best to log in. Not sure why I should have to. Isn't this Open Data?
So I went back to the metadata page (couldn't use "Back" because I was in a different tab). Under the only obvious link (that took me nowhere), there are two headings: "submitted data" and "globus_metadata.json". They both have the little cloud icon next to them that I have learnt takes me to Globus and a log in I don't have. The json link also has a different download-looking arrow next to it, but when I click on that I get a whole bunch of unformatted text in my browser window.
So I go back again. I click on the + next to "submitted data" and I finally see the files that I want, with file extensions I can understand. And that I can download. That was a lot of clicking and scrolling, and frankly I almost gave up a couple of times.
I don't want to pick on this IR in particular (although I was kind of shocked to realise that it has only been developed in the last 2 years, because it looks like something from 2003), because this is a pretty typical user-unfriendly experience that you get from most library systems and IRs. Yes, sharing data is complicated, but does it have to be this hard? If Amazon made it this tough to buy a book, Jeff Bezos would still be working out of his garage."
On a different topic, I don't understand this fuss about unauthorised translations. If you can't speak or read another language well enough to translate your work yourself, how can you tell if the translation is accurate anyway? We've all read awful translations by established publishers, so that won't protect you. And once it's out there, there's no bringing it back.
Richard, to the first question (which is one I often deal with), another answer is that the audience is always far larger than you imagine. Consider: nearly 50% of citizens from the 35 nations that make up the OECD are now educated to Bachelor's degree level and are therefore capable of reading and understanding a lot of the scholarly literature. A great number go on to practise what they learned in industry and would really benefit from continued access to novel research in their field. Proof that the demand exists can be seen in reports showing the extent to which freely accessible research is accessed. In the OECD's own experience, readership increased significantly when we adopted a freemium open access publishing model; today 85% of our readership is beyond our subscription base.