A frequent criticism of Open Access (OA) is that it will lead to the traditional peer review process being abandoned, with scientific papers simply thrown on to the Web without being subjected to any quality control or independent assessment. Is this likely? If so, would it matter?
The argument that OA threatens peer review is most often made by scientific publishers. They do so, argue OA advocates, not out of any genuine concern, but in the hope that by alarming people they can ward off the growing calls for research funders to introduce mandates requiring that all the research they fund is made freely available on the Internet.
Their real motive, critics add, is simply to protect the substantial profits that they make from scientific publishing.
Whatever the truth, there is no doubt that STM publishers are currently very keen to derail initiatives like the US Federal Research Public Access Act (FRPAA) — legislation that, if introduced, would require all US Government agencies with annual extramural research expenditures of over $100 million to make manuscripts of journal articles stemming from research they have funded publicly available on the Internet.
And a primary reason publishers give for their opposition to such initiatives is that they would "adversely impact the existing peer review system."
What do publishers mean by peer review? Wikipedia describes it as the process of "subjecting an author's scholarly work or ideas to the scrutiny of others who are experts in the field. It is used primarily by publishers, to select and to screen submitted manuscripts, and by funding agencies, to decide the awarding of monies for research. The peer review process is aimed at getting authors to meet the standards of their discipline and of science generally."
In other words, when a researcher submits a paper for publication in a scholarly journal the editor will ask a number of other researchers in the field whether the submitted paper warrants publication.
And Open Access is the process of making scholarly papers freely available on the Internet.
There are two ways of achieving OA: Researchers can publish their paper in an OA journal like PLoS Biology which, rather than charging readers (or their institutions) a subscription to read the contents of the journal, charges authors (or, more often, their funders) to publish the paper.
Alternatively, researchers can publish in a traditional subscription-based journal like Nature or Science and then make their paper freely available on the Web by self-archiving it in their institutional repository (IR).
Since both methods still require that papers are peer reviewed, OA advocates point out, publisher claims that making research OA necessitates forgoing the peer review process are factually inaccurate.
And while it is true that some researchers also post their preprints in IRs prior to having them peer reviewed, they add, this is done solely in order to make their research available more quickly, not to avoid peer review. As OA advocate Stevan Harnad frequently points out, OA means "free online access to peer-reviewed research (after — and sometimes before — peer review), not to research free of peer review."
There is, however, a second strand to publishers' claims that OA threatens peer review. If OA is forced on them, they say, they will not be able to survive financially, either because they will discover that there is no stable long-term business model for OA publishing, or because the increasing number of papers researchers post in institutional repositories will cause academic institutions to cancel their journal subscriptions. This poses a threat to peer review, they add, since if publishers exit the market there will be no one left to manage the process.
However, these claims are also rejected by OA advocates, who argue that most publishers have already accommodated themselves to self-archiving. Indeed, they add, there is no indication at all that self-archiving negatively impacts journal subscriptions. Nor is there any reason, they say, to believe that a sustainable OA business model cannot be found.
Far from perfect
But supposing publishers are right, and OA does eventually cause peer review to be abandoned? Would it matter?
After all, while researchers and publishers say they set great store by it, peer review is far from perfect. In September 2000, for instance, the UK Parliamentary Office of Science and Technology (POST) pointed out that many view peer review as "an inherently conservative process … [that] … encourages the emergence of self-serving cliques of reviewers, who are more likely to review each others' grant proposals and publications favourably than those submitted by researchers from outside the group."
Publishers have also been known to concede that there are problems with peer review. Writing in 1997, for instance, the then editor of the British Medical Journal (BMJ), Richard Smith, described peer review as "expensive, slow, prone to bias, open to abuse, possibly anti-innovatory, and unable to detect fraud." He added: "We also know that the published papers that emerge from the process are often grossly deficient."
In fact, it seems that the most that can be said of peer review is that we have failed to come up with anything better. Following Science's decision to retract the papers it had published by Dr Hwang Woo-suk — after it was discovered that he had faked claims to have obtained stem cells from cloned human embryos — publications consultant Liz Wager, for instance, said of peer review: "it's a lousy system but it's the best one we have."
True, there have been some attempts to improve things. One frequent criticism of peer review, for instance, is that since the scientists who review submitted papers do so anonymously there is little accountability — no doubt assisting "self-serving cliques of reviewers" to arise.
For this reason a few more enlightened publishers have tried to make the process more transparent. In 1999, for instance, the BMJ announced that it would in future "identify to authors the names of those who have reviewed their papers, including the names of our in house editorial and statistical advisers."
And in June 2006, Nature also appeared to have accepted that greater openness would help, announcing an experiment in which some of the papers submitted to the journal would undergo the traditional confidential peer review process while also being placed on the Web for open, identifiable public comment.
The aim, Nature said, is to "test the quality of unsolicited comments made during an open peer review process, against the traditional process where no unsolicited comments [are made]".
Too little too late?
Nature's experiment is not a sign that the journal is contemplating abandoning peer review — it is simply exploring what is generally called "open review." We should add that open review is not the same thing as Open Access, since Nature remains a traditional journal based on a subscription publishing model (although it does permit authors to self-archive papers that they have published in the journal six months after publication).
But are traditional journals like Nature in danger of doing too little too late, and being left behind? For in recent months OA journals have also been exploring new ways of evaluating scientific papers. And they are taking a far more radical approach.
In August, for instance, OA publisher Public Library of Science launched a new journal called PLoS ONE. Papers submitted to PLoS ONE still undergo peer review, albeit a less rigorous form. Once published, however, the papers are also subjected to post-publication review on the Web.
As the managing editor of PLoS ONE Chris Surridge explained to me when I interviewed him earlier this year, PLoS ONE referees are asked to answer a simpler question than that asked by traditional peer review. That question is: "Has the science in this paper been done well enough to warrant it being entered into the scientific literature as a whole?"
The key point, added Surridge, is that PLoS ONE does not believe that peer review should end on publication. "Every paper will have a discussion thread attached to it. We are also developing ways to allow people to directly annotate the papers themselves."
What PLoS ONE and Nature have in common, of course, is that they are both experimenting with new types of peer review. However, the PLoS ONE model differs from Nature's in a number of significant ways. First, it will utilise a less rigorous form of peer review prior to publication. Second, the open review part of the process will take place not in parallel with the traditional review stage, but after publication. Third, once published the papers will be OA. As such, they will not be locked behind a financial firewall, available only to subscribers.
It is worth making that last point because Nature's ability to experiment with peer review is surely constrained by the limited accessibility of its papers.
For this reason alone, Nature's approach will inevitably have to be more cautious than the one an OA journal like PLoS ONE can adopt, even though it clearly acknowledges that profound questions need to be asked about traditional peer review — as evidenced by the online debate Nature has been holding on the topic.
Root and branch
As a consequence, while traditional publishers have to keep playing up the importance of the current peer review system while tinkering at the edges, OA journals are able to take a root-and-branch approach to reform.
This does, however, leave the OA Movement vulnerable to allegations from those who oppose initiatives like the FRPAA that it is bent on destroying the scientific process.
Yet the more the failings of traditional peer review are examined, the more radical seem to be the solutions that people feel are necessary. In September, for instance, a group of UK academics keen to improve the way in which scientific research is evaluated launched a new OA journal called Philica.
Unlike both Nature and PLoS ONE, Philica has no editors, and papers are published immediately on submission — without even a cursory review process. Instead, the entire evaluation process takes place after publication, with reviews displayed at the end of each paper.
As such, the aim of the review process is not to decide whether or not to publish a paper, but to provide potential readers with guidance on its importance and quality, enabling particularly popular or unpopular works to be easily identified.
Importantly, argues Philica, its approach means that reviewers cannot suppress ideas if they disagree with them.
Specifically, the evaluation process consists of anonymous, recursively weighted reviews. As the journal's FAQ explains, "When people review Philica entries, they rate them on three criteria: originality, importance and overall quality. These ratings are on a scale of 1-7, where 1 is very poor, 7 is very good and 4, the mid-point, represents a rating of middling quality."
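Philica's FAQ does not spell out the weighting algorithm, but the idea behind "recursively weighted" reviews can be sketched: a review counts for more when the reviewer's own entries have themselves been rated highly. The Python function below is a hypothetical illustration of that idea (the names `weighted_average` and `reviewer_standing` are invented here), not Philica's actual code.

```python
# Hypothetical sketch of a recursively weighted rating, in the spirit of
# Philica's description: a reviewer's score counts for more when that
# reviewer's own entries have themselves been rated highly.
# Illustration only -- not Philica's actual algorithm.

def weighted_average(reviews, reviewer_standing):
    """reviews: list of (reviewer, score) pairs, scores on Philica's 1-7 scale.
    reviewer_standing: maps a reviewer to a weight (e.g. the average rating
    that reviewer's own work has received); unknown reviewers get weight 1.0.
    Returns the weighted mean score, or None if there are no reviews."""
    total = sum(reviewer_standing.get(r, 1.0) * score for r, score in reviews)
    weight = sum(reviewer_standing.get(r, 1.0) for r, _ in reviews)
    return total / weight if weight else None

# A well-rated reviewer's score of 6 outweighs a weaker reviewer's 3:
rating = weighted_average([("alice", 6), ("bob", 3)],
                          {"alice": 2.0, "bob": 1.0})
print(rating)  # → 5.0
```

The "recursive" element would come from feeding the output back in: each reviewer's standing is itself a weighted average of the ratings their own entries received, so standings and scores settle together over repeated rounds.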
Moreover, unlike PLoS ONE, Philica does not charge author-side fees. This is possible, the FAQ explains, because the overheads are minimal. "Philica can be made free to everybody, whilst retaining the benefits of peer-review, because of the open, online submission and refereeing processes."
Philica is not the only new initiative to push the envelope that bit further. Another approach similar in spirit is that adopted by Naboj, which utilises what it calls a dynamical peer review system.
Modelled on the review system of Amazon, Naboj allows users to evaluate both the articles themselves, and the reviews of those articles. The theory is that with a sufficient number of users and reviewers, a convergence process will occur in which a better quality review system emerges.
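Naboj's internal mechanics are not described in detail here, but the Amazon-style idea — weight each review by how helpful other users judged it — can be sketched as follows. Everything in this snippet (the `article_score` function, the smoothing constant) is a hypothetical illustration, not Naboj's implementation.

```python
# Hypothetical sketch of an Amazon-style "review the reviews" score:
# each review of an article is weighted by its helpfulness ratio, so
# reviews the community trusts dominate the article's overall score.
# Illustration only -- not Naboj's actual implementation.

def article_score(reviews):
    """reviews: list of (stars, helpful_votes, total_votes) triples.
    Laplace smoothing ((h + 1) / (t + 2)) gives unvoted reviews a
    neutral weight of 0.5 instead of zero. Returns None with no reviews."""
    num = den = 0.0
    for stars, helpful, total in reviews:
        weight = (helpful + 1) / (total + 2)
        num += weight * stars
        den += weight
    return num / den if den else None

# A 5-star review judged helpful (9 of 10 votes) dominates a 1-star
# review judged unhelpful (0 of 10 votes):
print(article_score([(5, 9, 10), (1, 0, 10)]))  # ≈ 4.64
```

With enough raters, low-quality reviews are voted down and lose influence, which is the convergence the Naboj model counts on.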
Currently Naboj users can only review preprints that have been posted on the popular physics preprint server arXiv.org, but there are plans to extend this to allow reviews of papers in other open archives, including presumably the burgeoning number of institutional repositories.
As with Philica, the aim is not to assess whether papers should be published, but to guide people on the quality of the growing number of scholarly papers being made available online, regardless of whether they have been peer reviewed in the traditional sense.
Do these new services herald the end of peer review? Not necessarily. Philica, for instance, makes a point of stressing that it only accepts reviews submitted by bona fide academics, excluding even graduate students from the process — on the grounds that it "is not normal practice for students to do this to the work of fully-qualified academics, and we do not consider it desirable to change that here."
So while the review process may have migrated from the pre-publication phase to the post-publication phase, it is still peer review.
We should perhaps also stress that Naboj does not claim to be an OA journal, but a web site to help people find useful research. As such it is more of an adjunct to the growing tide of OA literature than an alternative scholarly journal.
Moreover, the vast majority of OA journals — most of those for instance published by the two main OA publishers BioMed Central and Public Library of Science — still practice traditional peer review before publishing papers.
Nevertheless the implications of new initiatives like PLoS ONE and Philica are surely clear.
Complicating an already confused picture
The problem for OA advocates, of course, is that such developments are complicating an already confused picture, and leading to a great deal of misunderstanding about the relationship between OA and peer review.
The consequences of this were amply demonstrated at the beginning of October, when an Associated Press news story about PLoS ONE and Philica was published. As is usual with AP stories, the article was syndicated to multiple newspapers — and with every republication the headlines became increasingly alarmist.
The Desert Sun, for instance, reprinted the article with the headline and subtitle: "Online journals threaten scientific review system: Internet sites publishing studies with little or no scrutiny by peers"; The Gainesville Sun, published it with the headline, "Online publishing a threat to peer review"; and The Monterey Herald went with: "Academic journals bypass peers, go to Web."
"Is this routine editorial carelessness or spreading paranoia?" asked OA advocate Peter Suber exasperatedly on his blog.
The answer, perhaps, is a bit of both. Certainly, the spreading confusion is a boon to publishers bent on killing the various proposals intended to make OA mandatory — since it is causing many to conclude that OA represents a serious threat to the scientific process.
The paranoia reached a peak when the AP article attracted the attention of Harvard's college newspaper, The Harvard Crimson, which responded by publishing a muddle-headed editorial called "Keep Science in Print".
The launch of journals like PLoS ONE, it warned, threatens to "create a morass from which science might not emerge. Results will be duplicated, communication retarded, and progress slowed to a standstill … [as] … scientists will have no way of knowing which discoveries and experiments merit their time and interest. Instead they will spend inordinate amounts of time wading through the quicksand of junk science to get to truly interesting work."
Unfortunately, pointed out Suber, The Harvard Crimson editorial was seriously flawed, failing on at least two important counts. "First, it confuses open review with non-review. Second, it assumes that all online-only journals (open access and subscription-based) use open review — i.e. that traditional peer review requires print."
For those impatient to see OA prevail, the spreading confusion is very frustrating. What OA advocates therefore need to do, suggests Harnad, is insist on keeping discussions about reforming peer review separate from the debate about OA.
So while agreeing that peer review "can be made more effective and efficient", Harnad insists that any discussion about reforming it "should not be mixed up with — or allowed to get in the way of — OA, which is free access to the peer-reviewed literature we have, such as it is."
Conflation, however, seems inevitable — not just because the public is confused, but because, as Suber recently pointed out, "there are a lot of exciting synergies to explore between OA and different models of peer review." A good example of the way these synergies are being explored at the Los Alamos National Laboratory, he added, was mentioned by Herbert Van de Sompel during the Nature debate on peer review.
These synergies flow from the fact that when papers are made freely available on the Web so much more is possible. And for this reason, some believe that peer review reform and OA are joined at the hip.
This was a point made by Andrew Odlyzko, a mathematician who heads the University of Minnesota's Digital Technology Centre, in a recent response to Harnad on the American Scientist Open Access Forum. "I think you go too far by denying that Open Access has anything to do with peer review," he said. "For many (including myself), Open Access is (among other things) a facilitator of the evolution towards an improved peer review system."
In this light it is not accidental that OA publishers are beginning to lead the way in peer review reform.
In any case, as the editor-in-chief of Wired magazine, Chris Anderson, has pointed out, it seems inevitable that the Internet will change the way in which peer review is conducted, not least because, where in a print world scholarly papers have to jostle for limited space (pages), in an online environment any such constraints go away.
So where the decision in classical peer review is whether a paper is worthy enough to earn a share of a limited resource, in the online world no such decision is necessary, and the admission gates can be thrown open.
Filter and rank
In an online world, by contrast, the issue is not whether a piece of research gets to go through the gates, but how you filter and rank the expanding tide of research passing through. (But note that this implies that there are no access restrictions imposed on that research).
After all, the Internet will inevitably fill up with junk science regardless of the peer review system. The priority, therefore, will increasingly be to highlight research worthy of attention and to flag the junk, not to decide whether a paper is published.
The important point to bear in mind, says Odlyzko, is that peer review has never been the final edict on a work. "It guarantees neither correctness, nor importance, nor originality. All it does is provide some partially quantifiable assurance of those. The final verdict comes decades (and sometimes centuries) later, when scholars go back and reassess individual contributions."
But perhaps the most interesting question raised by current developments is not when and how science is evaluated, but who does it. As Anderson reminds us, on the Internet a new kind of peer review is emerging — one in which "peer is coming to mean everyman more than professional of equal rank." When people talk about peer-to-peer services, for instance, they are not usually referring to services designed to enable scientists to talk to one another.
Can we therefore assume that the work of a scientist will always be evaluated exclusively by fellow scientists? And would it matter if it were not?
Anderson points out, for instance, that Wikipedia contributors "don't have to have PhDs or any sort of professional affiliation: their contributions are considered on their merit, regardless of who they are or how they have become knowledgeable." Likewise, they can delete or edit the contributions made by experts.
Tellingly, a controversial study conducted by Nature in 2005 concluded that the accuracy level of science entries in Wikipedia was only slightly lower than that of that fount of all expert wisdom, the Encyclopaedia Britannica.
And those who believe that the intellectual elite should only ever be evaluated by their coevals will surely have been depressed by the implications of the ill-informed and self-congratulatory editorial penned by The Harvard Crimson. Even the intellectual elite, it seems, can talk nonsense at times, and it doesn't take a scientist to spot it.
Given the obvious inaccuracies parroted by the Harvard scribblers, peer review traditionalists will also doubtless feel uncomfortable about the parallel the editorial drew with the evaluation of scholarly literature. "Getting into Harvard is hard, very hard," it boasted. "Yearly the gatekeepers in Byerly Hall vet thousands of applicants on their merits, rejecting many times the number of students that they accept. But getting a scientific paper published in Science or Nature, today’s pre-eminent scientific journals, is oftentimes harder."
As one of those who left a comment on the web site of The Harvard Crimson put it (evidently sceptical about the merit systems at play here), "Since most at Harvard are well-connected and can get their papers published no matter what their merit, perhaps there is anxiety about a purely merit-based system?"
So has OA sounded a death knell for traditional peer review? Perhaps. Would it matter if it has? Probably not. In fact, OA journals seem to be in the process of developing superior ways of evaluating papers, not doing away with evaluation.
A more accurate way of describing developments, perhaps, is that peer review as understood and practised by traditional journals is giving way to a number of new models. These models are more appropriate to the online world, and perhaps offer a more effective and efficient way of evaluating science. Importantly, they require that scholarly papers are made freely available on the Web — which is precisely the aim of the OA Movement.
What the debate also points to, of course, is that in an Internet-enabled world traditional gatekeepers cannot assume that their role will remain the same.
Indeed, as New York University journalism professor, Jay Rosen pointed out when I interviewed him earlier this year, many traditional gates are in the process of being dismantled. As a consequence, he added, "All kinds of knowledge monopolies — and positions of authority based on them — are wearing away."
In other words, those professionals and organisations that have gained control of society's institutions and power structures are going to have to justify the rights and privileges they currently enjoy, or lose them.
Undoubtedly this includes science publishers who believe that they should retain the exclusive right to determine how scientific papers are published; peer review cliques who think they should be the sole arbiters of what science is and is not published; and, indeed, journalists who believe that, in a world full of blogs, they can maintain a monopoly on deciding what gets reported.
Who knows, perhaps it is only a matter of time before the gatekeepers at Byerly Hall, along with their Ivy League colleagues, discover that their pre-eminent right to adjudicate on academic merit has also disappeared.
PREMATURE REJECTION SLIP
Richard Poynder, in "Open Access: death knell for peer review?" has written yet another thoughtful, stimulating essay. But I think he (and many of the scholars and scientists he cites) are quite baldly wrong on this one!
What is peer review? Nothing more nor less than qualified experts vetting the work of their fellow specialists to make sure it meets certain established standards of reliability, quality and usability -- standards that correspond to the quality level of the journal whose name and track-record certifies the outcome.
Peer review is dynamic and answerable: Dynamic, because it is not just an "admit/eject" decision by a gate-keeper or an "A/B/C/D" mark assigned by a schoolmarm, but an interactive process of analysis, criticism and revision that may take several rounds of successive revisions and re-refereeing. And answerable, because the outcome must meet the requirements set out by the referees as determined by the editor, sometimes resulting in an accepted final draft that is very different from the originally submitted preprint -- and sometimes in no accepted draft at all.
Oh, and like all exercises in human judgment, even expert judgment, peer review is fallible, and sometimes makes errors of both omission and commission (but neither machinery nor anarchy can do better). It is also approximate rather than exact; and, as noted, quality-standards differ from journal to journal, but are generally known from the journal's public track record. (The only thing that does resemble an A/B/C/D marking system is the journal-quality hierarchy itself: Meeting the quality-standards of the best journals is rather like receiving an A+, and the bottom rung is not much better than a vanity press.)
But here are some other home truths about peer review (from an editor of 25 years' standing who alas knows all too well whereof he speaks): Qualified referees are a scarce, over-harvested resource. It is hard to get them to agree to review, and hard to get them to do it within a reasonable amount of time. And it is not easy to find the right referees; ill-chosen referees (inexpert or biassed) can suppress a good paper or admit a bad one; they can miss detectable errors, or introduce gratuitous distortions.
Those who think spontaneous, self-appointed vetting can replace the systematic selectivity and answerability of peer review should first take on board the ineluctable fact of referee scarcity, reluctance and tardiness, even when importuned by a reputable editor, with at least the prospect that their efforts, though unpaid, will be heeded. (Now ask yourself the likelihood that the right umpires will do their duty on their own, not even sure of being heeded for their pains.)
Friends of self-policed vetting should also sample for a while the raw sludge that first makes its way to the editor's desk, and ask themselves whether they would rather everyone had to contend directly with that commodity for themselves, instead of having it filtered for them by peer review, as now. (Think of it more as a food-taster for the emperor at risk of being poisoned -- rather than as an elitist "gate-keeper" keeping the hoi polloi out of the club -- for that is closer to what a busy researcher faces in trying to decide what work to risk some of his scarce reading time on, or (worse) his even scarcer and more precious research time in trying to build upon.)
And peer-review reformers or replacers should also reflect on whether they think that those who have nothing better to do with their time than to wade through this raw, unfiltered sludge on their own recognizance -- posting their take-it-or-leave-it "reviews" publicly, for authors and users to heed as they may or may not see fit -- are the ones they would like to trust to filter their daily sludge for them, instead of answerable editors' selected, answerable experts.
Or whether they would like to see the scholarly milestones, consisting of the official, certified, published, answerable versions, vanish in a sea of moving targets, consisting of successive versions of unknown quality, crisscrossed by a tangle of reviews, commentaries and opinions of equally unknown quality.
Not that all the extras cannot be had too, alongside the peer-reviewed milestones: In our online age, no gate-keeper is blocking the public posting of unrefereed preprints, self-appointed commentaries, revised drafts, and even updates and upgrades of the published milestones -- alongside the milestones themselves. What is at issue here is whether we can do without the filtered, certified milestones themselves (until we once again reinvent peer review).
The question has to be asked seriously; and if one hasn't the imagination to pose it from the standpoint of a researcher trying to make tractable use of the literature, let us pose it more luridly, from the standpoint of how to treat a family member who is seriously ill: navigate the sludge directly, to see for oneself what's on the market? ask one's intrepid physician to try to sort out the reliable cure from the surrounding ocean of quackery? And if you think this is not a fair question, do you really think science and scholarship are that much less important than curing sick kin?
Eppur, eppur, what tugs at me on odd days of the week is the undeniable fact that most research is not cited, nor worth citing, anyway, so why bother with peer review for all of that? And on the other end, the authors of the very, very best work are virtually peerless, and can exchange their masterworks amongst themselves, as in Newton's day. So is all this peer review just to keep the drones busy? I can't say.
But I can say that it has nothing to do with Open Access (except that it can be obtruded, along with so many other irrelevant things, to slow OA's progress). If self-archiving mandates were adopted universally, all of this would be mooted. The current peer-reviewed literature, such as it is, would at long-last be OA -- which is the sole goal of the OA movement, and the end of the road for OA advocacy, the rest being about scholars and scientists making use of this newfound bonanza, whilst other processes (digital preservation, journal reform, copyright reform, peer review reform) proceed apace.
As it is, however, second-guessing the future course of peer review is still one of the at-least 34 pieces of irrelevance distracting us from getting round to doing the optimal and inevitable at long, long last...
Harnad, S. (1990) Scholarly Skywriting and the Prepublication Continuum of Scientific Inquiry. Psychological Science 1: 342-343.
Harnad, S. (1998) The invisible hand of peer review. Nature [online] (c. 5 Nov. 1998) Exploit Interactive version
Peer Review Reform Hypothesis-Testing (started 1999)
A Note of Caution About "Reforming the System" (2001)
Self-Selected Vetting vs. Peer Review: Supplement or Substitute? (2002)
Peer Review: Streamlining It vs. Sidelining It
American Scientist Open Access Forum
The traditional peer review system, flawed as it is, is still the best thing we have in terms of filtering and evaluating scientific articles. This is simply a function of the vast number of journals, scientists and reviewers who have a vested interest in it. The OA journals will only become a part of the mainstream once there is at least an equally trusted and reliable system in place for assessing the relative merit of any given article. At Naboj we are striving to provide one implementation of such a system, but before it can be truly useful and trusted a critical mass of (quality) reviewers needs to be reached.
Much as Stevan Harnad calls into question the potential that truly dynamic, answerable peer review from qualified referees could ever be achieved from an open, voluntary review process, so I once questioned the likelihood of finding complete and accurate articles in an online encyclopedia written by anyone with internet access and a few minutes. And yet, a striking phenomenon has occurred in the latter situation, as revealed by the recent study published in Nature and cited by Richard Poynder: the output of the masses begins to approximate that of the elite few.
The potential efficacy of open review should not therefore be so quickly dismissed. Studies comparing the quality of a submission as determined by traditional review with that obtained from an open review, using a suitable numerical rating system, would be of value. However, in order to truly compare the efficacy of traditional versus open peer review, an independent and unquestionable metric of the quality of a work is needed as a yardstick against which the accuracy of both systems can be measured. In science, such a metric comes only with time; the true value and accuracy of a work is laid bare only by the test of time. How much of that which had yet to become accepted fact at the time of submission eventually became accepted fact, and which type of review had the greater foresight in the matter? This means that a rigorous comparison of open versus traditional peer review is a long term prospect.
But it is a necessary endeavor; mere inference from the Wikipedia phenomenon is insufficient. An encyclopedia contains that which is already accepted fact, while a journal submission contains that which has yet to be accepted, and so I remain unconvinced that open review would be a valid substitute for traditional peer review, and that the same pattern would emerge for open review as has been demonstrated in the case of Wikipedia. I, like many others, no doubt, await rigorous demonstration of the efficacy of open review.
I do not understand why the discussion presupposes that a filtering process for academic journals either has to be done by peers OR by an indistinct mass of anonymous readers.
But why don't we start to think of mixed models of evaluation, in which the enormous advantages of collective filtering are merged with the reliability of the peer- review process?
Is it so difficult to imagine a mixed-system (as it actually is already, though still not in a structured form) in which judgments, rankings and evaluations have a different weight according to who says what to whom?
Publishers can still live with their for-happy-few-only publications, whereas those whose first urge is not always the highest quality - but the need to get some idea about an unknown topic - can access a larger, though less reliable, amount of information.
Also, the peer-review process is crucial not for readers, but for writers and academics. We have to be published in peer-reviewed journals if we want our articles to have impact in the Science Citation Index, that is, not real impact, but the virtual impact that keeps the academic system alive.
We should discuss the role of peer review in the construction of scientific reputation and academic ranking. That is the real reason why no academic is ready to get rid of peer review yet.