This article is chapter 2.3 in section 2 of a series of articles summarising the book Reassembling Scholarly Communications: Histories, Infrastructures, and Global Politics of Open Access.
In the third chapter of the knowledge cultures section, David Pontille and Didier Torny examine the emergence of new arrangements between dissemination and validation in peer review, and how these are being shaped by citation, commenting, sharing, and examining. These practices have existed for a long time, but are increasingly treated as integral to open peer review.
Peer review as reading
Pontille and Torny advise that who reads, when, and to what purposes are key to understanding the shape of peer review. Who exactly assesses manuscripts submitted to journals? What are the actual conditions under which peer review is performed? How do different instances of judgement precisely coordinate with one another?
Throughout the history of peer review, three judging instances gradually emerged: editors-in-chief, editorial committees, and outside reviewers. These were the first readers of manuscripts submitted to academic journals.
Today, manuscript evaluation has itself become an object of systematic scientific investigation. Authors, manuscripts, reviewers, journals, and readers have been scrupulously examined for their qualities and competencies, as well as for their “biases,” faults, or even unacceptable behavior. Hundreds of studies on manuscript evaluation are now available, while journals are put to the test with duplicate or fake papers. The diverse arrangements of manuscript evaluation are thus themselves systematically subjected to evaluation procedures.
Peer review in the twenty-first century can also be distinguished by a growing trend: the empowerment of “ordinary” readers as new key judging instances. When editors and reviewers produce judgements, they do so by reading within a very specific framework: interaction is restricted, conducted essentially through written correspondence, and aimed at authorizing the dissemination of manuscripts-become-articles.
Other forms of reading now accompany publications and participate in their evaluation, independently of their initial validation. This is particularly the case through citation, commenting, sharing, and examining.
From among all the texts they have read, readers choose those which they believe to be of essential value, so as to reference them in their own manuscripts. However, the act of referencing relates to a given author, whereas a citation is a new property of the source text.
With the popularization of bibliometric tools, citation counting has become a central element of journal and article evaluation. This has radically modified referencing practices and literally created a new “citation culture.” From the 1970s on, academic readers have accordingly become citers, adding their voices to the already-published article and to the journal which validated it.
This citing activity pertains to journals (e.g., impact factor), to articles (e.g., article-level metrics), to authors (e.g., h-index), or even to entire disciplines (e.g., half-life index) and institutions (e.g., scores feeding international rankings). Using citation aggregation tools, it is possible to treat all citers as equivalent, or else to introduce weightings relating to time span, to the reputation of the citing outlet, to its centrality, and so on.
Citation thus points toward two complementary horizons of reading: science as a system for accumulating knowledge via a referencing operation, and research as a necessary discussion of this same knowledge through criticism and commentary.
Commenting on texts
Readers can be given a more formal place as commenters. Traditionally, before an article was published, comments were mainly directed toward the editor-in-chief or the editorial committee. Through open review, commenters enter into a dialogue with the authors and thus open up a space for direct confrontation.
The publication of commentaries alongside the articles themselves is not a new phenomenon: in “special issues” or “reports,” a series of articles is brought together around a given theme, after a short presentation, so that the contributions feed off one another. Similarly, the long-standing format of a commentary followed by the author’s response remains common.
Finally, proposals have been made to revamp the traditional role of post-publication commenters. For a long time, these commenters acted in two elementary forms: by referring to the original article or by sending a letter to the editor. From the 1990s, the emergence of electronic publications was seen as something that would revolutionize “post-publication peer review” (PPPR), by allowing comments and criticisms to be added to the document itself. However, the experiments of open commentary in PPPR have been disappointing for traditional (e.g., Nature) and new (e.g., PLOS ONE) electronic journals, as few readers seem to be willing to participate in such a technology if their comments serve no identifiable purpose.
The readers mentioned so far have been peers of the authors of the original manuscript in a very restrictive sense: either their reading leads to a text of an equivalent nature, or it leads to a text published in the same outlet as the article.
Until recently, readers other than citers and commenters remained very much in the shadows. Yet library users, students in classes, and colleagues in seminars, to name just a few examples, also ascribe value to articles, for instance through annotation. Two major changes, however, have made some of these forms of reading newly visible and valuable.
The existence of articles in electronic form has made their readers more visible. People who access an HTML page or who download a PDF file are now taken into account, whereas in the past it was only the distribution of journals and texts, mostly through libraries, which allowed assessment of potential readership. By inventorying and aggregating the audience in this way, it is possible to assign readers the capacity to evaluate articles. Labels such as “highly accessed” or “most downloaded,” frequently used on journal websites, make it possible to distinguish certain articles. Online academic social networks such as ResearchGate and Academia not only count “academic users” but also name them and offer means of contact. Researchers now take part in the dissemination of their own articles and are thus better able to grasp the extent and diversity of their audiences.
At the same time, other devices make visible the sharing of articles. Online bibliographic tools (e.g., CiteULike, Mendeley, Zotero) raise the profile of the readers and taggers who introduce references and attached documents into their bibliographic databases. Without being citers themselves, these readers select publications by sharing lists of references, the pertinence of which is notified by the use of “tags.” These reader-taggers are also embedded in the use of hyperlinks within “generalist” social networks (e.g., Facebook, Twitter), by alerting others to interesting articles, or by briefly commenting on their content.
The measurements of these different channels are now aggregated and considered a representation of a work’s multiple uses and audiences. For its advocates, the resulting “total impact” is the true measure of an article’s importance, demonstrated through its dissemination. Here the readers, tracked in number and diversity, revalidate articles in the place of the judging instances historically qualified to do so.
This movement is even more significant in that these tools are applied not only to published articles but also to documents which have not been validated through journal peer review. Indeed, after the establishment of the arXiv high-energy physics repository at the beginning of the 1990s, many scientific milieus and institutions acquired repository servers to host working papers.
Ideally, these manuscripts are preliminary versions submitted for criticism and comments by specialist groups that are notified of the submissions. The resulting exchanges are managed by the system, which archives the different versions produced. So readers do not simply exercise their judgment on validated articles, but also produce a collective evaluation of manuscripts. This flow of electronic manuscripts feeds the enthusiasm of the most visionary who, since the 1990s, have been announcing the approaching end of validation by journals’ traditional judging instances.
Nevertheless, new technologies have been built on these archives, such as “overlay journals,” in which available manuscripts are later validated by reading peers. New journals have reembodied the old scholarly communication values of rapidity and open scientific discussion: some, such as PeerJ, offer a publishing space to working papers; others, such as F1000Research, publish manuscripts first, then invite commenters to undertake peer review and push authors to publish revised versions of their texts.
With a view to dissemination, advocates of readers as a judging instance tend to downplay the importance of prior validation. While the validation process sorts manuscripts in a binary fashion (accepted or rejected), such advocates contend that varied forms of dissemination instead encourage permanent discussion and argument along a text’s entire trajectory. In this perspective, articles remain “alive” after publication and are therefore always subject not only to various reader appropriations, but also to public evaluations, which can reverse their initial validation.
The PubPeer website, which offers anonymous readers the opportunity to discuss the validity of experiments and to ask authors to answer their questions, is a good example of this kind of PPPR. The discussions occurring on this platform regularly result in the debunking of faked and manipulated images in many high-profile articles, which leads to corrections and even retractions of the publications by the journals themselves.
In concluding, Pontille and Torny state that the extension of judging instances to readers can appear as a reallocation of expertise, empowering a growing number of people in the name of distributed knowledge.
In an ongoing context of revelations of massive scientific fraud, which often implicates editorial processes and journals themselves, the perceived failures of prepublication judging instances have transformed the mass of readers into a vital resource for unearthing error and fraud. As in other domains where public expertise used to be held exclusively by a few professionals, crowdsourcing has become a collective gatekeeper for science publishing.
Thus, Pontille and Torny suggest that peerdom is being reshaped, as lay readers now have full access to a large part of the scientific literature and have become valued audiences as quantified end users of published articles.
Next part (chapter 2.4): Intersections between artistic making and scientific knowing.
Article source: This article is an edited summary of Chapter 7 [1] of the book Reassembling Scholarly Communications: Histories, Infrastructures, and Global Politics of Open Access [2], which has been published by MIT Press under a CC BY 4.0 Creative Commons license.
Article license: This article is published under a CC BY 4.0 Creative Commons license.
1. Pontille, D., & Torny, D. (2020). Peer Review: Readers in the Making of Scholarly Knowledge. In Eve, M. P., & Gray, J. (Eds.), Reassembling Scholarly Communications: Histories, Infrastructures, and Global Politics of Open Access. MIT Press.
2. Eve, M. P., & Gray, J. (Eds.) (2020). Reassembling Scholarly Communications: Histories, Infrastructures, and Global Politics of Open Access. MIT Press.
Also published on Medium.