In stark contrast to current technologies, the vast majority of scholarly research is still published using pipelines designed for the 18th-century printing press. In the traditional publishing pipeline, authors send their article to an editor, who decides either to send it out for peer review or to reject it outright. In the former case, a small number of expert reviewers are recruited to provide their opinion of the research, which informs the editor’s decision to publish the article, invite a resubmission, or reject it. If the article is rejected – either with or without peer review – the authors will typically submit their work to another journal, repeating the process until an outlet is found.

This evaluation process is highly inefficient and has numerous harmful effects on the research ecosystem (Nosek & Bar-Anan, 2012; Brembs, 2019; Stern & O’Shea, 2019). Editorial selection reinforces publication bias in line with the (largely unknown) priorities of publishers (REF), peer reviewers, and editors (e.g., toward novelty at the expense of reliability; Brembs, 2019). Multiple rounds of review and rejection can delay research dissemination (REF) and place a heavy time burden on authors, who spend more than an average working week every year reformatting articles (REF), as well as on reviewers and editors, who unknowingly re-evaluate articles that have already been reviewed elsewhere (Brembs et al., 2013; Nosek & Bar-Anan, 2012; Stern & O’Shea, 2019). Finally, and most importantly, the hierarchical journal system promotes undesirable behaviours in researchers and the institutions that evaluate them by prioritising “getting it published” over “getting it right” (Stern & O’Shea, 2019). Collectively, these problems create large opportunity costs for the research community (and the broader public) and have contributed to the burgeoning ‘replication crisis’ spanning multiple research fields (e.g., psychology, neuroscience, genetics, ecology; Open Science Collaboration, 2015).
The future of scholarly communication and evaluation
Modern technologies open up a myriad of possibilities for conducting, communicating and evaluating scholarly research. Recent years have seen an explosion of interest in moving beyond the traditional scholarly journal model and toward more progressive, digitally-based communication and evaluation models (Tennant, 2018). Many of these models centre on the notion of post-publication peer review (PPPR): the idea that the traditional ‘review-then-publish’ order should be reversed, such that research is communicated first and evaluated second (‘publish-then-review’) in an ongoing, dynamic, and collaborative fashion (Yarkoni, 2012; Nosek & Bar-Anan, 2012; Stern & O’Shea, 2019; Wang & Zhan, 2019). Even more radical proposals (e.g., Octopus, Libscie, Flashpub) argue for an entirely new communication substrate, which would segment the research article into smaller ‘modules’ that can be communicated ‘as-you-go’ throughout the research lifecycle (e.g., literature review, methods, data, analysis). Although the details of these proposals differ, they share a common vision of moving toward a crowd-sourced evaluation space in which researchers can directly contribute content (articles, reviews, ratings, etc.) and flexibly filter the literature according to their needs.

Clearly, designing and building a new publishing system is not enough – researchers must actually use it for it to hold value. Unfortunately, the vast majority of PPPR proposals remain unbuilt or, where they have been built, underutilised. For example, the PLOS PPPR system was recently decommissioned after 17 years of minimal use (REF), and various platforms that collect PPPR ratings have amassed no more than a few hundred ratings over the course of several years (e.g., ScienceOpen, PREreview). Here, I argue that the principal reason for this lack of uptake is not a lack of support for the model itself, but systemic factors that resist the adoption of new technologies and that – unless we specifically address them – will likely continue to impede the adoption of PPPR systems. In my next post, I will describe what I call the ‘prestige problem’, which I believe is the key obstacle in the way of systemic reform in academia.
Thanks for reading! Please subscribe by clicking the paper aeroplane icon and let me know any thoughts you might have in the comments below.