In 2002, Parks forecast that the new breed of free-to-publish, free-to-read (i.e., so-called ‘platinum’ or ‘diamond’ open access) electronic journals would gain little traction in the research community, whose members would instead continue to publish their research in high-impact, privately-owned legacy journals (Parks, 2002). According to Parks’ analysis, every actor in the game of academic publishing – authors, editors, referees, publishers, librarians, department chairs/deans, universities, and readers/users – had powerful incentives to maintain the status quo. Nearly two decades later, Parks’ prognostication has proven startlingly accurate: platinum open access journals remain marginalised, and the five largest academic publishers (the ‘publishing oligopoly’) have increased, rather than decreased, their control of the scholarly publishing market (Larivière et al., 2015).
Moreover, the incentives laid out by Parks in 2002 continue to motivate the various actors today. At the core of these incentives is the notion of journal ‘prestige’ (or reputation). Because citations are considered the ‘gold standard’ for measuring impact, and journals are the dominant method for communicating research, the journal impact factor (JIF) – the average number of citations received in a given year by the articles a journal published in the two preceding years – has become the de facto heuristic for measuring quality and prestige in academia (The PLoS Medicine Editors, 2006), despite numerous documented problems with the metric (Brembs, 2019). Accordingly, journals are incentivised to accept only the most impactful articles for publication; universities and funders are incentivised to reward applicants who publish in (and review or edit for) prestigious journals; and reviewers and editors are incentivised to donate their time to high-JIF journals, which offer elevated prestige and reputation in the community. In turn, authors are incentivised to publish in high-JIF journals (Nosek & Bar-Anan, 2012), to engage in questionable research practices that increase their chances of a prestigious publication, and to cite articles published in high-impact journals (irrespective of the actual quality of the work). The net result of these highly interdependent cultural practices is a ‘self-fulfilling prophecy’ (Kriegeskorte, 2012) that keeps legacy journals at the top of the hierarchical journal system and thwarts new entrants to the scholarly publishing marketplace.
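For concreteness, the standard JIF calculation (as used by Clarivate; the years here are purely illustrative, and the argument above does not hinge on the exact formula) can be written as:

\[
\mathrm{JIF}_{2021} \;=\; \frac{\text{citations received in 2021 by items the journal published in 2019 and 2020}}{\text{number of citable items the journal published in 2019 and 2020}}
\]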
Here, I term this problem the ‘prestige problem’ and argue that it is the principal impediment to academic reform, both within and beyond the scholarly publishing system. I suggest that any progressive model seeking to reform scholarly publishing must directly address this problem and demonstrate a clear path toward developing journal-level ‘prestige’ if it is to gain widespread traction in the research community. Moreover, I argue that previous proposals for progress in scholarly publishing have typically not addressed the prestige problem sufficiently. For example, the pay-to-publish open access ‘megajournal’ model (e.g., PLOS One, which was not foreseen by Parks) aims to mitigate publication bias and wasted resources by publishing all articles that meet a minimum level of rigour, irrespective of the novelty of their claims. Because it does not filter according to impact, however, the megajournal cannot develop a high impact factor and is thus destined to attract only a modicum of prestige. In turn, researchers are likely to reserve their best work for high-impact journals and treat PLOS One as a lower rung within the journal hierarchy. Post-publication peer review (PPPR) systems face a comparable dilemma: because all content is published first and evaluated second, JIF scores for these systems will inevitably be low and will likely dissuade participation by the mainstream research community. Proponents of high-volume systems often argue that article-level or researcher-level metrics (e.g., alternative metrics) will arise to replace the JIF. For example, some PPPR models include a mechanism to generate ‘researcher reputation scores’ based on the peer-rated value that individuals contribute to the system (e.g., see Yarkoni, 2012; a toy version of such a mechanism is sketched below). Certainly, alternative metrics may play an increasing role in academic evaluation in years to come, but for them to supplant the JIF and other journal-level metrics, there would need to be a widespread paradigm shift encompassing multiple actors in the academic publishing game. Considering the slow progress over the preceding decades, and the fact that all of the actors in the publishing game still have powerful incentives to maintain the status quo, it seems unlikely that such a shift will occur in the short or even medium term. Thus, under the current academic paradigm, it is difficult to see how previous proposals – however forward-thinking and desirable their qualities may be – will be able to capture a significant share of the high-quality research content normally reserved for traditional journals.
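To make the reputation-score idea concrete, here is a minimal sketch that aggregates net peer ratings per researcher. The function name, data format, and simple additive scoring are my own assumptions for illustration; this is not Yarkoni’s proposal nor any existing system’s algorithm:

```python
from collections import defaultdict

def reputation_scores(rated_contributions):
    """Toy reputation score: sum the net peer ratings of each researcher's
    contributions (e.g., reviews). Assumes each contribution has already
    been rated by peers, here expressed as net upvotes minus downvotes."""
    scores = defaultdict(int)
    for researcher, net_rating in rated_contributions:
        scores[researcher] += net_rating
    return dict(scores)

# Example: four peer-rated contributions from two researchers.
ratings = [("alice", 2), ("alice", 1), ("bob", 1), ("bob", -1)]
print(reputation_scores(ratings))  # {'alice': 3, 'bob': 0}
```

A real system would need safeguards this sketch omits (e.g., weighting ratings by the raters’ own standing to deter collusion), but the key point stands: such scores operate at the article and researcher level, and so do nothing by themselves to generate the journal-level prestige that currently drives the incentive structure.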
To emphasise this point, I note that many different PPPR systems have been developed over the past two decades, but none has yet attracted a high level of community participation. For example, PLOS One implemented an article-level commenting system that has seen very little uptake in the intervening 18 years (Yarkoni, 2012). More recently, ScienceOpen has garnered only a few hundred article reviews over the last X years, and Outbreak Science Rapid PREreview has garnered only 165 reviews, despite being launched immediately prior to the coronavirus pandemic. Presumably, when time-pressed researchers must choose between contributing content to progressive systems and contributing to systems that directly influence their careers, the majority choose the latter. Thus, I argue that rapid progress within scholarly publishing will require a progressive, community-controlled model that can nevertheless generate journal-level ‘prestige’ and thus succeed within the current impact-based journal paradigm. Once a critical mass of researchers has adopted this model, the community could then rapidly evolve it in line with their needs and the principles of science itself. In the next section, I outline my proposal for a highly scalable model that I believe could become the ‘kernel’ for rapid adoption and subsequent evolution into a scholarly publishing system fit for the 21st century.