
Would a lottery system improve #TrustInPeerReview?

An increasing number of agencies are using lottery systems to distribute research funding. Supporters of the approach have even suggested lotteries could be used by journals to select which papers to publish [1]. In this blog post, we discuss how this could work in practice, and look at the pros and cons of such a system.

How would a lottery system work?

For the purposes of this blog post, we propose a modified version of the system suggested by Brezis (2006) [2]. If a journal were to adopt this system, it would work broadly as follows: submitted papers would undergo a basic editorial screening as usual. Papers unanimously deemed deserving of publication by reviewers would be processed as normal; papers unanimously deemed undeserving of publication would be rejected. Papers graded differently by different reviewers would be entered into a lottery draw at the editor's discretion; winning papers would be published (following appropriate revision).
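The triage described above can be sketched in a few lines of code. This is a purely illustrative sketch, not part of Brezis's proposal: the grade labels, the `triage` function, and the assumption of a fixed number of lottery slots are all hypothetical.

```python
import random

def triage(papers, slots=1, seed=None):
    """Sort reviewed papers into publish / reject pools per the
    modified lottery scheme sketched above (illustrative only).

    Each paper is a dict with a "grades" list of reviewer verdicts,
    here simplified to the labels "accept" or "reject".
    """
    rng = random.Random(seed)
    accepted, rejected, lottery_pool = [], [], []
    for paper in papers:
        grades = set(paper["grades"])
        if grades == {"accept"}:       # unanimously deserving: publish as normal
            accepted.append(paper)
        elif grades == {"reject"}:     # unanimously undeserving: reject
            rejected.append(paper)
        else:                          # split decision: enters the draw
            lottery_pool.append(paper)
    # The editor draws winners from the split-decision pool; losing
    # papers are rejected.
    winners = rng.sample(lottery_pool, min(slots, len(lottery_pool)))
    losers = [p for p in lottery_pool if p not in winners]
    return accepted + winners, rejected + losers
```

In this sketch the only editorial judgement required is the sort into unanimous versus split decisions; chance settles the contested middle.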

Pros

It would level the playing field. Reviewers have been shown to be affected by various kinds of cognitive and institutional bias [3, 4]. For example, reviewers tend to be less critical of papers authored by investigators with prestige, i.e., those with “the right pedigree, coming from the right institutions, who worked for the right people in the right field at the right time” [5]. And the expression of social bias has been shown to increase with increased uncertainty in evaluative contexts [discussed in 6]. A lottery system would give those at a disadvantage, for example, new investigators and those from less prestigious institutions, an opportunity to compete with the “old boys” [3]. This would lead to an increase in the “epistemic diversity, fairness, and impartiality within academia” [7].

It would reduce peer pressure. Peer review is social [3]. Editors often enjoy formal or informal relationships with authors—for example, those who regularly contribute to their journal—and may experience a form of peer pressure to treat such authors favourably [8]. A lottery system would make it easier for editors to manage such situations.

It would make the peer review process more transparent. Authors are generally unaware of the specifics surrounding a decision to publish a paper. Often, the decisions are arbitrary, as highlighted by an experiment run by the organisers of the Conference on Neural Information Processing Systems [9]. In 2014, the conference selection committee was split in two, and 10% of submissions (166 papers) were reviewed by both halves. About 57% of the papers accepted by one committee were rejected by the other, and vice versa, a result only slightly better than random chance would produce. This is only one of many studies showing low levels of agreement among reviewers [discussed in 10]. A lottery system would be an acknowledgement of the inherent limitations and arbitrariness of peer review [11].
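A quick back-of-the-envelope calculation shows why 57% is "only slightly better than chance". Assuming each committee accepted roughly 22.5% of papers (the approximate acceptance rate reported for the experiment; treat the figures here as rough), two committees accepting papers independently at random would disagree even more often:

```python
# Rough check of the "slightly better than chance" claim.
# Figures are approximate values reported for the 2014 NIPS experiment.
p_accept = 0.225              # approximate acceptance rate of each committee
observed_disagreement = 0.57  # share of one committee's accepts the other rejected

# If acceptance were purely random and independent, the other committee
# would reject any given accepted paper with probability (1 - p_accept).
chance_disagreement = 1 - p_accept  # 0.775

print(f"observed: {observed_disagreement:.0%} vs chance: {chance_disagreement:.1%}")
```

So reviewer judgement moved the disagreement rate from roughly 77.5% down to roughly 57%: real, but modest.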

It may reduce conservatism. Evidence of conservatism in peer review has been reported [3]. In one study, reviewers displayed a significant bias against studies supporting unconventional medical treatments, even though the supporting data were as valid as those for more conventional treatments [12]. A lottery system would neutralise this bias and may even encourage the pursuit of more innovative and unorthodox projects.

It would be better for researcher morale. As pointed out by Fang and Casadevall (2016) [11], a lottery system would “lessen the blow of … rejection, since it is easier to rationalize bad luck than to feel that one failed to make the cut due to a lack of merit”. Similarly, it may promote humility among those who benefit the most from the current system [1].

It would be logistically easier to implement. Compared to other initiatives to improve peer review quality and reduce bias—for example, blinding and reviewer training—a lottery system would be cheaper and easier to organise. And it may prove more effective. Blinding is considered by some to be pointless, as reviewers can successfully identify authors 25%–40% of the time [discussed in 3]. And a randomised controlled trial of the effects of reviewer training showed only a slight improvement that was not maintained over time [13].

It may reduce an editor’s workload. Deciding among many, equally meritorious papers is a time-consuming endeavour. A lottery system, whereby editors would have to sort papers into two categories, non-meritorious and meritorious, may save time and resources, as less information and debate would be required.

Cons

Poor optics. A lottery system could affect public trust in scientists, as it could be perceived by some as a sign of chaos within the scientific community or even laziness. As stated by a staff member from an Australian funding agency, “It would make it look like we don’t know what we’re doing” [14]. The peer review process is highly valued by the majority of researchers worldwide [15] and for the many who take their peer review duties very seriously, it could be seen as provocative. Participating journals may lose valued reviewers for this reason, as reviewers, especially those opposed to the concept, may refuse to review papers whose fate would ultimately be decided randomly. In addition, readers, again especially opponents of the concept, may become biased against “winning” papers.

Won’t completely eliminate bias. Bias toward authors may still occur, for example, during the initial editorial screening or during peer review, which could prevent potentially meritorious papers from even reaching the lottery draw. Other problems—for example, the tendency of journals to favour positive over negative outcomes, and peer review fraud [16]—would likely remain an issue.

May detract from other (potentially better) alternatives to traditional peer review, for example, open peer review. Bedessem (2020) [17], an opponent of the lottery system, questioned whether it is “simply a last resort solution”, which would suppress the peer review process rather than reform it. There has been a proliferation of peer review reforms in recent years [18], the potential benefits and downsides of which are yet to be fully realised. A lottery system could shift investment and interest away from potentially better alternatives.

Pioneering journals may suffer an unmanageable spike in submissions. A lottery system could potentially lead to an increase in “have-a-go” papers. Along with this, if authors only require one positive review to be considered for the lottery draw, rather than a consensus among reviewers and editors, it could result in an increase in substandard submissions. Or some authors could play the odds by repeatedly submitting a paper until it “won”.

Conclusions

Ironically, since its inception, peer review has been taken on faith as the best system for quality assurance in science. Once people began to look inside the “black box” [19] in the 1990s and test the process empirically, it was found that it could not be validated objectively. In the case of Daubert v. Merrell Dow, the US Supreme Court determined that peer review could not be used as unequivocal evidence of the validity of a scientific expert’s testimony [discussed in 20]. As observed by Drummond Rennie, the founder of the annual International Congress on Peer Review and Biomedical Publication, “If peer review was a drug it would never be allowed onto the market” [discussed in 21]. Despite the lack of empirical evidence to support its value [22], peer review persists. There could be many reasons for this, for example, “vested interests” [21]; or it could be because “the wheels of change turn slowly”, or because there are, as yet, no proven alternatives. There are no guarantees that a lottery system would fare any better than any other reform measure; however, it should at the very least be tested. As stated by Susan Guthrie, “there needs to be more experimentation and openness regarding peer review – reflecting the inherent limitations and indeed the unacknowledged element of chance already present within the current system” [23].

References

  1. Adam D. Science funders gamble on grant lotteries. Nature. 2019 Nov 20;575(7785):574-5. Available from: https://www.nature.com/articles/d41586-019-03572-7
  2. Brezis E. Focal Randomization: An optimal mechanism for the evaluation of R&D. In DEGIT Conference Papers 2006 Jun (No. c011_035). DEGIT, Dynamics, Economic Growth, and International Trade. Available from: https://ideas.repec.org/p/deg/conpap/c011_035.html
  3. Lee CJ, Sugimoto CR, Zhang G, Cronin B. Bias in peer review. Journal of the American Society for Information Science and Technology. 2013 Jan;64(1):2-17. Available from: http://faculty.washington.edu/c3/Lee_et_al_2013.pdf
  4. Tomkins A, Zhang M, Heavlin WD. Reviewer bias in single-versus double-blind peer review. Proceedings of the National Academy of Sciences. 2017 Nov 28;114(48):12708-13. Available from: https://www.pnas.org/content/114/48/12708
  5. Cranford S. The pursued, the pursuing, and unconscious prestige bias. Matter. 2020 May 6;2(5):1065-7. Available from: https://www.cell.com/matter/pdf/S2590-2385(20)30179-X.pdf
  6. Erosheva EA, Grant S, Chen MC, Lindner MD, Nakamura RK, Lee CJ. NIH peer review: Criterion scores completely account for racial disparities in overall impact scores. Science Advances. 2020 Jun 1;6(23):eaaz4868. Available from: https://advances.sciencemag.org/content/6/23/eaaz4868?intcmp=trendmd-adv
  7. Roumbanis L. Peer review or lottery? A critical analysis of two different forms of decision-making mechanisms for allocation of research grants. Science, Technology & Human Values. 2019 Nov;44(6):994-1019. Available from: https://journals.sagepub.com/doi/abs/10.1177/0162243918822744
  8. Lipworth WL, Kerridge IH, Carter SM, Little M. Journal peer review in context: a qualitative study of the social and subjective dimensions of manuscript review in biomedical publishing. Social Science & Medicine. 2011 Apr 1;72(7):1056-63. Available from: https://ro.uow.edu.au/cgi/viewcontent.cgi?article=4714&context=sspapers
  9. Price E. The NIPS experiment. 2014. Available from: http://blog.mrtz.org/2014/12/15/the-nips-experiment.html
  10. Smaldino PE, Turner MA, Contreras Kallens PA. Open science and modified funding lotteries can impede the natural selection of bad science. Royal Society Open Science. 2019 Jul 10;6(7):190194. Available from: https://royalsocietypublishing.org/doi/10.1098/rsos.190194
  11. Fang FC, Casadevall A. Research funding: The case for a modified lottery. mBio. 2016;7(2):e00422-16. Available from: https://mbio.asm.org/content/mbio/7/2/e00422-16.full.pdf
  12. Resch KI, Ernst E, Garrow J. A randomized controlled study of reviewer bias against an unconventional therapy. Journal of the Royal Society of Medicine. 2000 Apr;93(4):164-7. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1297969/pdf/10844878.pdf
  13. Schroter S, Black N, Evans S, Carpenter J, Godlee F, Smith R. Effects of training on quality of peer review: randomised controlled trial. British Medical Journal. 2004 Mar 18;328(7441):673. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC381220/
  14. Barnett AG. Funding by lottery: political problems and research opportunities. mBio. 2016 Sep 7;7(4). Available from: https://mbio.asm.org/content/7/4/e01369-16
  15. Mulligan A, Hall L, Raphael E. Peer review in a changing world: An international study measuring the attitudes of researchers. Journal of the American Society for Information Science and Technology. 2013 Jan;64(1):132-61. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/asi.22798
  16. Haug CJ. Peer-review fraud—hacking the scientific publication process. New England Journal of Medicine. 2015 Dec 17;373(25):2393-5. Available from: https://www.nejm.org/doi/full/10.1056/NEJMp1512330
  17. Bedessem B. Should we fund research randomly? An epistemological criticism of the lottery model as an alternative to peer review for the funding of science. Research Evaluation. 2020 Apr 1;29(2):150-7. Available from: https://academic.oup.com/rev/article-abstract/29/2/150/5678703
  18. Chan L, Loizides F. New toolkits on the block: Peer review alternatives in scholarly communication. Expanding Perspectives on Open Science: Communities, Cultures and Diversity in Concepts and Practices. 2017 Jun 20:62. Available from: https://goedoc.uni-goettingen.de/bitstream/handle/1/14511/Schmidt_Gorogh_ELPUB_2017_STAL9781614997696-0062.pdf?sequence=1
  19. Smith R. Peer review: reform or revolution?: Time to open up the black box of peer review. BMJ. 1997;315(7111):759-60. Available from: https://www.bmj.com/content/315/7111/759.full
  20. Horrobin DF. Something rotten at the core of science? Trends in Pharmacological Sciences. 2001 Feb 1;22(2):51-2. Available from: https://www.pulsedtechresearch.com/wp-content/uploads/2013/04/Something-Rotten-Horrobin.pdf
  21. Smith R. The peer review drugs don’t work. Times Higher Education. 2015 May 28;28. Available from: https://www.timeshighereducation.com/content/the-peer-review-drugs-dont-work
  22. Richards D. Little evidence to support the use of editorial peer review to ensure quality of published research. Evidence-based Dentistry. 2007 Sep;8(3):88-9. Available from: https://www.nature.com/articles/6400516
  23. Guthrie S. Innovation in the research funding process: Peer review alternatives and adaptations. 2019. Available from: https://www.academyhealth.org/sites/default/files/innovatingresearchfundingnovember2019.pdf
Lisa Clancy