AI in the peer review process

The peer-review system is under increasing pressure due to the exponential growth in submitted manuscripts. Between 2016 and 2022, the number of published articles grew by an estimated 47% (Hanson et al., 2023), while the pool of qualified reviewers has not grown at the same pace (Fox et al., 2017; Peterson et al., 2022). The difficulty of finding reviewers, together with concerns about effectiveness, fairness, and efficiency, has prompted the search for solutions and turned attention to the development and use of artificial intelligence (AI).

The integration of AI into many aspects of our lives is advancing rapidly, and its role in the peer review process is no exception. AI offers the potential to streamline certain parts of the process, enhance quality control, and reduce inefficiencies. However, it also comes with limitations and raises ethical concerns.

AI tools have already been implemented to automate repetitive tasks, such as checking for plagiarism, identifying potential conflicts of interest, and conducting initial quality assessments. These tools can increase the efficiency and speed of the review process by taking care of time-consuming checks. For instance, at Physiologia Plantarum, we have been using the AI-driven tool iThenticate for years to detect potential plagiarism and help ensure the originality of submitted manuscripts.
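The core idea behind such similarity screening can be illustrated in a few lines of Python. The toy sketch below compares word n-gram sets between two text snippets; real tools such as iThenticate match against vast indexed corpora with far more robust methods, so this is only a conceptual illustration with invented example texts.

```python
# Toy illustration of text-similarity screening: compare word 5-gram sets
# between a submission and an indexed source. This shows the concept only;
# real plagiarism checkers use far larger corpora and more robust matching.
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str) -> float:
    """Jaccard similarity of word n-gram sets; 0 = disjoint, 1 = identical."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

submission = "Photosynthesis converts light energy into chemical energy in plants."
indexed_source = "Photosynthesis converts light energy into chemical energy stored in sugars."
print(f"n-gram overlap: {overlap(submission, indexed_source):.2f}")
```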

In addition, AI tools can help journals with the initial screening of manuscripts, making sure that submissions meet basic criteria such as formatting and adherence to journal guidelines. Using natural language processing (NLP) algorithms or custom-trained AI models, these tools can quickly assess a manuscript's relevance to the journal's scope and research areas. This reduces the workload of human reviewers and editorial staff, allowing them to focus on the scientific substance of each submission. An estimate from 2018 suggests that over 15 million hours are spent every year reviewing manuscripts that other journals have already rejected and that were resubmitted elsewhere (https://www.aje.com/arc/peer-review-process-15-million-hours-lost-time/). Although many post-review rejections stem from issues with scientific quality or methodology rather than scope, AI can still reduce inefficiencies by matching manuscripts with appropriate journals from the outset, freeing reviewers to concentrate on manuscripts that fit the journal's expectations.

Moreover, AI can assist in transferring peer reviews between journals, reducing redundant effort. By analyzing reviewer reports and manuscript details, AI systems can recommend suitable alternative journals, within the same publishing group or beyond, when a manuscript is rejected but still deemed scientifically sound. Some publishers and platforms are already exploring or implementing such review-transfer systems. At Physiologia Plantarum, we use the Transfer Desk Assistant, a system introduced by Wiley that uses machine learning to analyze rejected manuscripts and suggest alternative journals to their authors. If the manuscript is transferred, the original reviews are visible to the editor of the receiving journal.
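To make the scope-screening step above concrete, here is a minimal sketch using TF-IDF cosine similarity. The scope text, abstract, and the idea of a tunable flagging threshold are illustrative assumptions; production systems would typically rely on trained language models rather than raw term overlap.

```python
# Minimal sketch of automated scope screening: score how well a manuscript
# abstract matches the journal's stated scope using TF-IDF cosine similarity.
# Scope text and abstract are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

JOURNAL_SCOPE = (
    "Plant physiology, photosynthesis, plant stress responses, growth and "
    "development, plant biochemistry and metabolism."
)

def scope_score(abstract: str) -> float:
    """Return a 0-1 similarity between an abstract and the journal scope."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(
        [JOURNAL_SCOPE, abstract]
    )
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

abstract = (
    "We measured photosynthesis and chlorophyll fluorescence in "
    "drought-stressed barley seedlings."
)
print(f"scope similarity: {scope_score(abstract):.3f}")
# Manuscripts scoring below a threshold tuned on past editorial decisions
# could be flagged for a human scope check.
```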

Furthermore, AI is now being explored to support the peer review process more directly. For example, a webinar hosted by Peer Review Week (https://peerreviewweek.wordpress.com/home/events-and-activities-2024/) will introduce Eliza, described as the first AI-driven peer review tool designed to assist reviewers. Eliza supports reviewers in several ways: it provides automated feedback on manuscripts, runs consistency checks against editorial guidelines, analyzes trends to identify potential biases, and suggests relevant literature and methodologies, enabling reviewers to make informed assessments more quickly. By streamlining these tasks, Eliza aims to reduce the burden on reviewers and contribute to a more rigorous and effective peer review process. The tool also compiles concise, data-driven summaries for editors, facilitating faster decision-making once the peer review reports are submitted.

There are increasing reports on the use of AI in actual peer reviewing. Liang et al. (2023) compared comments from human peer reviewers with those generated by GPT-4 across close to 5,000 submitted manuscripts and found that more than 30% of the points raised overlapped; the overlap between two human reviewers was similar. More than half of the roughly 300 researchers surveyed found the AI-generated reviews helpful or very helpful, and 82% found them more helpful than feedback from at least some human reviewers.

However, several limitations have also been reported (Liang et al., 2023; Biswas et al., 2023). For example, ChatGPT tends to focus on certain aspects, such as identifying methodological flaws and assessing a study's contribution to its field, while showing weaknesses in contextual understanding and in grasping broader implications or subtle nuances, potentially missing considerations that a human reviewer would catch.

To integrate ChatGPT or other AI reviewers into the peer review process, it is essential to define clear objectives for their role and to ensure that editors, reviewers, and authors understand how the tool is to be used. Transparency and disclosure are critical: authors and reviewers should be fully informed about when and how AI is used. The model must be carefully trained, calibrated, and evaluated, and ethical concerns must be addressed, particularly fairness and the risk that AI reinforces existing biases.

That said, one of the major challenges in peer review is human bias. Research shows that when reviewers were told the authors were Nobel laureates, only 23% recommended rejection, compared with 48% when the authors were anonymous and 65% when the authors were relatively unknown (Huber et al., 2022).

Similarly, one of the largest trials comparing double-blind with single-blind review found that single-blind review resulted in a substantial bias in favour of higher-income and/or English-speaking authors (Fox et al., 2023). AI could help address such biases by identifying patterns related to gender, institutional affiliation, or country of origin. Detecting and correcting these biases could lead to a fairer and more equitable review process.
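As a toy illustration of what such a bias audit might look like in practice, the sketch below tests whether review recommendations are statistically independent of an author attribute. All counts are invented for the example; a real audit would control for confounders such as field and manuscript quality.

```python
# Toy bias audit: test whether review recommendations are independent of an
# author attribute (for example, country income group). All counts below are
# invented purely for illustration.
from scipy.stats import chi2_contingency

#                accept  reject
counts = [[120, 80],    # higher-income / English-speaking authors
          [70, 130]]    # other authors

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")
if p_value < 0.01:
    print("Recommendation rates differ by author group; investigate further.")
```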

Moreover, AI is already being used to streamline reviewer selection. For example, the Web of Science Reviewer Locator (Clarivate), used by Physiologia Plantarum, incorporates AI to recommend potential reviewers based on their expertise and previous publications.
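A minimal sketch of how such expertise-based matching can work, assuming the open-source sentence-transformers package; the manuscript and reviewer profiles are invented for illustration.

```python
# Minimal sketch of expertise-based reviewer matching: rank candidate
# reviewers by semantic similarity between a manuscript abstract and a
# summary of their recent work. Names and profiles are hypothetical.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

manuscript = "Stomatal responses to combined heat and drought stress in wheat."
reviewers = {
    "Reviewer A": "Leaf gas exchange and stomatal conductance under drought.",
    "Reviewer B": "Genome assembly pipelines for non-model insects.",
}

m_emb = model.encode(manuscript, convert_to_tensor=True)
for name, expertise in reviewers.items():
    score = util.cos_sim(m_emb, model.encode(expertise, convert_to_tensor=True))
    print(f"{name}: similarity {score.item():.2f}")
```

Operational systems of this kind typically also weigh conflicts of interest, review history, and availability, not just topical similarity.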
While AI holds great promise for improving the peer review process, it is important to acknowledge its limitations. AI systems depend heavily on the quality of the data they are trained on, and without careful oversight they may reinforce existing biases or make flawed recommendations. Moreover, the human element (expert judgment, intuition, and a nuanced understanding of scientific work) remains irreplaceable. AI should be viewed as a tool that supports, rather than replaces, human reviewers, and it will likely also prove valuable in detecting fabrication, falsification, and image manipulation.

References

Biswas S, Dobaria D, Cohen HL (2023) ChatGPT and the future of journal reviews: a feasibility study. Yale J Biol Med 96(3): 415-420. https://doi.org/10.59249/SKDH9286

Fox CW, Albert AYK, Vines TH (2017) Recruitment of reviewers is becoming harder at some journals: a test of the influence of reviewer fatigue at six journals in ecology and evolution. Research Integrity and Peer Review 2: 3. https://doi.org/10.1186/s41073-017-0027-x

Fox CW, Meyer J, Aimé E (2023) Double-blind peer review affects reviewer ratings and editor decisions at an ecology journal. Functional Ecology 37: 1144-1157. https://doi.org/10.1111/1365-2435.14259

Hanson MA, Gómez Barreiro P, Crosetto P (2023) The strain on scientific publishing. arXiv, preprint: not peer-reviewed. https://arxiv.org/ftp/arxiv/papers/2309/2309.15884.pdf

Huber J, Inoua S, Kerschbamer R, König-Kersting C, Palan S, Smith VL (2022) Nobel and novice: author prominence affects peer review. Proc Natl Acad Sci USA 119: e2205779119. https://doi.org/10.1073/pnas.2205779119

Liang W, Zhang Y, Cao H, et al. (2023) Can large language models provide useful feedback on research papers? A large-scale empirical analysis. arXiv preprint, arXiv:2310.01783. https://doi.org/10.48550/arXiv.2310.01783

Peterson CJ, Orticio C, Nugent K (2022) The challenge of recruiting peer reviewers from one medical journal's perspective. Proceedings (Baylor University Medical Center) 35(3): 394-396. https://doi.org/10.1080/08998280.2022.2035189

 
