
Rethinking Peer Review for Open Access Books in the AI Era
Peer review has come a long way since its beginnings in the 17th century and its further development in the 19th century. Its purpose was, and still is, to externally validate scientific communication, a process usually undertaken by a panel of experts with authority on the subject matter in question.
Now, in the 21st century, the reviewing process for scholarly content is receiving support from advancements in AI. Some see this as a huge opportunity, given that the traditional peer review model today faces issues including reviewer burnout, qualified reviewer shortages, and rising submission volumes.
So, how might we go about rethinking peer review in the AI era to maximize its potential and work in harmony with human reviewers? And how might AI help advance the reviewing and production of academic books?
What is peer review?
Peer review typically involves independent experts evaluating the integrity of an academic text and its suitability for publication. During the review period, a work tends to undergo a few rounds of external assessment to ensure that it meets the highest possible standard.
Peer review is not just about maintaining quality. It’s also a universally recognized way of ensuring that publications adhere to ethical standards and that misinformation doesn’t enter the research ecosystem.
Currently, peer review is more established for academic journals than it is for books. Before we look at rethinking peer review for books in the AI era, what are the specifics of the model for books?
Peer review for open access books
Peer review for open access books is less standardized than for journals. This is arguably because the peer review model for books is more complex.
But what are the reasons for this added complexity? Here are a few key differences:
- Diversity of open access books
- Size of the reviewer pool
- Broader intended audience
Open access books come in various forms and formats, each with distinct structures and review needs. This diversity makes it challenging to apply a uniform review process, especially in certain disciplines and for certain types of books. Add to this the greater length of books compared to journal articles, and you suddenly have a much more open-ended process.
Due to the slower pace of open access book publishing, the volume of book publications is much lower than that of journal articles. This means that MDPI Books does not have as large a pool of active reviewers as MDPI journals. Typically, it is harder to find reviewers for academic books than for journal articles.
Finally, books often do not have the same audience as journal articles. Journal articles tend to cater to niche fields and assume specialist knowledge, whereas books are intended for a more general readership. This raises questions about the appropriateness and necessity of traditional peer review for books, a process usually designed to validate the rigor, originality, and significance of new research findings.
Rethinking peer review for open access books in the AI era
Despite these difficulties and differences, peer review for open access books is still an important process to maintain. No matter what the publication type, peer review remains a foundation of academic integrity.
What we must now adapt to is AI’s inevitable implementation into the peer review process. From reviewer selection and plagiarism detection to content screening and assessment, AI tools are slowly being embedded into editorial workflows.
Roohi Ghosh, Co-Chair of the Peer Review Week Steering Committee 2025, says that it’s no longer a question of whether we should utilize AI during peer review; its adoption is ‘an undeniable reality.’ Instead, we must focus on ‘reimagining peer review to transcend traditional methods, embrace AI’s potential, and ensure that the process retains its essential humanity.’
But how might this integration work for open access books? And what would be the unique pain points involved in using AI to help review long-form academic content?
Content assessment
It is no secret that AI can assess texts much faster than humans. A 2024 study evaluating the speed of AI-assisted evidence reviews found that reviews assisted by AI were completed in 23% less time than those conducted by humans alone (90 hours vs. 118 hours).
AI’s ability to quickly analyse and synthesize information is one of its key strengths. This could save vast amounts of time during the initial stages of the review process, potentially acting as a pre-screening assessment that reviewers can use as a guide. However, when assessing academic books, the emphasis isn’t simply on analysing the information as quickly as possible.
Yes, the review needs to assess the academic integrity of the book. But this is a nuanced evaluation, often driven by academic experience and expertise. Additionally, AI tools lack the comprehension and intuition of humans. Machines generalize, whereas humans are able to comprehend something new and unexpected.
On top of that, AI has been shown to struggle with certain aspects of reviewing academic books, such as context sensitivity, bibliometric analysis, deep contextual understanding, and engaging with visual content like graphs and tables.
In essence, AI doesn’t possess the flexible reasoning and context sensitivity of a human. These abilities, shaped by real-world experiences and abstract conceptualization, are crucial to understanding not just what an academic book is communicating, but what it’s trying to achieve within its subject area as well.
Ultimately, AI should be seen as a complementary tool in the reviewing process for academic books, rather than a standalone alternative.
Selecting reviewers
As well as assessing manuscripts, AI can evaluate the suitability of potential academic reviewers.
Natural language processing (NLP) techniques are being used to analyse manuscripts and match them with reviewers. These AI-driven systems do so by identifying reviewers whose research interests closely align with the subject matter of the manuscript. The aim is to ensure that reviewers who are familiar with the research are chosen, thus improving the quality of peer review. This is especially important for books, where discussions are more extensive and thus require a broader understanding of the research’s surrounding academic context.
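To make this concrete, below is a minimal sketch of the kind of matching such systems perform: it scores candidate reviewers by the textual similarity between a manuscript abstract and short reviewer profiles. The names, profiles, and library choice (scikit-learn’s TF-IDF tools) are illustrative assumptions, not a description of any particular publisher’s system.

```python
# A minimal sketch of NLP-based reviewer matching. Each candidate reviewer is
# represented by a short text profile (e.g. abstracts or keywords from their
# recent publications). All names and texts here are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manuscript_abstract = (
    "An open access monograph on machine learning methods for "
    "analysing historical census records."
)

reviewer_profiles = {
    "Reviewer A": "Digital humanities, historical demography, archival data analysis.",
    "Reviewer B": "Deep learning for image classification and computer vision.",
    "Reviewer C": "Applied machine learning for social science and census data.",
}

# Build a TF-IDF representation of the manuscript and all reviewer profiles,
# then rank reviewers by cosine similarity to the manuscript.
corpus = [manuscript_abstract] + list(reviewer_profiles.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()

for name, score in sorted(zip(reviewer_profiles, scores), key=lambda x: -x[1]):
    print(f"{name}: similarity {score:.2f}")
```

Production systems tend to use richer representations, such as semantic embeddings or citation networks, but the ranking principle is the same.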
Machine learning algorithms are also capable of predicting potential reviewer performance based on past review quality, timeliness, and citation impact. This could help editors select appropriate and reliable reviewers, streamlining one of the key stages of the book production process.
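As a rough illustration of this idea, the sketch below fits a simple classifier on hypothetical past-review records, using quality rating, timeliness, and citation count as features, and then scores a new candidate. The features, data, and model choice are invented for the example; a real system would need far more data and careful validation.

```python
# A toy model of reviewer reliability. The features and training data are
# entirely illustrative and far too small for any real deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per past review: [editor quality rating (1-5), days late, citation count]
X_train = np.array([
    [5, 0, 1200],
    [4, 2, 300],
    [2, 20, 150],
    [3, 10, 800],
    [5, 1, 2500],
    [1, 30, 50],
])
# Label: 1 = the review was rated useful and delivered on time, 0 = otherwise
y_train = np.array([1, 1, 0, 1, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new candidate reviewer with a hypothetical feature vector.
candidate = np.array([[4, 5, 600]])
probability = model.predict_proba(candidate)[0, 1]
print(f"Predicted probability of a timely, high-quality review: {probability:.2f}")
```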
Ethical considerations
As with any use of AI, we must consider the ethical implications of introducing it into the peer review process.
One potential issue with AI-assisted peer review is algorithmic bias. Algorithms are trained on preexisting data, so if that data contains gender or geographical disparities, those biases may be further reinforced during reviewer selection.
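One simple safeguard is to audit the tool’s suggestions. The sketch below, using invented group labels and counts, shows the kind of basic check an editorial team could run, comparing how often candidates from different regions are actually suggested.

```python
# A toy audit of reviewer-selection rates by region. The groups and counts
# are entirely illustrative; a real audit would need carefully curated data.
from collections import Counter

# Hypothetical pool of candidate reviewers, and the subset an AI matching
# tool actually suggested to editors.
candidate_pool = ["EU"] * 60 + ["North America"] * 50 + ["Africa"] * 20 + ["Asia"] * 70
suggested = ["EU"] * 25 + ["North America"] * 20 + ["Africa"] * 2 + ["Asia"] * 13

pool_counts = Counter(candidate_pool)
suggested_counts = Counter(suggested)

# Selection rate per group: suggested reviewers / candidates in the pool.
# Large gaps between groups are a prompt for human scrutiny, not proof of bias.
for group in pool_counts:
    rate = suggested_counts.get(group, 0) / pool_counts[group]
    print(f"{group}: {rate:.0%} of candidates suggested")
```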
Another consideration is the opacity of AI decision-making. Due to the ‘black box’ nature of machine learning algorithms, it is often difficult to understand how recommendations are generated, which makes it hard to justify or defend the decisions of AI-assisted assessments.
Collaboration between AI and humans
Despite its issues, AI isn’t going anywhere. It’s only set to become a bigger presence in all areas of society, including academic publishing. It’s no longer a case of deliberation, but integration.
Going forward, we must learn how best to utilize AI without sidelining human decision-making. Finding a balance between these two modes of review will be key, and this will only be achieved through trial and error. What’s important is that we retain our humanity and best judgment as the two approaches are integrated.
Want to learn more about peer review and open access books? See our FAQ on the topic. Additionally, check out our interview with Jordy Findanis and Laura Bandura-Morgan from OAPEN/DOAB, where we discuss how generative AI may be utilized in the future to enhance peer review in open access book publishing.