Frontiers releases AI guidance

Frontiers launches AI practical guidance for researchers, editors, and reviewers, and calls for policy evolution.

Kamila Markram: “The guidelines launched today are another step in providing a concrete and practical framework that evolves with researcher engagement.”

Frontiers has announced the release of AI guidance for publishing that covers the entire publication lifecycle – from researchers to editors and peer reviewers – moving beyond simplistic “allowed / not allowed” rules toward practical, responsible routes for AI adoption, while calling for policy to evolve in step with real-world AI use by researchers and reviewers.

Frontiers introduced AI into its research integrity checks a decade ago, the publisher added. Kamila Markram, Frontiers’ co-founder and CEO, sees this latest initiative as continuing that principled innovation: “Frontiers was born digital and has always been an AI-native organization, committed to developing and delivering state-of-the-art AI tools and technology that aid researchers at every stage of the publishing process and safeguard quality and integrity in peer-review. We back this ethos with safe and responsible use of AI that listens to community needs and feedback. The guidelines launched today are another step in providing a concrete and practical framework that evolves with researcher engagement.”

The guidance responds to what is already majority practice across the sector, as highlighted in Frontiers’ recent whitepaper, which showed that most peer reviewers now use AI and that policy must keep pace. AI is already embedded across publication stages, and this requires structured, transparent governance rather than ad hoc controls, the publisher continued. Elena Vicario, Frontiers’ Director of Research Integrity, commented on the launch: “AI use in research and science publishing is already here and provides an unparalleled opportunity to advance scientific discovery and innovation. The publishing industry should not present roadblocks to AI adoption but roadmaps that provide confidence and protect integrity for researchers, editors and reviewers alike when using AI throughout the publishing journey. This is why Frontiers produced this guidance and we are proud to have taken this first step in progressing policy around AI use in research publishing.”

This is the first framework to provide clear, operational routes forward for AI use in every publishing role (whether researcher, editor, or reviewer), promoting AI use that is accountable, transparent, risk-aware, and innovation-enabling, added Frontiers.

Rather than roadblocking AI, Frontiers retains the principle that the human remains accountable and translates it into responsible practice through the BE WISE framework:

  • B — Be transparent
  • E — Ensure accountability
  • W — Work with the right tools
  • I — Inform yourself
  • S — Safeguard integrity
  • E — Embed equity

Frontiers explains that taken together, BE WISE principles provide a structured way forward — enabling innovation while protecting research integrity.

The guidance introduces structured “permission-to-proceed” checkpoints across all roles, operationalizing the BE WISE framework. Researchers, editors, and reviewers are advised to use AI only if they can answer yes to four core checks at every key point:

  • Impact and oversight
  • Policies and governance
  • Permitted inputs
  • Verification

If they cannot, AI should be limited to low-impact tasks or not used at all.

Ready-to-use tools and prompts that make best practice easy

The guidance also provides practical, tested, ready-to-use prompts and templates, including:

  • Governance checks
  • Audit logs
  • Reproducibility prompts
  • Stage-specific workflows

These tools allow researchers and editors to embed strong, tried-and-tested best practice into daily workflows, making responsible AI use practical, not theoretical, says Frontiers.

Frontiers’ AI guidance is designed as a living framework, not a static document. It provides a responsible structure to advance AI across publishing and is intended to evolve through active community feedback.

Frontiers invites researchers, publishers, and industry bodies to engage in shaping practical policies that enable – not hinder – AI use in research publishing, ensuring innovation strengthens trust, transparency, and integrity across the scientific record.

