A Joint Publication by Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence
Published: November 2025
Resource Type: Editorial | Cochrane Database of Systematic Reviews
Overview
PACKS Africa is pleased to share this important 2025 position statement outlining how leading global evidence organisations are approaching the responsible use of artificial intelligence (AI) in evidence synthesis. As the role of AI continues to expand across research and decision-making, this joint editorial provides clear guidance on ensuring that automation strengthens—rather than undermines—the integrity of evidence.
The statement emphasises accountability, transparency, ethical responsibility, and the need for continuous human oversight when integrating AI tools into systematic reviews and other forms of evidence synthesis.
Why This Resource Matters
This publication is a valuable guide for researchers, evidence practitioners, and institutions across Africa and beyond. It deepens understanding of:
- How AI can be used responsibly in systematic reviews
- Risks and limitations associated with current AI systems
- Reporting expectations when using AI tools in evidence production
- Standards recommended by the RAISE framework (Responsible use of AI in evidence Synthesis)
- The role of human judgment in maintaining rigour and credibility
With the growing interest in applying AI to support evidence-informed policymaking, this resource provides clarity on what good practice looks like.
Key Highlights
- Evidence synthesists remain fully responsible for the quality and integrity of their work, regardless of AI use.
- AI can support efficiency, but its outputs must always be validated by humans.
- Transparency in reporting AI use is essential for reproducibility and trust.
- Tool developers are encouraged to provide clearer information on performance, limitations, and potential biases.
- The position statement aligns with the evolving RAISE recommendations and will be updated as the field progresses.