
Journal article

Large language model enhanced framework for systematic reviews and meta-analyses

Abstract:
Objective: To evaluate and synthesise current applications of large language models (LLMs) in systematic reviews and meta-analyses (SRMAs), identify key limitations and propose an enhanced theoretical framework to improve the efficiency, scalability and reliability of evidence synthesis.

Methods and analysis: We conducted a narrative review of recent studies applying LLMs across key SRMA stages. A total of 21 publications were analysed for model type, task application, accuracy metrics and workflow impact. Building on this evidence base, we designed a comprehensive LLM-enhanced SRMA framework that categorises LLM roles as consultants and assistants, integrates human-in-the-loop strategies and uses retrieval-augmented generation (RAG) and agent-based architectures to address critical challenges including hallucinations, bias and workflow inefficiency.

Results: The reviewed literature demonstrated that LLMs can support various SRMA tasks with reported accuracy ranging from 61% to 99%, showing particular promise in literature screening and data extraction. Our proposed framework conceptualises modular integration of LLMs across all six SRMA stages, with LLMs serving as consultants for research question formulation and search strategy development and as assistants for task automation including abstract screening and structured data extraction. The framework incorporates RAG technology to reduce hallucinations by grounding outputs in retrieved literature and employs agent-based orchestration for complex analytical workflows. Theoretical analysis suggests potential for significant efficiency gains while maintaining methodological rigour through strategic human oversight.

Conclusion: LLMs offer substantial theoretical potential to transform evidence synthesis by improving efficiency, scalability and consistency across SRMA workflows. The proposed LLM-enhanced framework provides a systematic, theoretically grounded approach for integrating advanced artificial intelligence capabilities into existing SRMA methodologies while preserving essential human oversight and analytical integrity. Future empirical studies are needed to validate the framework’s practical effectiveness, establish implementation protocols and demonstrate real-world benefits in evidence-based medicine.
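To make the RAG component described in the abstract concrete, the sketch below shows one way a grounded abstract-screening step could be wired up. It is an illustration only, not code from the article: the bag-of-words retriever, the prompt format and the call_llm placeholder are all assumptions, and a real deployment would substitute the review team's chosen retriever and LLM client, with a human reviewer verifying every decision.

```python
"""Illustrative sketch (not from the paper): a minimal RAG-style screening step.

Retrieves the passages most relevant to an inclusion criterion and builds a
grounded prompt so the screening decision cites retrieved text rather than the
model's parametric memory. `call_llm` is a hypothetical placeholder.
"""
from collections import Counter
import math


def score(query: str, passage: str) -> float:
    """Cosine similarity over bag-of-words counts (stand-in for a real retriever)."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    num = sum(q[t] * p[t] for t in set(q) & set(p))
    denom = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in p.values()))
    return num / denom if denom else 0.0


def retrieve(criterion: str, passages: list[str], k: int = 3) -> list[str]:
    """Return the k passages most similar to the inclusion criterion."""
    return sorted(passages, key=lambda p: score(criterion, p), reverse=True)[:k]


def build_screening_prompt(criterion: str, abstract: str, evidence: list[str]) -> str:
    """Ground the include/exclude question in retrieved evidence to limit hallucination."""
    cited = "\n".join(f"[{i + 1}] {e}" for i, e in enumerate(evidence))
    return (
        f"Inclusion criterion: {criterion}\n"
        f"Candidate abstract: {abstract}\n"
        f"Retrieved evidence:\n{cited}\n"
        "Answer INCLUDE or EXCLUDE, quoting the evidence numbers you relied on."
    )


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder, not a real API: wire up the team's preferred LLM client here.
    raise NotImplementedError


if __name__ == "__main__":
    criterion = "randomised controlled trials of digital health interventions in adults"
    candidate = "We report a randomised trial of a smartphone app for blood pressure control in adults."
    corpus = [
        "The study randomised 240 adults to app-based coaching or usual care.",
        "A retrospective chart review of paediatric asthma admissions.",
        "Protocol for a cluster randomised trial of telemonitoring in hypertension.",
    ]
    prompt = build_screening_prompt(criterion, candidate, retrieve(criterion, corpus))
    print(prompt)  # a human reviewer would check the model's cited evidence before accepting the decision
```

The grounding step matters because the model is asked to justify its INCLUDE/EXCLUDE call against numbered retrieved passages, which a human screener can audit, consistent with the human-in-the-loop emphasis in the framework.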
Publication status:
Published
Peer review status:
Peer reviewed

Publisher copy:
10.1136/bmjdhai-2025-000017

Authors


Institution:
University of Oxford
Role:
Author
Institution:
University of Oxford
Division:
MPLS
Department:
Engineering Science
Sub department:
Engineering Science
Role:
Author


Publisher:
BMJ Publishing Group
Journal:
BMJ Digital Health & AI More from this journal
Volume:
1
Issue:
1
Article number:
bmjdhai-2025-000017
Publication date:
2025-10-08
Acceptance date:
2025-08-06
DOI:
10.1136/bmjdhai-2025-000017
EISSN:
3049-575X


Language:
English
Keywords:
Source identifiers:
3366382
Deposit date:
2025-10-13
This ORA record was generated from metadata provided by an external service. It has not been edited by the ORA Team.
