Large language models and other artificial-intelligence systems could be excellent at synthesizing scientific evidence for policymakers — but only with appropriate safeguards and humans in the loop.
Recent advances in artificial intelligence (AI) have stoked febrile commentary around large language models (LLMs), such as ChatGPT, that can generate text in response to typed prompts. Although these tools can benefit research1, there are widespread concerns about the technology — from job losses and the effects of over-reliance on AI assistance, to AI-generated disinformation undermining democracies.
Less discussed is how such technologies might be used constructively, to create tools that sift and summarize scientific evidence for policymaking. Across the world, science advisers act as knowledge brokers, providing presidents, prime ministers, civil servants and politicians with up-to-date information on how science and technology intersect with societal issues.