Advancements in artificial intelligence (AI) present both significant opportunities and potential risks for global health and safety. To address these challenges, the RAND Corporation and the Nuclear Threat Initiative co-hosted a workshop on February 11, 2025, alongside the AI Action Summit in Paris. The event, titled “AI Safety Institutes and AIxBio Governance: A Discussion with Biosecurity Experts and AI Model Developers,” gathered policymakers, technical experts, and stakeholders from various sectors to explore the intersection of AI and biosecurity.
Participants worked to identify emerging challenges and formulate actionable solutions. A key theme was the need to shift from broad hypothetical scenarios to specific, measurable indicators in order to manage the risks of AI in biological contexts effectively. Experts recommended an “if–then” hazard-threshold framework: agreeing in advance that if a defined indicator crosses a threshold, then a specific mitigation is triggered. This enables proactive mitigation even when complete risk quantification is not feasible.
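To make the idea concrete, the “if–then” framework can be thought of as a set of pre-agreed rules mapping measurable indicators to mitigations. The sketch below is illustrative only; the indicator names, threshold values, and mitigation actions are hypothetical and not drawn from the workshop.

```python
from dataclasses import dataclass

@dataclass
class HazardThreshold:
    """One if-then rule: if a measured indicator meets or exceeds
    a pre-agreed threshold, a predefined mitigation is triggered."""
    indicator: str    # measurable capability or misuse signal
    threshold: float  # trigger level agreed on in advance
    mitigation: str   # action taken when the threshold is crossed

def triggered_mitigations(measurements: dict[str, float],
                          rules: list[HazardThreshold]) -> list[str]:
    """Return the mitigations whose thresholds are met or exceeded.

    Unmeasured indicators default to 0.0, i.e. no trigger."""
    return [r.mitigation for r in rules
            if measurements.get(r.indicator, 0.0) >= r.threshold]

# Hypothetical rule set and evaluation results, for illustration only.
rules = [
    HazardThreshold("protocol_uplift_score", 0.7, "restrict model access"),
    HazardThreshold("screening_evasion_rate", 0.5, "notify biosecurity partners"),
]
print(triggered_mitigations({"protocol_uplift_score": 0.8}, rules))
# -> ['restrict model access']
```

The point of encoding rules this way is that mitigations are decided before a threshold is crossed, so a response does not depend on first quantifying the full risk.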
Strategies for Risk Assessment and Policy Adaptation
A significant focus of the workshop was the development of collaborative methodologies for risk assessment. Experts advocated for creating safe proxy environments to evaluate potential misuse of AI tools, particularly in scenarios that could affect healthcare delivery or public infrastructure. This approach would enable thorough testing of both general and specialized AI applications against misuse pathways.
In addition to risk assessment, participants emphasized the necessity for adaptive governance systems that can evolve alongside technological advancements. Effective policies should incorporate feedback from researchers, policymakers, and civil society, allowing for continuous updates to safeguards as the landscape of risks changes.
Encouraging Collaboration and Innovation
Improving communication and fostering information-sharing between the AI and biosecurity communities emerged as another critical point of discussion. Participants agreed that enhancing privacy-preserving information sharing could facilitate effective early warning systems and responses to potential threats.
To support the advancement of safe AI practices, the group discussed the importance of co-developing technical solutions, including access controls, secure computing environments, and auditing mechanisms. These innovations are essential for reinforcing biosafety norms in the face of rapidly evolving technology.
The partnership between RAND and NTI was highlighted as a valuable initiative for bridging diverse perspectives in this critical area. Participants called for ongoing collaboration across sectors to maintain global attention on AI-related biosecurity risks. The insights gathered during the workshop underscore the pressing need for comprehensive strategies that can manage the double-edged nature of AI in the context of public health and safety.
