
I am an AI governance researcher based in London, UK. I'm currently a research scholar at the Centre for the Governance of AI (GovAI), where I work on the Risk Management Team, which focuses on developing risk management policies for frontier AI. My current project addresses the risks of harmful manipulation from frontier AI.

Before joining GovAI, I was an AI Safety Manager at the Frontier Model Forum (FMF), where I led the biosecurity and nuclear security workstreams. I also contributed to the FMF's work on frontier AI frameworks and on cross-firm information sharing. Before that, I was a senior research associate at the Centre for International Governance Innovation, working on policy solutions for global AI risks; a winter research fellow at GovAI, focused on international compute governance; and a policy analyst in the Government of Canada, working on international AI policy.

I received my M.A. in Global Public Policy and International Organizations from the Norman Paterson School of International Affairs in Ottawa, Canada, and my B.A. in honours political science with minors in French and sociology from McGill University in Montreal, Canada.

In my personal life, I play lots of different sports, most frequently volleyball and soccer. I sing and play the guitar, and I recently started doing improv comedy, which observers have commented I've taken to like a fish to land.
Zaheed Kara (2026), for the Frontier Model Forum
Based on extensive engagement with the nuclear security community, this research update aims to build greater understanding of the risks at the intersection of frontier AI and nuclear security, and to establish a foundation for future collaboration on frontier AI risk management between the frontier AI and nuclear security communities.
Dr. Sarah Case Lackner & Zaheed Kara (2025)
Advanced AI models and systems are rapidly changing the landscape of threats. This report considers at a high level how frontier AI could enhance existing adversary capabilities in carrying out threats to nuclear facilities, focusing in particular on three phases of an attack: target selection, attack planning and skill building, and execution.
Dr. Sarah Case Lackner & Zaheed Kara (2025)
Aimed at policymakers and diplomats focused on nuclear security, this brief provides a concise introduction to AI models and systems. It discusses their origins and development, beneficial use cases, shortcomings and vulnerabilities, and potential future developments in the field that may have implications for policymakers and diplomats.
Zaheed Kara (2025), for the Frontier Model Forum
This issue brief presents a preliminary taxonomy of safeguards designed to reduce the risk of biological misuse stemming from access to frontier AI models. Drawing from discussions with experts within the Frontier Model Forum (FMF) and the broader biosafety and biosecurity communities, this brief outlines the current landscape of AI-bio misuse safeguards, identifies potential future approaches to mitigations, and underscores the importance of implementing societal-level measures as a complement to technical safeguards.
Zaheed Kara (2025), for the Frontier Model Forum
Based on expert discussions among Frontier Model Forum (FMF) member firms and the wider biosafety community, this issue brief highlights emerging industry consensus on a core biosafety threshold for frontier AI. In addition to specifying the point at which an AI model's capabilities in biological domains may require further assessments and/or enhanced safeguards, it also provides initial guidance for evidence-based threshold determinations.
Zaheed Kara (2025), for the Frontier Model Forum
This issue brief outlines a three-tiered approach to responsible reporting that aims to attend to both the benefits of greater transparency and the potential risks associated with information and attention hazards. Drawn from expert discussions held by the Frontier Model Forum (FMF), the approach reflects preliminary thinking across FMF member firms about what information from safety evaluations should be shared with the public at large, what should be disclosed within trusted expert networks only, and what should be kept private.
Zaheed Kara (2025), for the Frontier Model Forum
This issue brief seeks to advance and inform public understanding of frontier AI thresholds. Drawing on insights from experts within the FMF as well as the broader AI safety and security community, this brief elaborates on the importance of thresholds for frontier AI safety frameworks and outlines the different types of thresholds that have been proposed.
Zaheed Kara (2024), for the Frontier Model Forum
This issue brief offers an initial taxonomy and definitions for frontier AI safety evaluations specific to the biological domain, categorized across two dimensions: methodology and domain. Based on input from FMF member firm experts, in addition to a diverse group of external experts from the advanced AI and biological research fields, this brief aims to document and build a preliminary consensus around the current understanding of frontier AI-bio safety evaluations.
Claire Dennis, Stephen Clare, Rebecca Hawkins, Morgan Simpson, Eva Behrens, Gillian Diebold, Zaheed Kara, Ruofei Wang, Robert Trager, Matthijs Maas, Noam Kolt, Markus Anderljung, Konstantin Pilz, Anka Reuel, Malcolm Murray, Lennart Heim, Marta Ziosi (2024)
As AI advances, states increasingly recognise the need for international governance to address shared benefits and challenges. This paper presents a novel framework to identify and prioritise AI governance issues warranting internationalisation.
Zaheed Kara (2024), for the Frontier Model Forum
Drawn from the Frontier AI Safety Commitments as well as published member firm frameworks and expert input, this piece outlines a preliminary consensus among member firms about how to structure frontier AI safety frameworks.
Duncan Cass-Beggs, Stephen Clare, Dawn Dimowo, Zaheed Kara (2024)
This discussion paper explores three emerging global-scale challenges posed by advanced AI that could require international cooperation: realizing the global benefits of AI, mitigating the global risks, and making legitimate choices about the future implications of AI for humanity. The paper draws on existing research and policy efforts, as well as valuable discussions and feedback from many quarters, and proposes the development of an international Framework Convention on Global AI Challenges.