
Wellstone Center for AI & Relational Intelligence



Translating Public Health Frameworks to Guide Governance in Frontier AI

Updated: Aug 20, 2025




ABSTRACT


Public health experts have long noted that threats to population health can escalate quickly and unpredictably. Response frameworks have been built to address these threats, which are best mitigated by proactive, coordinated strategies that enable rapid identification and containment (Fineberg, 2014). Targeted responses are often difficult to implement, however, because many emerging risks remain undetected until the adverse effects become widespread. This complexity has been well managed in public health through structured, multifaceted frameworks, many of which can offer valuable insights to the field of frontier AI governance, where globalization and the pace of change pose comparable challenges (Anderljung et al., 2023). Drawing on lessons from infectious disease surveillance and well-established public health frameworks, this paper proposes a population-level AI risk governance model to close the pacing gap between AI innovation and governance capacity. As the frontier AI landscape evolves, building a robust international governance infrastructure will be essential to ensuring that protective measures scale at the rate of evolving threats.


1. Introduction


Technology is advancing at an exponential pace, and governance systems are scrambling to keep up. Particular challenges are arising in the domain of frontier AI, which comprises the highly capable general-purpose models being developed at the forefront of technological capability (Anderljung et al., 2023). Governance challenges in frontier AI closely parallel those encountered by policymakers in public health. Emerging threats are unpredictable and nonlinear, often going undetected until there are significant adverse effects (Fineberg, 2014; Gostin et al., 2020). Public health leaders have worked to address these challenges through layered prevention initiatives, rapid response protocols, equity in tool and resource distribution, and an integrated system for monitoring and surveillance (CDC, 2018; Fineberg, 2014). These strategies are designed to identify and contain threats before they escalate into full public health crises.


The governance of frontier AI could benefit from a similar anticipatory posture. As in public health, risk management in frontier AI must be grounded in early warning systems that identify hazardous capabilities before deployment, coupled with safeguards and capacity-building measures that establish protective infrastructure across jurisdictions. This paper adapts key elements from public health frameworks to create a population-level AI risk governance model built around four primary pillars: anticipatory surveillance, layered mitigation, equitable access to safety infrastructure, and rapid response planning. While elements of these approaches are emerging to varying degrees in governance systems around the globe, the concepts have yet to be synthesized into a unified, structured model. The public health lens provides a shared vocabulary for discussing frontier AI governance concerns, facilitating increased coordination and scalability across jurisdictions. Such alignment is critical in developing a global governance framework that can keep pace with technological advancements.


2. Lessons from Public Health Risk Management


Governance challenges in frontier AI and public health share several defining features: they are systemic, affecting many interconnected domains; emergent, with threats that shift in form and scale over time; and only partially observable, with some risks remaining hidden until they manifest at scale. This section examines how the primary lessons from public health risk management shape the pillars of the proposed model and can be operationally applied to frontier AI governance.


2.1 Surveillance and Early Warning Systems


The use of surveillance and early warning systems is a key facet of public health management. The World Health Organization (WHO) maintains a highly effective surveillance infrastructure to monitor influenza patterns and coordinate countermeasures in real time, supporting the development of vaccines that save tens of thousands of lives annually (WHO, 2019; Iuliano et al., 2018). The system requires interdisciplinary collaboration to coordinate distributed sensing, standardized reporting, and data-sharing. A comparable surveillance system in frontier AI could continuously monitor emerging model capabilities, deployment patterns, and indicators of potential misuse across sectors. An effective infrastructure would require standardized protocols for capability and safety evaluation, mandatory reporting of frontier model training and significant deployments, and secure channels for sharing risk-relevant information between private labs, regulatory enforcement authorities, and multilateral bodies to ensure coordinated surveillance and response. This type of infrastructure would function as an early-warning network for AI, providing the system-level situational awareness needed to address cascading harms before they escalate. Some of this foundational work is already underway through organizations such as the UK AI Safety Institute and the European Commission’s AI Office, but realizing its full potential will require ensuring interoperability, achieving broad adoption, and sustaining cross-sector and international cooperation.
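To make the notion of standardized reporting and early warning more concrete, the minimal Python sketch below shows one hypothetical shape a capability report and a threshold-based alert rule might take. The field names, evaluation labels, thresholds, and the `early_warning_flags` helper are illustrative assumptions rather than any existing standard or agency API.

```python
# Illustrative sketch only: a hypothetical standardized report that a frontier
# lab might file with a surveillance body, plus a simple early-warning rule.
# Field names and thresholds are assumptions for exposition, not an existing standard.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CapabilityReport:
    model_id: str
    developer: str
    training_compute_flop: float                                   # estimated training compute
    eval_scores: Dict[str, float] = field(default_factory=dict)    # evaluation name -> score
    deployment_contexts: List[str] = field(default_factory=list)

# Hypothetical alert thresholds a surveillance body might set per evaluation.
ALERT_THRESHOLDS = {
    "autonomous_replication": 0.2,
    "cyber_offense": 0.5,
    "bio_uplift": 0.1,
}

def early_warning_flags(report: CapabilityReport) -> List[str]:
    """Return the names of evaluations whose scores exceed their alert threshold."""
    return [
        name for name, threshold in ALERT_THRESHOLDS.items()
        if report.eval_scores.get(name, 0.0) > threshold
    ]

report = CapabilityReport(
    model_id="frontier-model-x",
    developer="example-lab",
    training_compute_flop=3e25,
    eval_scores={"cyber_offense": 0.62, "bio_uplift": 0.04},
    deployment_contexts=["api", "enterprise"],
)
print(early_warning_flags(report))  # -> ['cyber_offense']
```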


2.2 Layered Risk Mitigation


Effective interventions in public health are rarely singular. Instead, they operate through layered systems designed to reduce the probability of harm, detect it early if it occurs, and limit its spread or severity once it is identified. These layers of risk mitigation are typically categorized as primary prevention (eliminating or reducing risk factors), secondary prevention (detecting and addressing early signs of harm), and tertiary prevention (minimizing impact after harm has occurred) (Gordis, 2014). For example, in HIV prevention, public health agencies combine population-wide education campaigns, early diagnostic testing, and antiretroviral treatment programs to reduce transmission and mortality (UNAIDS, 2022). The redundancy created by this layering ensures that when one protective measure does not capture all potential risks, others are positioned to intercept remaining threats, creating overlapping safeguards that limit the spread and impact of harm.


Layered mitigation is central to effective public health campaigns, and it would provide a critical interwoven safety net in the realm of frontier AI. Primary prevention might involve embedding safety and alignment constraints at the model development stage, including adversarial testing, interpretability work, and targeted capability red-teaming to reduce the emergence of hazardous capabilities. Secondary prevention could deploy continuous monitoring and capability evaluation during model use to detect early indicators of misuse or dangerous emergent behavior. Tertiary prevention would include contingency mechanisms, such as model rollback, restricted access, or coordinated incident response, to contain and neutralize any risks once identified. Such layered governance reduces reliance on any single point of defense, making the overall mitigation system more resilient. Elements of this approach are beginning to appear in AI safety practice, including pre-deployment evaluations and post-deployment monitoring frameworks under development by the UK AI Safety Institute and the U.S. AI Safety Institute Consortium. However, these measures remain unevenly implemented across jurisdictions. Embedding multi-layered safeguards into coordinated international governance frameworks would strengthen the global capacity to anticipate, contain, and moderate risks as frontier AI advances.
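As a rough illustration of how such layering creates redundancy, the minimal Python sketch below models primary, secondary, and tertiary safeguards as successive checks that an incident would have to slip past before reaching the population. The layer names, the example checks, and the `first_triggered_layer` helper are hypothetical simplifications, not an existing framework or tool.

```python
# Illustrative sketch only: layered safeguards modeled as successive checks.
# Layer names and the example checks are assumptions for exposition.
from typing import Callable, Dict, List, Tuple

# (layer name, check that returns True if the incident is intercepted at this layer)
Safeguard = Tuple[str, Callable[[Dict[str, bool]], bool]]

def first_triggered_layer(incident: Dict[str, bool], layers: List[Safeguard]) -> str:
    """Return the first layer that intercepts the incident, or 'unmitigated'."""
    for name, check in layers:
        if check(incident):
            return name
    return "unmitigated"

layers: List[Safeguard] = [
    # Primary prevention: constraints applied before deployment.
    ("primary: pre-deployment red-teaming", lambda i: i.get("caught_in_redteam", False)),
    # Secondary prevention: monitoring during use.
    ("secondary: runtime misuse monitoring", lambda i: i.get("flagged_by_monitoring", False)),
    # Tertiary prevention: containment after harm is detected.
    ("tertiary: rollback / restricted access", lambda i: i.get("containment_available", False)),
]

incident = {"caught_in_redteam": False, "flagged_by_monitoring": True}
print(first_triggered_layer(incident, layers))  # -> 'secondary: runtime misuse monitoring'
```

The point of the sketch is simply that an incident missed by one layer can still be intercepted by the next, which is what makes over-reliance on any single defense avoidable.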


2.3 Equitable Access to Protective Measures


In public health, the effectiveness of threat management depends not only on the strength of preventive measures but also on their equitable distribution. Outbreak control is undermined when vaccines, diagnostics, or treatments are concentrated in highly resourced regions while other swaths of the global population remain vulnerable. For example, during the COVID-19 pandemic, unequal access to testing and vaccination contributed to prolonged transmission and increased global mortality (Usher, 2021). Public health frameworks address this challenge through capacity-building programs, supply chain coordination, and international funding mechanisms to ensure that protective measures reach all regions, not just those with advanced infrastructure.


Safety infrastructure in frontier AI might include model evaluation tools, interpretability methods, and misuse detection systems. Many of the countries leading frontier AI development are already moving toward enacting such structures, but the systems will not be effective in isolation. It is critical that they be accessible beyond the small set of countries and industries developing frontier models. Equitable access gives lower-resourced jurisdictions the means to identify and respond to dangerous capabilities, closing weak points in the global safety net. Practical steps to increase accessibility could include the production of open-access safety toolkits, shared evaluation benchmarks, and readily available training programs for regulators and researchers. Initiatives such as the OECD AI Policy Observatory and the Global Partnership on AI have taken steps toward such knowledge-sharing, but broader coordination and funding will be necessary to close capability gaps and ensure that governance measures account for all populations in order to protect the whole.


2.4 Rapid Response and Containment


Infectious disease control in public health relies on rapid, coordinated action once a credible threat is detected. Mechanisms such as the International Health Regulations (IHR) require countries to report certain outbreaks and enable the World Health Organization to declare a Public Health Emergency of International Concern (PHEIC), triggering global coordination for containment (World Health Organization, 2016). These systems function best when response protocols are predefined, roles are clear, and resources can be mobilized rapidly, since the speed of action often determines whether an emerging threat is contained or escalates into a crisis. Formalized rapid response systems could serve a similar function in frontier AI, facilitating immediate action when hazardous capabilities or significant misuse risks are identified. This could involve elements like predefined incident classification levels, cross-border notification protocols, and clear authority for initiating containment measures, including model suspension, output restrictions, and broader moratoriums. Existing initiatives, including the incident-reporting working groups of the U.S. AI Safety Institute Consortium and the UK’s exploratory efforts on AI emergency preparedness exercises, reflect an emerging recognition of this need. However, without internationally agreed-upon triggers, information-sharing pathways, and enforcement authority, response efforts risk remaining delayed and fragmented. Embedding rapid response protocols into global governance frameworks would help ensure that interventions can be deployed at the pace required to contain high-consequence incidents in frontier AI.
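As a concrete illustration of what predefined incident classification and response triggers could look like, the brief Python sketch below maps hypothetical severity levels to cumulative predefined actions. The level names, the playbook entries, and the `trigger_response` helper are assumptions for exposition, not an agreed international protocol.

```python
# Illustrative sketch only: predefined incident severity levels and the
# responses they might trigger. Level names and actions are assumptions,
# not an agreed international standard.
from enum import IntEnum
from typing import List

class IncidentLevel(IntEnum):
    ADVISORY = 1   # anomalous capability signal, no confirmed harm
    ELEVATED = 2   # credible misuse risk identified
    EMERGENCY = 3  # confirmed high-consequence incident in progress

# Hypothetical mapping from severity to predefined actions.
RESPONSE_PLAYBOOK = {
    IncidentLevel.ADVISORY: ["notify developer", "open joint investigation"],
    IncidentLevel.ELEVATED: ["cross-border notification", "restrict model outputs"],
    IncidentLevel.EMERGENCY: ["suspend model access", "convene multilateral response team"],
}

def trigger_response(level: IncidentLevel) -> List[str]:
    """Return the predefined actions for a given incident level, cumulative across lower levels."""
    actions: List[str] = []
    for lvl in IncidentLevel:
        if lvl <= level:
            actions.extend(RESPONSE_PLAYBOOK[lvl])
    return actions

print(trigger_response(IncidentLevel.ELEVATED))
# -> ['notify developer', 'open joint investigation',
#     'cross-border notification', 'restrict model outputs']
```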


3. Challenges and Integration Pathways in Frontier AI Governance


In the prior section, we discussed four elements adapted from established public health frameworks to inform governance in frontier AI: anticipatory surveillance, layered mitigation, equitable access, and rapid response. These elements function most effectively when implemented jointly as part of an integrated architecture in which each component strengthens the others, creating a dynamic and resilient system. The following discussion examines each element in turn, outlining the key implementation challenges in the AI domain and potential strategies for overcoming them.


In translating anticipatory surveillance into the governance of frontier AI, the primary obstacles lie in the absence of standardized capability evaluation protocols, the reluctance of private developers to share sensitive and proprietary information, and the reality that model development cycles move faster than most evaluation processes. Overcoming these barriers would require internationally recognized standards, formalized channels for sharing risk-relevant data, and incentive structures that encourage stakeholder engagement. Such incentives might include increased liability protections for cooperative actors and procurement preferences, where governments and large institutions prioritize contracts with organizations that meet established safety and compliance standards, creating a market advantage through cooperation. By directly linking adherence to higher safety standards with tangible economic benefits, such preferences can accelerate adoption and promote alignment across sectors.

Layered mitigation faces its own constraints, particularly in the concentration of resources needed for adversarial testing, interpretability research, and real-time monitoring within a small number of well-funded institutions. There is also a persistent risk that political or budgetary pressures could lead to over-reliance on a single defensive measure. Embedding redundancy into licensing or deployment requirements, funding open-source safety tooling, and conducting regular resilience audits to ensure multiple safeguards remain functional can help address these weaknesses.

Ensuring equitable access to safety infrastructure is complicated by the fact that safety tools, expertise, and compute capacity are disproportionately concentrated in a handful of countries. In some cases, geopolitical considerations may limit the willingness of industry leaders to share certain information or infrastructure. Addressing these disparities will require dedicated funding for global safety infrastructure, the development of modular safety tools that can be shared without creating dual-use risks, and partnerships with established global health and development organizations to build AI safety capacity in lower-resourced regions.

Rapid response mechanisms in frontier AI could be hindered by the lack of internationally agreed triggers for intervention, the absence of clear cross-border enforcement authority, and the risk of political delays in activating containment measures. Drawing from the International Health Regulations model, AI governance could benefit from predefined emergency protocols, pre-designated response teams, and routine simulation exercises to test coordination before real incidents occur (WHO, 2016).


By anticipating these challenges and embedding solutions into governance design from the outset, the global community will be in a stronger position to move from conceptual frameworks to operational governance systems capable of scaling with the speed and complexity of frontier AI.


4. Conclusion

The lessons drawn from established frameworks in public health offer a valuable roadmap for governance in frontier AI. Anticipatory surveillance, layered mitigation, equitable access, and rapid response are mutually reinforcing components that create a governance architecture capable of scaling at the pace of technological advancement. Historical outcomes in public health demonstrate that when protective measures are implemented as coordinated, multi-level interventions, they can significantly reduce the probability, scale, and duration of human harm (Fineberg, 2014; Gostin et al., 2020). Embedding these elements into frontier AI governance from the outset would strengthen global resilience, enable faster and more decisive action in the face of emerging risks, and help ensure that technological progress advances in alignment with the public good. Further research should examine the operational feasibility of these measures in the AI context, test their effectiveness through simulation and scenario analysis, and identify the institutional structures best positioned to synchronize them at a global scale.


References


Anderljung, M., Smith, T., Leung, J., Russell, B., Hennigan, T., Bengtsson, L., Cho, K., Hobbhahn, M., Korinek, A., Leike, J., McAleese, N., Sastry, G. and Shevlane, T., 2023. Frontier AI regulation: Managing emerging risks to public safety. Centre for the Governance of AI. Available at: https://doi.org/10.48550/arXiv.2307.03718

Centers for Disease Control and Prevention (CDC), 2018. Public health surveillance and data.

Fineberg, H.V., 2014. Pandemic preparedness and response — lessons from the H1N1 influenza of 2009. New England Journal of Medicine, 370(14), pp.1335–1342. Available at: https://www.nejm.org/doi/full/10.1056/NEJMra1208802

Gostin, L.O., Friedman, E.A. and Wetter, S.A., 2020. Responding to COVID-19: How to navigate a public health emergency legally and ethically. Hastings Center Report, 50(2), pp.8–12. Available at: https://pubmed.ncbi.nlm.nih.gov/32219845/

Gordis, L., 2014. Epidemiology. 5th ed. Philadelphia: Elsevier/Saunders.

Iuliano, A.D., Roguski, K.M., Chang, H.H., Muscatello, D.J., Palekar, R., Tempia, S., Cohen, C., Gran, J.M., Schanzer, D., Cowling, B.J., Wu, P., Kyncl, J., Ang, L.W., Park, M., Redlberger-Fritz, M., Yu, H., Espenhain, L., Krishnan, A., Emukule, G., van Asten, L., Pereira da Silva, S., Aungkulanon, S., Buchholz, U., Widdowson, M.A. and Bresee, J.S., 2018. Estimates of global seasonal influenza-associated respiratory mortality: a modelling study. The Lancet, 391(10127), pp.1285–1300. Available at: https://pubmed.ncbi.nlm.nih.gov/29248255/

UNAIDS, 2022. Global AIDS update 2022: In danger. Geneva: UNAIDS.

Usher, A.D., 2021. COVID-19 vaccines for all? The Lancet, 397(10268), pp.1733–1734.

World Health Organization (WHO), 2016. International Health Regulations (2005). 3rd ed.

World Health Organization (WHO), 2019. Vaccines against influenza WHO position paper – November 2012. Weekly Epidemiological Record, 87(47), pp.461–476.

World Health Organization (WHO), 2023. Global influenza surveillance and response system.
