Building Resilience to AI’s Disruptions to Emergency Response
False alarms are no fun, particularly when it’s the third automatic fire alarm of the shift, in the middle of the night, at the same shopping mall or office complex. You know it’s a false alarm—just as the previous two were—but the job demands that you respond. That’s the nature of emergency response or, at least, that’s how things are done today.
What happens when, instead of being plagued by a slew of false alarms on the occasional shift, the false alarms come every shift? What if, instead of 1 in 10 calls being false alarms because of a faulty system, the vast majority of calls were intentionally false, designed to overwhelm emergency response? If there are nine fake calls for a structure fire in a row, should you respond to the 10th call? If that call turns out to be legitimate but you opted for a reduced response, what are the legal and ethical implications?
False alarms are a tolerated nuisance, but an emergency response system that’s overwhelmed with AI-generated incidents is a crisis in the making. And that’s only the tip of the iceberg of what AI can do to disrupt emergency response. We aren’t ready for it.
New era of disruptions
AI is changing our world very quickly. Advanced systems already meet or exceed human performance across various domains. For example, OpenAI’s o1 system, which was released in September 2024, reached expert Ph.D.-level performance on postgraduate-level physics, chemistry and biology questions that were curated for high difficulty. A graph of AI capabilities across various domains looks like a pile of hockey sticks, with rapid improvements fueled by huge investments and no end in sight.
Many experts, including those who are at the forefront of AI development, warn that AI soon could cause serious harm. U.N. Secretary-General António Guterres warned that ungoverned AI is an “area of existential concern.”
We are at the dawn of a new era of disruptions to emergency response.
Real-world implications
A Tesla Cybertruck explosion in Las Vegas in January 2025 is believed to be the first confirmed instance of AI being used to plan an attack. According to police, the suspect used ChatGPT to obtain information about explosives and detonation. As troubling as that is, what’s scarier is what criminals will be able to do as AI systems become more capable.
On Jan. 30, 2025, the Center for AI Policy, in conjunction with Fairfax County, VA, gathered public safety officials, federal agencies and AI experts to examine AI-enabled emergency response disruptions and to strategize ways to build resilience against emerging threats. During the event, participants explored scenarios that showcased AI’s capabilities and what’s expected in the near future. The takeaways were sobering.
AI’s current capabilities could disrupt 9-1-1 operations through enhanced cyberattacks, misinformation that undermines the public’s trust in first responders and AI-generated calls that are virtually indistinguishable from legitimate ones. The threat is getting worse with the rapid growth of AI-driven automation and reasoning. Soon, AI systems could coordinate large-scale misinformation campaigns, orchestrate complex prank-call and swatting attacks, and overwhelm agencies with false emergency reports.
Recommendations and key stakeholders
Building resilience requires interagency collaboration, support from trade associations and traditional private sector partners, and, ultimately, the AI companies themselves. Fire/rescue, law enforcement, EMS, 9-1-1, emergency management and public information officers all will have unique insights—and unique roles—for addressing AI’s threats. Below are five steps that agencies should be taking.
- Establish verification protocols. Consider this scenario: A deepfake voice clone of the station officer calls the firehouse and instructs crew members to take their unit out of service for a training detail across town. Then, when a first-due box gets dispatched, the unit is unavailable or out of position. AI tools make this possible today, with no technical expertise required and at very little cost. Are your personnel primed to recognize and question unusual requests? A simple mitigation is to implement callback verification procedures for any nonstandard orders or resource reallocations.
- Update standard operating procedures. Departments need clear guidelines for handling suspected AI-generated false alarms. For example, if a 9-1-1 center receives dozens of similar calls that report a nonexistent incident, what’s the protocol? When should resources still be dispatched, and what level of response is appropriate? To some extent, 9-1-1 centers and law enforcement agencies already are revising protocols to deal with the increase in swatting incidents, but AI will transform the scale and sophistication of these threats. Furthermore, what works in the law enforcement context might not work for fire/rescue.
- Develop trusted information channels with personnel. Heaven forbid that we deal with another pandemic, but if you thought debates over the efficacy of masks and the safety of vaccines were bad in 2020, just imagine how much more difficult it would be in a world full of AI-fueled misinformation. Agencies must double down on their efforts to help personnel and the public separate fact from fiction where it matters for the mission.
- Train on your Mark 1 kits. AI systems can be used to assist in the creation of chemical and biological weapons. They can provide step-by-step instructions for how to produce pathogens or toxins, including which websites sell components, how to convince a company that an individual is an acceptable buyer and how to troubleshoot lab processes. The level of expertise that’s needed to produce these weapons is decreasing as AI capabilities improve. You don’t need to remember the anthrax attacks of 2001 to appreciate the social upheaval and the challenges for first responders that a spike in chemical and biological threats would bring.
- Stay up to speed on AI’s benefits and risks. There’s no doubt that AI will improve operations in several ways. Fire department personnel are going to hear about that, particularly from companies that have an AI-enhanced product to sell. However, personnel also must hear about AI capabilities that present risks, such as new deepfake tools and agentic capabilities. Make sure that processes for tracking new threats and for distributing hazard information across the department incorporate the latest news about rapidly evolving AI tools that could affect operations.
Awareness of the risks will help in identifying solutions and getting buy-in. During the Fairfax tabletop, public safety officials identified strategies for mitigating some (but not all) AI risks, and their perception of the risk changed radically. According to a survey of emergency responders that was conducted before the tabletop, 87 percent indicated that they weren’t at all or were only slightly concerned about AI disrupting operations. By the event’s conclusion, the overwhelming majority of participants were significantly more concerned, believing that AI could disrupt emergency response within the next six months.
Role of AI companies and government
AI developers should stress-test their systems before deployment to evaluate the potential risks to emergency response and to implement safety features that reduce AI’s susceptibility to malicious use. They also should conduct research to fully understand how these advanced “black box” systems work. AI can’t be safe if the engineers who are creating it can’t explain its behavior.
The government has a role, too. Federal rules have been proposed to require AI companies to report on their safety testing. Bipartisan legislation was introduced to ensure that AI systems are evaluated, in particular, for their potential to contribute to chemical, biological, radiological and nuclear threats. So far, however, despite broad public support for regulating AI, there are no safety or even transparency requirements for advanced AI development.
What comes next
The AI companies talk a big game about where the technology is headed. No one knows for sure what changes AI will bring in the near future, but big changes are coming.
Fundamentally, resilience to AI’s threats to emergency response requires action by public safety officials. Chiefs, sheriffs, 9-1-1 directors and front-line personnel must adapt to the evolving threat environment. Someday, could that mean personnel won’t respond to a call unless it’s been validated and deemed likely legitimate? There will be difficult questions to answer.
Being prepared doesn’t mean anticipating every scenario. First responders are trained to apply their knowledge and experience in novel situations. However, AI introduces an unprecedented degree of unpredictability, one that evolves faster than traditional threats and has the potential to reshape the entire landscape of emergency response. The question is not whether AI will be used to disrupt our systems but when and how.
The stakes are high, but so is our capacity to adapt. Now is the time to lay the groundwork for AI resilience in emergency response, before false alarms become the least of our worries.

Mark Reddish
Mark Reddish is the director of external affairs at the Center for AI Policy. He also is certified as a master firefighter and EMT.