AI for Today’s Fire Service: What Worries Firefighters & What Fire Chiefs Can Do About It

Kevin Sofen explains why data quality, surveillance and governance are critical for ensuring that adoption is done correctly.
April 8, 2026
8 min read

Key Takeaways

  • If data quality, surveillance and governance aren’t assured, firefighters won’t trust AI’s use within their department.
  • Fire departments that use AI for records management, for example, should audit their data before they deploy AI, to ensure that messy data doesn’t result in flawed AI output.
  • Although fire department administrators might adopt AI to optimize performance, firefighters might assume that its use is punitive. Administrators must be explicit about how AI data will be used and let crews see what the AI sees (i.e., no hidden dashboards).

In early 2020, I was helping departments test virtual reality training platforms and drone systems. Then COVID hit, and within three weeks, the entire fire service did things that we never believed would happen: emergency operations centers went virtual overnight; chiefs ran incident command from home; and training moved online. Technology departments had been “considering” these things for years. All of a sudden, they got deployed in days.

We’re in a similar moment with artificial intelligence (AI). The difference? AI isn’t waiting for a crisis. It’s being embedded quietly, one system at a time: FDNY’s traffic modeling in collaboration with New York University; the AI call center in Copenhagen, Denmark; computer-aided dispatch (CAD) systems’ call-routing; records management systems that now are suggesting codes; EMS software that now is prefilling narratives. The fact is that most departments already have adopted AI but don’t realize it.

I’ve spent 14 years working with chiefs and firefighters on technology deployments, but I’m not a firefighter. I’m just a guy whom departments call to help to implement new technologies, and I keep hearing the same three concerns about AI: data quality, surveillance and governance. If the fire service doesn’t address these concerns now, it’ll end up with technology that technically works but that nobody trusts.

Data-quality problem

AI makes decisions that are based on data, and we all know that fire service data has been historically inconsistent, with incomplete records, miscoded call types, missing timestamps and CAD data that doesn’t sync. The fire service has lived with messy data, because humans work around it.

AI can run on bad data, but the bad outputs are magnified: When AI learns from flawed data, it amplifies those flaws with complete confidence. Crews see suggestions that don’t match reality and stop trusting the system.

Take EMS charting, for example. Several vendors market AI tools that auto-generate ePCRs. They promise time savings and huge efficiencies. Recently, I learned about a department where AI pre-checked respiratory distress boxes based solely on dispatch information before anyone was on scene. Medics started to accept those suggestions because doing so sped up their work. Six months in, the department’s medical director noticed that charting quality declined. The documentation looked complete but drifted from what was actually happening with patients. The AI made paperwork faster but made records less accurate: Garbage in, gospel out.

What should you do? Audit your data before you deploy AI. If your systems are messy, AI makes it worse. Insist on explainable AI. If the system can’t tell you why it made a recommendation, don’t use it. When crews override recommendations, capture why.

Learning from the mistakes is the best way to improve AI.
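The audit step can be sketched as a short script. This is a minimal illustration, not a standard: the field names, call-type codes and record layout below are hypothetical placeholders that a department would swap for its own CAD or RMS export.

```python
from collections import Counter

# Hypothetical code set and required fields; replace with your department's.
VALID_CALL_TYPES = {"FIRE", "EMS", "MVA", "HAZMAT", "SERVICE"}
REQUIRED_FIELDS = ["incident_id", "call_type", "dispatch_time", "arrival_time"]

def audit_incidents(rows):
    """Count the data-quality problems that poison an AI model:
    missing required fields, call types outside the approved code set,
    and arrival timestamps that precede dispatch."""
    issues = Counter()
    for row in rows:
        for field in REQUIRED_FIELDS:
            if not row.get(field, "").strip():
                issues[f"missing_{field}"] += 1
        if row.get("call_type") and row["call_type"] not in VALID_CALL_TYPES:
            issues["miscoded_call_type"] += 1
        d, a = row.get("dispatch_time", ""), row.get("arrival_time", "")
        if d and a and a < d:  # ISO-8601 strings compare chronologically
            issues["arrival_before_dispatch"] += 1
    return issues

sample = [
    {"incident_id": "1", "call_type": "FIRE",
     "dispatch_time": "2025-01-01T10:00", "arrival_time": "2025-01-01T10:06"},
    {"incident_id": "2", "call_type": "FRIE",   # miscoded call type
     "dispatch_time": "2025-01-01T11:00", "arrival_time": ""},  # missing timestamp
]
report = audit_incidents(sample)
```

The same counter pattern works for capturing override reasons: log each override as a row, tally the reasons, and review the most common ones quarterly.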

Then test—actually test—and test again. A fire service software developer whom I know drops Easter eggs, or hidden surprises, into his AI outputs to see whether users catch them. Most users don’t notice them. We already are trusting AI outputs without reading them carefully.
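That Easter-egg testing can be approximated with a short harness: seed a sentinel phrase into a random sample of AI drafts, then measure what fraction of the seeded drafts reviewers actually flag. The sentinel string and sampling rate here are invented for illustration.

```python
import random

SENTINEL = "[REVIEW-CHECK-7Q]"  # hypothetical marker no real narrative contains

def seed_easter_eggs(narratives, rate=0.1, rng=None):
    """Append the sentinel to a random sample of drafts.
    Returns (text, was_seeded) pairs; careful reviewers should
    flag every seeded draft before sign-off."""
    rng = rng or random.Random(42)
    seeded = []
    for text in narratives:
        if rng.random() < rate:
            seeded.append((text + " " + SENTINEL, True))
        else:
            seeded.append((text, False))
    return seeded

def catch_rate(seeded, flagged_indices):
    """Fraction of seeded drafts that reviewers caught (None if none seeded)."""
    planted = {i for i, (_, egg) in enumerate(seeded) if egg}
    if not planted:
        return None
    return len(set(flagged_indices) & planted) / len(planted)
```

A catch rate well below 1.0 is the signal the developer is looking for: reviewers are signing off without reading.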

Consider how we use GPS today. How many of us blindly follow navigation? That’s where we’re headed with AI, but we can’t trust it fully yet. Five years from now? We might trust it too much. We’ll stop questioning, get complacent and miss critical errors, not because the technology failed, but because we stopped being human in the loop.

Surveillance problem

AI tracks everything: response times, decision patterns, documentation speed, who picks up overtime, who calls in sick. All of it is logged, analyzed and turned into metrics.

There’s a fine line between helping crews to improve and looking for reasons to discipline them. Right now, a lot of firefighters believe that AI is on the wrong side.

I’ve worked with departments on this. Management sees “performance optimization.” Members see Orwellian Big Brother. Usually, the intention is good. The AI vendor promises to identify training gaps and improve efficiency. However, when firefighters believe that AI is watching to catch mistakes, they game the system or avoid it entirely.

I worked with one department that installed cameras and analytics on its rigs for better training content. The department had the best of intentions. Within weeks, crews assumed that every call was being reviewed for discipline, not learning. The technology didn’t create a surveillance culture. Instead, it amplified fears that already were there.

What should you do? Be explicit about what AI data are used for. Training and improvement only? Say so clearly and put it in writing. Have discussions about this at the kitchen table.

Keep AI out of discipline unless there’s a clear policy to which everyone agreed beforehand. You can’t use AI as both coaching and enforcement, so pick one. Give crews control. Let them see what the AI sees with no hidden dashboards.

If firefighters believe that AI is there to catch them messing up, they never will trust it to help them to succeed.

Governance problem

AI adoption is occurring ad hoc, with a patchwork of vendors and platforms and no coherent policy.

I’ve talked with departments where an EMS division used a “free” AI documentation helper for months before the administration knew that it existed. No one reviewed the contract or data storage. This great new AI tech showed up because an early tech adopter thought it was a good idea, which, in theory, it was.

Here is what makes this situation dangerous. If the tool is free, you are the value. Your data train the next version. In some states, every AI input and output is a public record. Free AI tools with agency data? You might not comply with records management laws.

With any free tool, it should be obvious, but I’ll spell it out: no patient data; no victim data; nothing personally identifiable in a free AI tool. Period. Hard stop.

After we navigate “freemium” models and departments start to pay, who decides when tools move from “suggested” to “required”? Who’s liable when AI makes a bad call? What happens when an AI tool auto-fills an EMS narrative and gets it wrong? These aren’t hypothetical scenarios. They’re happening now and require your attention.

What should you do? Create a small, cross-functional AI governance group: chief, union representative, IT, training and operations. Meet quarterly. Vet tools before they go live.

Draw red lines on what AI never will decide alone. Life safety? Deployment? Discipline? Write it down. Require “human in the loop” by default.
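The red lines and the human-in-the-loop default can be encoded as a simple policy gate. This is a hypothetical sketch, not a real product API; the category names simply mirror the examples above.

```python
# Categories in which AI never decides alone (per written policy).
RED_LINES = {"life_safety", "deployment", "discipline"}

def apply_ai_decision(category, ai_recommendation, human_approved=False):
    """Enforce human-in-the-loop by default: red-line categories
    require explicit human sign-off before a recommendation is applied."""
    if category in RED_LINES and not human_approved:
        return {"status": "held_for_review", "recommendation": ai_recommendation}
    return {"status": "applied", "recommendation": ai_recommendation}
```

The point of writing it this way is that the default is refusal: forgetting to pass sign-off holds the decision for review rather than letting it through.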

Why this matters

AI will change the fire service as much as radios and SCBA did. That said, AI is different: not because it replaces firefighters, but because it changes what firefighters spend their time doing.

COVID forced the fire service’s hand overnight. This AI evolution is different. The fire service has time to get this right if it takes change management seriously.

AI done right disappears. It routes the call, fills in the narratives and handles the scheduling. You stop noticing that it even is there. AI done wrong becomes another war story about the time leadership bought something nobody asked for and wondered why crews fought it.

The departments that succeed won’t be the ones that have the fanciest tools and vendor swag. They’ll be the ones where firefighters actually trust the technology. That trust comes from three things that can be controlled right now: clean data, clear boundaries on surveillance and someone who’s in charge of what AI gets to decide.

The AI train won’t be stopped. The technology is too useful, and the capital markets are too invested. However, departments do get to decide where it belongs, what the rules are, and whether it makes the job better or just busier.

COVID showed that the fire service can move fast if it must. AI is teaching that departments should move deliberately when they can. The choice isn’t whether to adopt; it’s whether the adoption is going to be done correctly. Make it work for the best profession in the world on your terms.

Product Spotlight

Automated Record-Keeping & Reporting

Fireproof Tech’s Guided NERIS Entry App takes the pain out of completing incident reports. Algorithms that are powered by artificial intelligence (AI) populate reports based on members’ narratives. A chatbot can query incidents and provide deeper analysis and insights. An MCP Connector connects incident data to powerful tools, such as ChatGPT and Claude. Customizable validations ensure that departments capture data the way that they desire.

AI-Powered Documentation

First Due AI, which is built into the First Due platform, brings practical, mission-ready intelligence directly into daily workflows. From voice-powered incident documentation and automated QA/QI reviews to smart scheduling and natural-language reporting, First Due AI helps to reduce administrative burden, lower costs and improve outcomes. By automating documentation review across every ePCR, First Due AI improves report accuracy, uncovers protocol gaps and performance trends in real time, increases reimbursement capture and gets units back in service faster.

Sonar System

AquaEye’s AquaEye Pro handheld sonar system quickly locates missing persons underwater. The device sends sonar pulses into the water and uses AI to analyze returning echoes, to identify objects that match a human body. By rapidly indicating a victim’s direction and depth, AquaEye Pro shortens search times, reduces diver exposure to dangerous water and makes rescue more likely. Paired with Command Hub, search results are overlaid on a map, to enable a fast, coordinated rescue over a larger search area.

About the Author

Kevin Sofen

Kevin Sofen is a practical innovator in emergency response technology who’s dedicated to advancing public safety through solutions in fire, rescue and EMS operations. With 14 years at W.S. Darley & Company, five years at the International Association of Fire Chiefs and direct partnerships with fire tech startups, he brings a unique perspective to the intersection of front-line operations and emerging technology. As founder and host of the “Smart Firefighting” podcast and the driving force behind Technology Summit International and Next Gen Tech Summit, Sofen builds high-impact platforms that connect industry leaders, subject matter experts and front-line responders.
