
Five Pitfalls That Can Derail AI-Powered Safety Programs

Computer vision and AI safety systems promise real-time hazard detection, but organizations must avoid common implementation pitfalls related to culture, worker trust, privacy and cross-functional collaboration.

The promise is compelling, and for some, maybe even a little scary: artificial intelligence (AI) that can monitor your facility 24/7 and identify unsafe behaviors and conditions in real time to help prevent injuries before they happen. Computer vision (CV), a type of AI that enables computers to interpret, analyze and extract meaningful information from digital images, videos and other visual inputs, and to make predictions from them, represents one of the most significant advances in workplace safety technology in decades.

Early adopters are reporting dramatic reductions in recordable injuries and near-misses, supported not only by improvements in lagging indicators like total recordable incident rate (TRIR), but also by measurable shifts in leading indicators such as safer behaviors, hazard corrections and supervisor engagement.

Yet for every success story, there are cautionary tales of implementations that never gained traction. The technology works, but the deployment fails. Fortunately, through extensive experience supporting industrial facilities implementing AI-powered safety systems, I’ve observed that most failures follow predictable patterns.

The good news? These pitfalls are entirely preventable - if you know how to spot them early and steer clear before they derail your progress.

Pitfall 1: Implementing AI Without Engaging Frontline Workers

The Problem: The biggest implementation issue I've seen is treating an AI deployment as a top-down technology project rather than a cultural initiative. Because CV systems can be deployed quickly using existing camera infrastructure, organizations may rush to implementation without fully considering the workers it will affect. The result is predictable: disengaged employees, supervisors unprepared to translate insights into action and, ultimately, poor data quality.

Think about it purely from the worker's perspective: A new system appears that monitors their movements, generates alerts about their behavior and creates records of their actions. Without proper context and involvement, this may feel like surveillance, not safety.

The Solution: Successful implementations involve workers from day one. That means transparent communication about what the system does and doesn't do; clear expectations for how data will be used (and, perhaps most importantly, how it won't be used); engagement with safety committees and employee resource groups; explicit sharing of how the initiative benefits workers; and ongoing opportunities for feedback.

When workers understand that the AI can't and won't be used for individual performance surveillance, and that the focus is on improving processes rather than evaluating individuals, fear gives way to appreciation for a tool that strengthens continuous improvement.

Pitfall 2: Overlooking AI's Cultural Impact

The Problem: Organizations may believe AI alone will solve safety challenges. They deploy the technology, see some initial results, then watch engagement decline as workers realize the system is generating more data that never translates into meaningful organizational change.

Even sophisticated AI can't build a safety culture on its own. It operates in isolation, doesn't collaborate and can't reflect on what the data means for your specific operation. This is the trap: thinking technology replaces the human work of culture-building rather than amplifying it.

The Solution: Embed AI into your existing culture and continuous improvement processes from day one, so insights translate into sustained action. Establish meaningful key performance indicators that drive the leading actions you want to see sustained. Measure and communicate successes regularly. Discuss opportunities for improvement openly, including frontline team members in these conversations. Create regular touchpoints for feedback and recalibration.

Most importantly, engage workers continuously, not just at launch. Make reviewing AI insights part of daily workflows - a safety walk followed by a review of the data, with at least one action item assigned and resolved each day, is a great way to sustain momentum. Finally, when a goal is achieved, be sure to celebrate the win as a team.
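
To make the daily review concrete, here is a minimal sketch in Python of how raw CV detections might be rolled up into the kind of leading indicators a team could review on a safety walk. The event fields, categories and metrics are hypothetical illustrations, not any particular vendor's schema.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

# Hypothetical shape of a detection emitted by a CV safety system;
# real field names will vary by vendor.
@dataclass
class HazardEvent:
    day: date
    category: str    # e.g. "awkward bend", "blocked egress"
    corrected: bool  # was a corrective action assigned and closed?

def daily_leading_indicators(events: list) -> dict:
    """Roll raw detections up into leading indicators for the daily review."""
    total = len(events)
    corrected = sum(1 for e in events if e.corrected)
    by_category = Counter(e.category for e in events)
    return {
        "hazards_detected": total,
        "correction_rate": corrected / total if total else 1.0,
        "top_categories": by_category.most_common(3),
    }

# Example: review the last 24 hours of detections on the daily safety walk.
today = date.today()
events = [
    HazardEvent(today, "awkward bend", corrected=True),
    HazardEvent(today, "awkward bend", corrected=False),
    HazardEvent(today, "blocked egress", corrected=True),
]
print(daily_leading_indicators(events))
```

The point of a rollup like this is that the team reviews trends and correction rates, never a list of named individuals, which reinforces the process-over-person framing described above.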

Pitfall 3: Using Data to Blame Rather Than Improve Systems and Processes

The Problem: Here's a fundamental truth that safety thought leader Todd Conklin articulates well: "Error is real and normal and we should not design work systems that count on workers not making errors and mistakes, that count on workers being perfect."

When organizations get access to comprehensive AI-generated data about unsafe behaviors, the temptation is strong to use it punitively. This is treating the symptom, not the cause. When we lose sight of the fact that workers often reveal system vulnerabilities rather than create them, we shift into blame - and culture deteriorates through fear, disengagement and resentment.

The Solution: Use data to improve systems, not punish individuals. This requires training leaders in coaching and root cause analysis as opposed to a punishment mindset. When the data identifies that workers are repeatedly bending awkwardly to handle materials, the question shouldn't be, "How do we get workers to bend correctly?" It should be, "Why is our process requiring workers to bend this way in the first place?"

Focus on process improvements first, behavioral coaching second. Establish consistent feedback loops to track whether your interventions are actually working. When workers see that identified risks lead to better facility layouts, improved equipment access or streamlined processes, they recognize the technology as an ally in making their jobs safer and easier.

Pitfall 4: Operating in Silos Rather Than Cross-Functionally

The Problem: AI safety implementations often start in the environmental health and safety department and stay there. But workplace safety intersects with operations, human resources, risk management, facilities, continuous improvement, information technology and more. When these functions don't collaborate, you get fragmented effort, missed opportunities and diminished outcomes.

Siloed safety implementations struggle with conflicting priorities between departments, duplicated efforts and wasted resources, lack of executive-level visibility into results and difficulty scaling beyond a pilot. Operations sees safety initiatives as slowing production. IT sees them as security risks. HR sees them as employee relations issues. Nobody sees the complete picture.

The Solution: Form a cross-functional project team from the start. Include representatives from safety, operations, continuous improvement, HR, IT and frontline leadership. Establish shared success criteria that matter to all stakeholders - not just injury rates, but also operational efficiency, employee retention, total cost of risk, quality and productivity metrics.

Create regular forums where different functions can share insights from the leading indicator data and collaborate. Operations might spot efficiency opportunities in the same data that safety uses to identify risks. HR might see patterns related to training effectiveness. When departments collaborate around shared data, AI becomes a tool for organizational improvement rather than just safety compliance.

Pitfall 5: Ignoring Privacy and Data Security from the Start

The Problem: Privacy and information security concerns aren't an afterthought to be addressed only when workers complain or when the IT team is needed to sign off on a project. They're fundamental to whether your AI safety implementation will succeed or fail. Organizations that don't proactively address privacy and data security face regulatory issues, union grievances and worker resistance that can derail even technically sound deployments.

Privacy problems manifest as worker complaints and union grievances, regulatory scrutiny or violations, inability to scale across regions with different privacy laws and an erosion of trust that damages other safety initiatives. Once workers believe their privacy has been violated, rebuilding trust becomes exponentially harder. Likewise, failing to involve IT early to address information security concerns can create distrust and delay, or completely stall, an implementation.

The Solution: Build worker privacy protections into your AI system architecture from day one, not as optional features added later. Worker anonymization should be technically enforced, so that individual identification is possible only for those who truly require it. Data governance policies should clearly define what information is collected, how long it's retained, who can access it and how it will (and won't) be used. Identify and involve stakeholders from your IT team in the earliest stages of the project to ensure a smooth integration.
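
As one illustration of what "technically enforced" can look like, the sketch below replaces worker identifiers with keyed hashes at the point of ingestion and applies a retention window. The key handling, 30-day window and field names are assumptions for the example; the actual controls should come from your written data governance policy.

```python
import hashlib
import hmac
import os
from datetime import datetime, timedelta, timezone

# Hypothetical governance settings; real values belong in a written,
# agreed-upon data governance policy.
RETENTION_DAYS = 30
# Secret key held outside the analytics system, so hashes can't be
# reversed by anyone with database access alone.
PSEUDONYM_KEY = os.environ.get("SAFETY_PSEUDONYM_KEY", "change-me").encode()

def pseudonymize(worker_id: str) -> str:
    """Replace a worker identifier with a keyed hash at ingestion,
    so downstream analytics never see who triggered a detection."""
    return hmac.new(PSEUDONYM_KEY, worker_id.encode(), hashlib.sha256).hexdigest()[:16]

def is_expired(recorded_at: datetime) -> bool:
    """Retention check, intended for a scheduled purge job."""
    return datetime.now(timezone.utc) - recorded_at > timedelta(days=RETENTION_DAYS)

# Example: an incoming detection is stripped of identity before storage.
event = {
    "subject": pseudonymize("badge-4471"),  # stored in place of the badge ID
    "category": "missing PPE",
    "recorded_at": datetime.now(timezone.utc),
}
print(event["subject"], is_expired(event["recorded_at"]))
```

Enforcing anonymization in the pipeline itself, rather than by policy alone, is what lets you honestly tell workers that individual identification isn't available to supervisors reviewing the data.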

Communicate these protections transparently and repeatedly. Engage your IT team, union representatives and employee advocacy groups early to address concerns before they become barriers. Ensure compliance with relevant regulations like GDPR and jurisdiction-specific privacy laws. Make privacy protection visible in how you talk about and use the system.

Making AI Safety Work

By providing continuous workplace visibility, CV AI has enormous potential to prevent injuries and save lives. The technology works, but technology alone doesn't create a safety culture or drive organizational change. That requires thoughtful implementation that engages people, respects privacy, builds trust and focuses on systems rather than blame.

The organizations seeing the most dramatic results from AI safety implementations aren't necessarily those with the most sophisticated programs or cutting-edge technology. They're the ones that avoided these five pitfalls by treating an AI deployment as a change management initiative that happens to involve advanced technology, rather than a technology project that happens to affect people.

This article originally appeared in the April/May 2026 issue of Occupational Health & Safety.
