
AI and the New Frontier of Cognitive Safety

Beyond physical hazards, occupational health and safety must now address "cognitive surrender"—the hidden erosion of human judgment in the age of AI.

Occupational health and safety (OHS) has traditionally focused on visible risks: physical hazards, chemical exposure, ergonomic strain and environmental conditions.

Regulatory frameworks, training systems and workplace protocols have evolved to mitigate these risks with increasing precision. In Canada, responsibility for OHS is shared across federal and provincial systems, with agencies coordinating hazard identification, employee health assessment and workplace safety standards.

But a new category of risk is emerging—less visible, less regulated and potentially more pervasive: the erosion of human cognitive capability under conditions of artificial intelligence (AI) assistance.

This is not a future concern. It is already measurable.

For decades, OHS has operated under a clear assumption: that human judgment is the final safeguard in any system. Even in highly automated environments, workers are expected to monitor outputs, verify decisions and intervene when necessary.

That assumption is now under pressure.

Recent research from the Wharton School indicates that a significant proportion of professionals—over 70% in some conditions—accept AI-generated outputs without sufficient verification, even when those outputs are incorrect. This effect intensifies under time pressure.

At the same time, emerging academic work involving Carnegie Mellon University, University of Oxford, Massachusetts Institute of Technology and the University of California, Los Angeles shows that even short exposure to AI assistance—10 to 15 minutes—can reduce persistence and degrade independent problem-solving ability.

Taken together, these findings point to a new workplace vulnerability: cognitive dependency.

Traditional workplace hazards are external. They can be measured, monitored and mitigated through engineering controls, protective equipment and procedural systems.

Cognitive hazards, by contrast, are internal. They operate within the worker’s decision-making processes: reduced critical evaluation, over-reliance on automated outputs, diminished attention and vigilance, and loss of confidence in independent judgment.

These effects are subtle. They do not produce immediate injury. But over time, they can weaken the very capacities that OHS systems depend on: awareness, discernment and responsible action.

In safety-critical environments—healthcare, energy, transportation and manufacturing—the implications are significant.

The term “cognitive surrender” has begun to describe this phenomenon: the gradual relinquishing of human judgment in favour of machine-generated outputs.

Unlike earlier forms of automation, AI systems generate responses that appear coherent, confident and contextually appropriate—even when incorrect.

Workers are not simply using tools. They are interpreting outputs that simulate expertise.

The risk is not that AI replaces human decision-making entirely. The risk is that it subtly reduces the need to think.

OHS has long recognized that human factors—fatigue, stress, distraction—contribute to workplace incidents. Frameworks exist to address these risks.

AI introduces a different dynamic. It reduces perceived effort, lowers vigilance, increases confidence without improving accuracy and shifts responsibility in ways that make accountability less clear.

This creates a gap between perceived safety and actual safety.

From an OHS perspective, this is critical. Systems that appear efficient may be quietly degrading the human capabilities required to manage risk effectively.

In 2002, baseball underwent a transformation known as “Moneyball,” when teams began measuring overlooked indicators of performance and discovered that traditional metrics had been misjudging value.

A similar shift may now be required in workplace safety.

Organizations are investing heavily in AI systems, automation infrastructure and digital workflows. Yet one critical variable remains largely invisible: human capability under AI conditions.

OHS may need to identify and track new indicators of cognitive health and agency.

Emerging approaches suggest several possibilities: verification behaviour (whether workers check AI outputs), decision latency (whether decisions are faster but less rigorous), persistence (whether individuals continue problem-solving without AI) and confidence calibration (whether confidence aligns with accuracy).
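To make these indicators concrete, the sketch below shows one way an organization might summarize them from logged worker–AI interactions. It is illustrative only, assuming a hypothetical logging schema (the field names, the idea of a “calibration gap” and the sample values are assumptions, not an established OHS standard or a published measurement protocol).

```python
# Illustrative sketch only: a hypothetical schema for tracking the four
# cognitive indicators named above. Field names and metrics are assumptions,
# not an established OHS standard.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Interaction:
    verified_output: bool      # did the worker check the AI output?
    decision_seconds: float    # time from AI output to final decision
    solved_without_ai: bool    # persisted to a solution when AI was unavailable
    stated_confidence: float   # self-reported confidence, 0.0 to 1.0
    was_correct: bool          # whether the final decision turned out correct


def cognitive_indicators(log: list[Interaction]) -> dict[str, float]:
    """Summarize the four indicators for one worker or team over a period."""
    return {
        # verification behaviour: share of AI outputs that were checked
        "verification_rate": mean(i.verified_output for i in log),
        # decision latency: rough median time to decide after seeing AI output
        "median_decision_seconds": sorted(i.decision_seconds for i in log)[len(log) // 2],
        # persistence: share of tasks completed when AI assistance was unavailable
        "persistence_rate": mean(i.solved_without_ai for i in log),
        # confidence calibration: mean confidence minus actual accuracy;
        # a large positive gap signals overconfidence
        "calibration_gap": mean(i.stated_confidence for i in log)
                           - mean(i.was_correct for i in log),
    }


if __name__ == "__main__":
    sample = [
        Interaction(True, 42.0, True, 0.7, True),
        Interaction(False, 12.5, False, 0.9, False),
        Interaction(False, 15.0, True, 0.8, True),
    ]
    print(cognitive_indicators(sample))
```

In practice, such figures would only be meaningful if collected transparently and interpreted at the level of systems and workflows rather than used to score individual workers.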

These are not traditional OHS metrics, but they may soon become essential.

They point toward a new layer of safety governance—one that complements physical and procedural systems.

If cognitive capability is a risk factor, it must become a training priority.

Current workplace training emphasizes compliance, technical skills and process adherence. Less attention is given to critical thinking under AI conditions, awareness of over-reliance, maintaining independent judgment and reflective decision-making.

This is not simply digital literacy. It is cognitive resilience.

Workers must understand not only how to use AI tools, but how those tools shape their thinking.

Technology does not operate in isolation. Organizational culture determines how it is used.

If speed is prioritized above all else, workers may feel pressure to accept AI outputs without question. If accountability is unclear, responsibility becomes diffuse.

Organizations that emphasize verification, encourage questioning and maintain clear responsibility structures are better positioned to mitigate these risks.

In this sense, OHS intersects with leadership and organizational design.

The emergence of AI does not replace existing OHS priorities. Physical hazards and environmental risks remain critical.

But the scope must expand.

The next phase of workplace safety will require integration across three layers: physical safety, procedural safety and cognitive safety.

Without this integration, organizations risk advancing technologically while regressing in human capability.

OHS has always evolved in response to new risks. AI represents the next such shift. The challenge is not only technological. It is human.

If the goal of OHS is to ensure safe and effective performance, then the capacity to think, judge and act independently must remain central.

That capacity can no longer be assumed. It must be understood, supported and measured.

Only then can workplace safety keep pace with the systems that are reshaping it.
