
As Employers Push AI Adoption, EHS Faces New Pressure to Use Technology Safely and Responsibly

Companies are rapidly making AI use an expectation for employees, but the shift brings new challenges for EHS professionals who must balance productivity gains with protecting sensitive personal, operational, and proprietary information.

Across many industries, a clear shift is underway: companies increasingly expect employees to use AI in their daily work. The message is simple and often repeated in different forms by senior leaders – if you are not using AI, your competitors are, and they are working faster and more efficiently because of it. AI is no longer a side experiment or an optional gadget; it is rapidly becoming part of normal, expected work, much like email or spreadsheets once did.

One clear example comes from Meta, the parent company of Facebook, Instagram, and WhatsApp. According to an internal memo reported by Business Insider, starting in 2026 Meta will use a new metric in performance reviews called “AI-driven impact,” rating employees on how well they use AI to do their work. Already in 2025, those who show “exceptional AI-driven impact” will receive extra recognition, and Meta has created an internal “AI Performance Assistant” to help them write self-reviews (https://www.businessinsider.com/meta-ai-employee-performance-review-overhaul-2025-11).

Other major employers, including Google and Microsoft, have openly told their teams that AI adoption is becoming a baseline expectation (https://www.businessinsider.com/google-employees-use-ai-or-get-left-behind-gemini-2025-8). As a result, the pressure to incorporate AI into everyday work is steadily spreading into non-technical functions, including environment, health, and safety (EHS) roles. At the same time, however, the same organizations warn employees to be extremely careful about what information they upload into AI systems, whether public or paid.

Even robust, paid AI platforms are still cloud services. They store the history of prompts and answers, know which account is using the system, and in theory can be affected by large-scale breaches, technical failures, misconfigurations, or legal requests. Free public models may introduce additional risk if their terms allow data to be reused for training or analytics. In practice, many users simply click “Agree” on the terms of service without reading the details, and may not realize they have given the provider the right to analyze or reuse their uploaded content. No provider can offer an absolute security guarantee. This means the safest protection for truly sensitive information is never to upload it at all.

A strict “zero-risk” approach, similar to what is used for radiation or certain chemical exposures, does not directly apply here. If an organization tries to reduce the risk of data exposure to zero by banning AI, it will likely fall behind competitors who are willing to accept some risk in a structured and well-regulated way. For EHS professionals, this dual message — “use AI actively” yet “do not expose sensitive information” — creates a complicated but navigable challenge. The better strategy is to clearly define what is truly critical and must never be uploaded – sensitive personal data, trade secrets, unique equipment and processes, real contract prices – and what can be transformed into a safe form through anonymization and generalization.

At this point, many EHS professionals make a common mistake: they assume that EHS work does not involve “real secrets,” only safety incidents, training data, and compliance reports. In reality, EHS touches at least two important categories of sensitive information: personal sensitive information about employees and corporate sensitive information about processes and assets.

Personal sensitive information, or personally identifiable information (PII), includes any data that, if disclosed, could harm an individual, expose them to discrimination or fraud, or violate their privacy. Obvious examples like passport details, Social Security numbers, health insurance numbers, driver’s license details, or banking information are often handled by HR rather than EHS. But EHS departments routinely receive medically relevant information as part of hearing conservation programs, audiometric test results, respiratory medical evaluations, injury and illness reporting, incident investigations, and discussions about disability or mental health in the context of returning to work. All of this is sensitive by nature.

In the United States, the key reference for defining and protecting PII is NIST Special Publication 800-122, the “Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)” (https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-122.pdf).

It is not a law but federal guidance that many agencies reference, including OSHA and EPA. It explains what counts as PII and what measures are recommended to protect it, such as access control, encryption, and data minimization.

For EHS, the practical implication is straightforward. If you are uploading an incident description to an AI tool to improve the wording, or using AI to help structure a root cause analysis, or analyzing training data, the materials you upload should not contain names, internal IDs, signatures, or other direct identifiers.

The safest way to work with AI on such content is to create a sanitized copy where employees are replaced with neutral labels such as “Employee A,” “Operator 1,” or “Supervisor,” and where medical details are reduced to only what is strictly needed for the analysis.
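As a rough illustration, part of that sanitization step can be scripted. The sketch below is a minimal Python example, not a complete solution: the name list, the “Employee A/B” labels, and the “ID-” badge format are illustrative assumptions, and a person should always review the sanitized copy before it is uploaded anywhere.

    import re

    def sanitize(text, names):
        # Replace each known employee name with a neutral label: Employee A, Employee B, ...
        for i, name in enumerate(names):
            text = re.sub(re.escape(name), "Employee " + chr(ord("A") + i), text)
        # Mask internal-ID-style tokens such as "ID-10482" (a hypothetical badge format)
        return re.sub(r"\bID-\d+\b", "[ID]", text)

    report = "John Smith (ID-10482) reported ringing in his ears after the night shift."
    print(sanitize(report, ["John Smith"]))
    # Prints: Employee A ([ID]) reported ringing in his ears after the night shift.

Pattern matching like this only catches direct identifiers; indirect clues are harder, as the next point shows.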

In a small facility, even a job title can be identifying if there is only one person in that role.

I once encountered this at my plant. When one employee sustained a very characteristic injury, I drafted a safety communication in the usual way, taking out the person’s name and any direct identifiers. My EHS supervisor pointed out that, in a site of this size, the combination of the story, the type of injury, and the job context still made the worker clearly identifiable. It could easily feel like unwanted public attention for that person. In the end, we rewrote the communication as a general warning about the potential hazard and the preventive measures, without mentioning the individual incident at all. It was a useful reminder that anonymization is not just about deleting names, but about making sure a real person cannot be reconstructed from the remaining details.

The second major category is corporate sensitive information, often referred to in practice as Non-Disclosure Agreement (NDA) content. This is not a legal term from any statute, but a working label for any content that may not be shared externally under a Non-Disclosure Agreement or internal confidentiality rules. In manufacturing and engineering environments, this typically includes anything that reveals how the production process is designed and why it gives the company a competitive advantage: special fixtures and jigs, proprietary tooling, unique line configurations, internal engineering solutions, R&D lines and lab setups, exclusive processes, and unique materials or formulations.

Here it is important to distinguish between different types of equipment and images. As a rule, photos or descriptions of unique, specially designed equipment that is part of the core production process should never be uploaded to an AI system. The same caution applies to any image that clearly exposes a proprietary layout, a non-standard process, or details of internal technology.

However, there are also neutral, generic situations where using images can be acceptable. For example, a standard lathe located in a mechanical workshop and not directly involved in the production of a proprietary product can be used as an illustration for a general risk assessment or safety training, provided that the frame shows nothing sensitive around it. Before uploading such an image, someone should carefully check what else is visible: room layout plans on the wall, process flow diagrams, labels and internal tags, unique fixtures, specialized tools, inventory locations, or any other elements that might unintentionally reveal how the plant is organized. If the background is clean and the equipment itself is not unique or part of a critical process, a photo like that may be acceptable to use. In other words, generic equipment from a non-sensitive workshop can be uploaded after a careful “frame check,” while unique production equipment and process-revealing images are off limits.

The same logic applies to financial, legal, and contractual documents. A contract with unique pricing for waste disposal, for example, is commercially sensitive. It would not be acceptable to upload the original document with real customer names, prices, volumes, or negotiated conditions. A neutral template, on the other hand, with prices and names removed or masked, can often be used safely if the purpose is to ask AI for help with structure, language, or comparison of standard clauses.
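The masking idea can be sketched in code as well. In this hypothetical Python example, the counterparty name, the currency pattern, and the volume units are all assumptions chosen for illustration; a masked contract still deserves a legal or commercial review before it leaves the organization.

    import re

    def mask_terms(text, parties):
        # Replace named counterparties with a neutral placeholder
        for party in parties:
            text = re.sub(re.escape(party), "[COUNTERPARTY]", text)
        # Mask dollar amounts (e.g. "$85.50") and quantities (e.g. "120 tons")
        text = re.sub(r"\$\s?\d[\d,]*(?:\.\d+)?", "[PRICE]", text)
        return re.sub(r"\b\d[\d,]*\s?(?:tons?|kg|lbs?)\b", "[VOLUME]", text)

    clause = "Acme Disposal will collect 120 tons of mixed waste monthly at $85.50 per ton."
    print(mask_terms(clause, ["Acme Disposal"]))
    # Prints: [COUNTERPARTY] will collect [VOLUME] of mixed waste monthly at [PRICE] per ton.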

Before using any public AI system (ChatGPT, Copilot, Gemini, and others), employees should also verify two things: their organization’s policy on AI use and whether a private or enterprise AI platform is already available. Many companies now provide enterprise-grade AI tools that keep all data in a protected silo and do not use customer information to train future models. If such a system exists, it should always be the first choice because it significantly reduces the risk that organizational data could be leaked, misused, or exposed through a system failure. Public AI tools, even paid versions, remain cloud services with terms that vary across providers, so aligning AI use with internal policy is a critical part of responsible EHS practice.

With clear boundaries in place, EHS professionals can use AI as a safe and useful tool. AI can help improve the wording of anonymized incident summaries, suggest structure for procedures based on existing company rules, summarize non-sensitive data trends, and draft training materials using public regulations instead of confidential internal documents.

If we focus only on security, the strictest advice would be “do not upload anything at all.” This would remove the risk of data exposure, but it would also remove most of the benefits. In practice, companies that refuse to work with AI will be slower and less flexible than those that use it in a controlled way.

AI can speed up data processing, reveal patterns in incident statistics, support report writing, and generate ideas for risk controls. In a world where companies like Meta link career growth to “AI-driven impact,” safe and smart AI use is becoming part of the EHS skill set, alongside OSHA requirements, NIST guidance, and internal standards. The goal is not zero risk, but reasonable, managed risk.
