Why Warnings Fail

Expecting people to behave contrary to innate predisposition is futile. The better approach is to fit the design to likely behavior.

In theory, warnings are an important means of accident prevention. They reveal hidden hazards so the user of a product or facility can avoid injury. Unfortunately, warnings frequently, and some would say usually, fail to affect behavior.

The analysis of warning failure should provide insight for warning improvement. Some have examined deficiencies of the warnings themselves. For example, much research has tested the effects of format, shapes, colors, signal words, layout, etc. Laboratory testing of warning format is probably a popular focus, in part, because it is easy to do. Moreover, an effective, standard format would be the engineer's prized "silver bullet," the simple, comprehensive answer to a complex problem. However, the search for this particular silver bullet has proved elusive. While there are shelves of laboratory research recommending various format attributes, the data are conflicting and of questionable empirical validity1. Moreover, there is little real-world evidence2 that following guidelines for a standard format is important.

Of course, warnings should be legible, intelligible, and complete (based on thorough hazard analysis). Beyond this, however, the lack of compelling data suggests format and even content are likely minor variables and that the real causes of warning failure lie elsewhere. After all, if the user fails to notice the warning, does not consider risk at all, or thinks it not worth the trouble to read or comply with a warning, then its precise rendering is unlikely to affect the outcome. Even "good" warnings can be ineffective.

The real causes of warning failures are the same as those that determine any other behavior: human mental limitations and predispositions and the extraordinary ability of humans to adapt. Below, I discuss how these factors can induce warning failure. I divide the discussion into three general categories: perceived utility, adaptation, and risk underestimation/nonestimation.

Perceived Utility
Likelihood of a behavior usually depends on perceived utility. Users perform a mental arithmetic, determining net gain by subtracting the cost from the benefit. When confronted with a warning, users apply this calculation to two decisions; a simple numerical sketch of the arithmetic follows the two examples below.

1. The user decides whether to invest the effort in reading the warning. Mental costs increase when fonts are small, low in contrast, and/or italicized, and when insufficient white space causes lateral masking. Ironically, warnings are often rendered in all capitals to make them more conspicuous. In fact, all-capital text has very poor legibility and discourages users from reading.

Users also are unlikely to read long messages. Warnings should be as brief as possible, but length depends partially on the audience. If the person reading the warning is an experienced user, he probably already knows about the hazard and need merely be reminded. For novice users, the warning may require more detailed information. Lastly, likelihood of reading the warning increases with greater credibility and higher perceived risk, issues that I discuss below.

2. Warning compliance will probably block easy attainment of a desired goal, so the user must decide whether the cost of goal loss or increased effort is worth the gain in safety. The user is likely to consider whether there is an alternative means for reaching the goal. The lower the cost of switching to an alternative, the more probable the compliance.

For example, a nearby library displays "Do Not Use Cell Phone" signs scattered about, but it also sets aside a small room on each floor where cell phones may be used. The arithmetic favors compliance because the cost is low--the user can achieve his aims by walking a short distance. Conversely, costs would be greater, and compliance lower, if the user had to leave the building to use the cell phone. Warning effectiveness generally can be increased by understanding the user's goals and by providing a safer, alternative means for achieving that goal, or by providing an alternative goal.
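
To make the arithmetic concrete, here is a minimal sketch of the compliance decision as a cost-benefit comparison, using the library cell phone example. It is an illustration added here rather than a model taken from the article, and every function name and number in it is a hypothetical assumption.

```python
# Toy model of the compliance "mental arithmetic" described above.
# All values are hypothetical; they only show how switching cost changes the outcome.

def net_utility(benefit: float, effort_cost: float, expected_harm: float) -> float:
    """Perceived net gain of one course of action."""
    return benefit - effort_cost - expected_harm

GOAL_BENEFIT = 10.0   # value of completing the phone call
EXPECTED_HARM = 1.0   # perceived downside of ignoring the sign (small, uncertain)

# Ignore the sign and make the call on the spot.
ignore = net_utility(GOAL_BENEFIT, effort_cost=0.0, expected_harm=EXPECTED_HARM)

# Comply when the phone room is on the same floor: the alternative is cheap.
comply_nearby = net_utility(GOAL_BENEFIT, effort_cost=0.5, expected_harm=0.0)

# Comply when the user must leave the building: the alternative is expensive.
comply_far = net_utility(GOAL_BENEFIT, effort_cost=4.0, expected_harm=0.0)

print(f"ignore the sign:         {ignore}")         # 9.0
print(f"comply, room nearby:     {comply_nearby}")  # 9.5 -> compliance
print(f"comply, must leave bldg: {comply_far}")     # 6.0 -> non-compliance
```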

Risk Underestimation and Nonestimation
Even without the benefit of a Professional Safety subscription, Shakespeare was able to observe that "best safety lies in fear." Research has proven the Bard correct, demonstrating that users are more likely both to read and to comply with a warning if they perceive significant risk. However, risk perception is highly fallible. People may underestimate risk or, more commonly, simply fail to estimate risk at all.

1. Users rely on their direct observations. If a product contains an obvious hazard (sharp edges, flames, moving mechanical parts, etc.), then users probably will behave self-protectively even when no warning is present. They also will be more likely to seek and to read warnings and instructions for avoiding the hazard. Conversely, users who fail to perceive a hazard directly are less likely to notice the warning. Because the common purpose of warnings is to inform the user of hazards that are not open and obvious, the situation is a classic Catch-22: Warnings are least effective when most needed.

2. Users fail to consider the risks. When performing routine tasks, people do not usually consider possible risks. For example, one study3 found that 74 percent of accident victims had believed they were running no risk. Another study4 surveyed non-prescription drug users and concluded that "consumers portrayed their non-prescription use as a routine, taken for granted activity, relatively divorced from active consideration of risk."

Users learn that warnings are frequent but accidents are few. People who have repeatedly performed a task soon learn the contingencies, a behaviorist term for the relationship between response and outcome. Users are bombarded by warnings about hazards that never occur. People speed, ignore "no parking" and "no diving" signs, skip seatbelts, disregard medication warnings, etc., with impunity. Highways are frequently signed "Slow--Work Zone" and followed by miles of orange and black barrels but no sign of construction. Moreover, people often assume that the product contains a very large built-in safety margin.

One rule of behavioral contingencies is that immediate and certain consequences are more powerful in changing behavior than delayed and uncertain ones. This likely explains why signs such as "Out of Order," "Road Closed," and "Use Other Door" are among the most effective warnings: They signal negative consequences that are both immediate and certain. On the other hand, cigarette packages contain warnings about a far greater hazard, but one that is both delayed and uncertain. Writer Dave Barry has suggested cigarette smoking would end overnight if the package warnings said, "WARNING: cigarettes contain fat!" This quip might be good for a laugh, but it makes a valid point: Users comply more when the risk contingency (gaining weight) is immediate and likely. (A toy numerical illustration of this immediacy and certainty effect appears at the end of this section.)

The correlation between non-compliant behavior and actual hazard is often highly uncertain, so people become skeptical about warnings as a class of information. Moreover, warnings often have low credibility because users believe they may exist, not to promote safety, but from fear of litigation or from some "do-gooder" mentality that is simply trying to control them.

It is sometimes difficult to convince people in authority that more warning is not necessarily more effective warning. Users become less likely to read ever-increasing lists of warnings rendered in ever-decreasing print size. Further, frequent warnings for low probability hazards cause a "boy who cried wolf syndrome" that destroys credibility.

Overwarning also can mask contingencies and confuse the user about real and unlikely hazards. One diving accident occurred at a lake where a pier was marked with many "No Diving" signs. In fact, the water surrounding the pier was deep enough for safe diving in most places, as the people who regularly used the pier had discovered. They had routinely dived into the water with no difficulties and had learned there was no negative contingency between behavior and outcome. Unfortunately, a regular user dived at the one shallow location and suffered spinal injuries. He had misperceived the risk because of his experience and because the signs masked it: There was no distinction between the warnings marking the significant risk at the shallow location and those marking the minimal risk at deeper locations.

3. Experienced users develop a sense of control. They consequently perceive less risk because they believe the risk is controllable. If a sign says "No Diving," for example, the person least likely to comply is an experienced diver5 who believes he can control dive angle for safe entry into the water and avoid the hazard. The belief that the outcome is controllable translates to less fear of consequences.

4. The things that scare us and the things that kill us differ significantly. Even when considering risk, people often make inaccurate assessments. Human reasoning has several "cognitive shortcuts" that affect decision-making. One is "confirmation bias," the tendency to seek information that confirms preconceived belief and to avoid contrary evidence. Once a user believes there is little risk, he will tune out warnings.

The "availability heuristic," the tendency to make judgments based on the information most readily in mind, also impairs risk estimation. Because most people have little direct experience with significant accidents, they gain much of their risk estimation knowledge second-hand. The media focus on rare, dramatic, and exotic risks--new Asian and African diseases, terrorist bombings, nuclear power plant explosions, etc.--while largely ignoring the more mundane but significant hazards, such as driving. The avalanche of scary stories distorts the perception of risk that most people face in everyday life.

Adaptation
Perhaps the most important cause of warning failure is adaptation. Experience both removes risk as a consideration and makes the warning invisible. Learning the contingencies is one example of the way users change with repeated exposure to a product or environment. In addition, there are several other adaptation effects: visual routines, "inattentional blindness," and automatic/scripted behavior.

1. Experienced users develop "visual routines"--preprogrammed eye movement sequences. Experienced users become very precise at narrowly directing attention to task-relevant information from moment to moment as the task evolves. Attention usually moves in lockstep with the fovea, so information that is not located at or near a fixation point will fall in peripheral vision and away from the center of attention. The user is oblivious to information that is not located at precisely the right location at precisely the correct time. Warnings will be more effective if they are somehow integrated into the normal task operations6.

2. Users become "inattentionally blind." Even information located at the fixation point may go undetected. Failures to see information at the fixation point are so common that they have their own name: "look but fail to see" (LBFS) errors. By one account7, LBFS errors cause 11 percent of all automobile accidents.

These errors occur because humans receive far more sensory information than they can cognitively process. To perform efficiently, they learn to attend to relevant information and to filter out the rest. Users develop expectations and confirmation bias, so they become "inattentionally blind"8 to information that they have categorized as unimportant. Warnings are generally not relevant to task completion, especially after the user has learned the contingencies.

3. People adapt by switching from "controlled" to "automatic" behavioral modes. Beginners typically operate in a controlled mode, one that requires conscious thought and focused attention. Their behavior is slow and inefficient because decisions consume significant attention and effort. The user is in an information-seeking mode, searching for input that can help perform the task.

With experience, users switch to an automatic mode in which little conscious thought occurs. They have learned to efficiently filter out irrelevant sensory input and often fail to notice new or unexpected information. Automatic behaviors may be very rigid, as when a factory worker performs a repetitious manual task or a computer operator performs data entry. The behavior is often said to rely on "muscle memory" because responses are linked into a chain, where the movement for one response triggers the next. Once initiated, the response chain simply runs off without conscious supervision, as if it were a mental servant sent to carry out a task autonomously.

Automaticity is not an all-or-none phenomenon but rather has gradations. Most routine tasks are "scripted," containing a standard sequence of actions and "props." For example, starting a car is scripted as: "take key from pocket, open door, sit in driver's seat, put key in ignition," etc. Although less rigid and relying less on muscle memory, this scripted behavior is also routine and limits attention to relevant information.

Once in automatic mode, users are unlikely to notice warnings. The obvious solution is to prevent automatic behavior from developing, but this would come at a high cost. Automatic behavior is needed for skilled and productive behavior, so any interference lowers productivity. As frequently happens, there is a trade-off between safety and efficiency.

Conclusion
There are limits to what warnings can do, even under the best conditions. Some users will consciously perceive the warnings, accurately assess risk and still fail to change behavior. They may enjoy risk, operate under time or cost pressures, believe the risk doesn't apply to them, or be overly optimistic in their sense of control.

Given this constraint, however, warnings can be improved by learning from failures. Format may play a role, but it is doubtful many accidents occur because a warning is yellow rather than red or says "Caution" instead of "Warning." The more likely scenario is that users simply do not notice the warning and/or do not consider the risk. Users learn that warnings contain little useful information and that it is more useful to direct attention elsewhere. They develop automatic behavior. They learn that warnings have little credibility and that risk can be better judged by employing their own senses. They learn that, given the low probability of real harm, the cost of compliance is too high.

The view that warning failures arise from normal human psychology has an important corollary: Any safety intervention must accept people for what they are, not what we wish they were. The mental processes that control behavior operate largely outside conscious awareness, and people cannot readily change them even if they try. Ironically, this is fortunate, because the same processes of adaptation that cause warning failure are also necessary for producing efficient and skilled performance9.

Perhaps the ultimate cause of warning failure is wishful thinking. Designers of warnings and other artifacts often assume people should and will respond in some idealized, prescribed manner. It would be very convenient if users would approach a task by first looking for possible hazards, carefully scrutinizing warnings and instructions, and then adhering to rules and regulations. Unfortunately, this is not how people are likely to act. Expecting people to behave contrary to innate predisposition is usually a failed strategy. The better approach is to fit the design to likely behavior. As psychologist Harry Kay10 said, "We shall understand accidents when we understand human nature."

References
1. Green, M. (2001). Caution! Warning literature may be misguided. Occupational Health & Safety, December, 16-18.
2. Young, S., Frantz, P., Rhodes, T. & Darnell, K. (2002). Safety signs & labels: Does compliance with ANSI Z535 increase compliance with warnings? Professional Safety.
3. Weegels, M. & Kanis, H. (1998). Misconceptions of everyday accidents. Ergonomics in Design, October, 11-17.
4. Bissell, P., Ward, P. & Noyce, R. (2000). Mapping the contours of risk: Consumer perception of non-prescription medicines. Journal of Social and Administrative Pharmacy, 17, 136-142.
5. Goldhaber, G. & deTurck, M. (1988). Effectiveness of warning signs: Gender and familiarity effects. Journal of Product Safety, 11, 271-284.
6. Frantz, P. & Rhodes, T. (1993). A task-analytic approach to the temporal and spatial placement of product warnings. Human Factors, 35, 719-730.
7. Brown, I. (2001). A review of the "look but fail to see" accident causation factor. Proceedings of the Eleventh Seminar on Behavioural Research in Road Safety, 145-144. London: Department of the Environment, Transport and the Regions.
8. Green, M. (2002). Inattentional blindness. OHS Canada, Jan/Feb, 23-29.
9. Green, M. (2003). Skewed view: Accident investigation. OHS Canada, June, 24-29.
10. Kay, H. (1971). Accidents: Some facts and theories. In P. Warr (Ed.), Psychology at Work (pp. 121-145). Baltimore: Penguin.

This article originally appeared in the February 2004 issue of Occupational Health & Safety.
