The Illusions of Risk Assessment

Real knowledge and understanding do not come from raw data alone. Careful and skillful interpretation of data leads to real understanding.

"The greatest obstacle to discovery is not ignorance - it is the illusion of knowledge."
-- Daniel J. Boorstin

Data are collected in an attempt to understand the complex world around us. However, once data are acquired, some people jump to the conclusion that they now have knowledge. This assumption of knowledge is often an illusion that obscures the truth. Raw data usually require skillful interpretation to unravel their true meaning. Those who use data should apply careful analysis and avoid simple generalizations, common assumptions, and self-serving conclusions.

The following is a review of some of the principles and issues that should be taken into consideration so data will be correctly interpreted and properly utilized. This article is intended for those who collect data and also for those who make decisions based on their understanding of data.

Data Do Not Necessarily Mean Anything
An evaluator should be appropriately skeptical about data and not be biased toward finding a meaning. The following are examples of when data may not mean anything:

  • Numerical data. Numbers are often perceived as revealing truth because they are objective. However, just because data are objective does not mean they are meaningful. Could an employer's accident frequency decrease by 100 percent but be relatively meaningless? Of course. If an employer had one injury one year and no injuries the next, this 100 percent reduction could easily have occurred by luck alone, and therefore the difference is meaningless.
  • Relevance of data. Just because data are contained in a report doesn't mean they are relevant or the evaluator will interpret them correctly. For example, "data padding" is sometimes done by those who collect data because of a management philosophy that mistakenly equates quantity with quality. However, sifting through a mass of data that contain irrelevant information can lead to misunderstandings as well as just waste time.

Data Should Be Presented in Context

  • Why It Happened. The simple occurrence of an event (injury, hazard, unsafe behavior, etc.) is often not enough information. Identifying an unguarded machine is only part of the story; knowing why the machine was unguarded is often critical in understanding how to accurately evaluate the event.
  • Severity of hazard. If a hazard is identified, it should also be ranked in some way to indicate how serious a problem it really is.
  • Severity of injury. Looking only at the severity of an injury can be misleading when making judgments about the quality of an employer's safety program. A death may occur and the employer may not have done anything wrong. A relatively minor cut finger may reveal that the employer prohibited the use of machine guards.
  • Likelihood of occurrence. The significance of a hazardous condition is partly dependent upon how likely it is to lead to an accident. For example, a machine may be unguarded, but the likelihood of an actual injury will depend upon a number of additional factors. An unguarded machine in an area where employees seldom work is different from one where employees work continuously.
  • Compared to what? Safety inspections often report just the hazards that are observed and not the conditions that are safe. A report that identifies one unguarded machine may indicate a problem or a success, depending on the number of machines that are properly guarded and the type, size, and past history of the employer.

Collection of Data
An inspection is often an important part of a safety evaluation. How the inspection data are collected is critical in determining whether the information will be an accurate representation of the risk. The more variable the type of work and the more locations that are involved, the greater the need for multiple inspections to accurately evaluate the business.

Organization of Data
How data are organized makes a big difference in the logical interpretations that can be made later. For example, if all back injuries are lumped into a single category, some people would jump to the easy conclusion that numerous back injuries represent a trend involving incorrect manual lifting. However, a more detailed evaluation might reveal the back injuries were actually the result of several different unrelated factors, such as falls, prolonged sitting, vehicle accidents, lack of mechanical lifting devices, etc.
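The difference between the lumped view and the detailed view can be sketched with a short script. The injury records below are hypothetical, invented only to illustrate the point:

```python
from collections import Counter

# Hypothetical injury log: each record tags the body part and the underlying cause.
injuries = [
    ("back", "fall"), ("back", "manual lifting"), ("back", "vehicle accident"),
    ("back", "prolonged sitting"), ("back", "manual lifting"), ("back", "fall"),
]

# Lumped view: every record lands in one bucket, suggesting a single lifting trend.
by_part = Counter(part for part, cause in injuries)

# Detailed view: the same records broken out by cause tell a different story.
by_cause = Counter(cause for part, cause in injuries)

print(by_part)   # all six are "back" injuries
print(by_cause)  # but only two involve manual lifting
```

The lumped count suggests a lifting problem; the breakdown by cause shows four unrelated factors behind the same six records.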

Loss Ratio Problems
Loss ratios are commonly used to make judgments concerning an employer's relative risk. A loss ratio typically compares the cost of workers' compensation claims to the premium paid for workers' compensation insurance (claims costs divided by premium).

  • The severity, and hence the cost, of injuries is highly subject to chance factors. Therefore judgments about the safety performance or risk of an employer should include other measures besides just the loss ratio.
  • Loss ratio comparison problems also occur if the premium side of the equation is not evaluated. Changes can occur to insurance rates, wages, workers' compensation benefits, and insurance discounts or surcharges. These changes affect the premium side of the equation and can either increase or decrease the loss ratio. For example, if an employer gets an insurance premium discount, the loss ratio for the next year can actually go up even if the actual losses go down.

The Problem of Small Numbers
Ratios, percentages, frequencies, and raw numbers need further analysis if the database is relatively small. For example, many people are tempted to point to even a small increase or decrease in a type of injury and declare they have identified a trend. The smaller the database, the greater the need for supporting evidence.

The Problem of Large Numbers
In some situations, grouping together too much data can be a problem. For example, calculating an average that includes many years of data may obscure changes that have occurred recently. If significant changes have occurred over time, it would be more valid to calculate averages separately for each period of time that logically should be grouped together.
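A minimal sketch of how an overall average can hide a recent change, using made-up incident rates for a program that changed after year three:

```python
# Hypothetical incident rates over six years; a new program began in year 4.
rates = [10.0, 9.8, 10.2, 4.1, 3.9, 4.0]

overall = sum(rates) / len(rates)     # one average across all six years
before = sum(rates[:3]) / 3           # average before the change
after = sum(rates[3:]) / 3            # average after the change

print(overall, before, after)
```

The six-year average of about 7 describes neither period: the rate was near 10 before the change and near 4 after it, so averaging each period separately is the more valid summary.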

Frequency of Injury Problems
Very high numbers or very low numbers are almost always interpreted as being important. This is not always true.

  • Low number may not be low. An employer may have a low number of reportable injuries, but this does not necessarily mean the employer has a good safety program. If the employer is small enough or the time period being evaluated is short enough, a low number of injuries could have occurred merely by luck.
  • High number may not be high. Multiple injuries can occur from one catastrophic incident. While this may produce an injury frequency rate much higher than expected, it does not have the same meaning as the same rate produced by separate incidents.
  • Low number but high risk. Lack of machine guarding can result in very severe injuries and especially harsh legal consequences. Therefore, even a low number of this type of injury is usually significant.
  • High number but low risk. A high number of injuries should not be given undue importance if it is unlikely that future injuries will be serious. For example, a sheet metal shop may have a lot of cuts due to handling sharp metal. This type of injury is not likely to cause serious consequences and therefore it is not as significant as many other types of injuries.
  • Comparisons between employer and database. The validity of any comparison will be largely determined by how well the database represents the employer being compared. For example, if the database is the frequency rate for all agricultural employers, this would not be very helpful because there are many different types of agricultural employers, and some of them have significantly different frequency rates.
  • Comparisons between different time periods. If the number of employees and the type of work are relatively constant between given time periods, the number of injuries can be compared directly. Changes often occur, however, that make direct comparisons invalid.

Strategies for Using Frequency of Injury Data
Injuries per $1,000 of payroll. When simple numerical comparisons are not valid, a rate can be calculated and used for a more legitimate comparison. For example, the number of injuries can be converted into the "number of injuries per $1,000 of payroll." If the payroll was $800,000 and two injuries occurred, then the injuries per $1,000 of payroll would be equal to 0.0025 (2 divided by 800 = 0.0025).
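The payroll-based rate above can be written as a one-line function; the only inputs are the injury count and total payroll from the article's example:

```python
def injuries_per_1000_payroll(num_injuries, payroll_dollars):
    """Convert a raw injury count into injuries per $1,000 of payroll."""
    return num_injuries / (payroll_dollars / 1000)

# The article's example: 2 injuries on an $800,000 payroll.
rate = injuries_per_1000_payroll(2, 800_000)
print(rate)  # 0.0025
```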

Injuries per hours worked (incident rates). Incident rates have been calculated for selected businesses on a national level and for some states. This is very useful because you can calculate your incident rate and then compare it to national or state incident rates. The incident rate is the number of "recordable" injuries and illnesses per 100 full-time workers. The formula for incident rates is (N/EH) x 200,000.
N = Total number of recordable injuries and illnesses
EH = Total number of hours worked by all employees during the year
200,000 = Base for 100 equivalent full-time workers (working 40 hours, 50 weeks per year)
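The incident rate formula translates directly into code. The shop below is hypothetical: 50 employees working 2,000 hours each (100,000 total hours) with 5 recordable cases:

```python
def incident_rate(recordables, hours_worked):
    """OSHA-style incident rate: (N / EH) x 200,000,
    i.e. recordable cases per 100 equivalent full-time workers."""
    return (recordables / hours_worked) * 200_000

# Hypothetical shop: 5 recordable cases, 50 employees at 2,000 hours each.
rate = incident_rate(5, 100_000)
print(rate)  # 10.0
```

A rate of 10.0 means this shop experienced the equivalent of 10 recordable cases per 100 full-time workers, which can then be compared against the published state or national rate for its industry code.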

A "recordable" injury or illness is one that is required to be recorded according to OSHA rules. In general, a recordable injury or illness is one that involves professional medical treatment that goes beyond "first aid."

An incident rate calculator is available at http://data.bls.gov/IIRC/.

NAICS and SIC (industry codes): http://www.osha.gov/oshstats/naics-manual.html.

Compare your calculated incident rates with state or national incident rates. National: http://www.bls.gov/iif/home.htm.

Are Your Decisions Justified?
If a company has a loss ratio of 40 percent for one year and 25 percent for the next year, this represents a difference of 15 percentage points. Does this difference justify any kind of significant decision? For example, based on this 15-point improvement, would an insurance company be justified in rewarding the employer with a discount? Consider the following examples.

  • Example of large difference. The meaning of the following example will be clear because the difference is so extreme. If a business with 100 employees has 30 injuries one year and in the next year has only one injury, this is a 97 percent difference. You can confidently assume that this is a real difference and not merely the result of random variability (chance).
  • Example of large database. If a business with 10,000 employees has 500 injuries one year and in the next year has 416 injuries, this is only a 16.8 percent reduction in the number of injuries. However, it probably represents a real difference. The large amount of data involved makes it unlikely that chance factors alone could have produced this difference.
  • When is a difference or a database large enough? In many cases, evaluators just make a guess based on experience. However, the way to be sure of the validity of your judgment is to use a statistical test. In the previous example of a large database, a statistical test called Chi-Square shows that the probability is less than 1 percent that the difference between 500 and 416 occurred just by chance. Therefore, you can be 99 percent confident that the reduction from 500 to 416 injuries has been achieved by something other than chance.
  • Evaluation software. A computer program may provide you with data such as the expected frequency of injuries for the payroll reported by a specific employer. However, how the expected frequency is calculated makes a big difference in how relevant the figure is. Is the expected frequency based on using a broad "standard industry classification" governing payroll class, or is each payroll class listed separately? Even when you have a good apples-to-apples comparison, you still have to deal with the issue of whether differences between the expected frequency and the actual frequency are "real" differences. The point is, just because a software program provides a tool for evaluation doesn't necessarily mean it can be properly used without in-depth understanding and additional analysis.
  • Statistical tests. How to conduct the statistical analysis of data is beyond the scope of this article. However the value of using statistics is an important principle that should be recognized and utilized. Some data cannot be truly understood without the use of statistical tests.
  • Chi-Square test example. The following Chi-Square Test is presented to demonstrate how statistics are important in better understanding data.

Chi-Square Test
One of the statistical tests used to investigate whether sets of data are really different is the Chi-Square Test. Fortunately, online calculators will do the complex calculation automatically. You can use the Chi-Square calculator at the following site: http://faculty.vassar.edu/lowry/VassarStats.html. After opening this site, select "Frequency Data," scroll down to the section entitled "For a 2X2 Table of Cross-Categorized Frequency Data," and then select "Version 1." Enter your data using the examples below as a guide.

Examples of how to organize and enter data in a Chi-Square table:

Business #1
2006 (Year 1): Number of employees who did not have an injury: 900
Number of employees who did have an injury: 100

2007 (Year 2): Number of employees who did not have an injury: 861
Number of employees who did have an injury: 64

Probability (P) = 0.019457 = 1.9 percent (rounded off)

If you enter the data from the "Business #1" example into the calculator, the following result will be calculated: Probability (P) = 0.019457. Converted to a percentage, 0.019457 equals 1.9 percent (rounded off). This means there is only a 1.9 percent probability that the observed difference is due to chance factors. Therefore, you can say with a high degree of confidence (98.1 percent) that the observed difference is a "real" difference.

Business #2
2006 (Year 1): Number of employees who did not have an injury: 500
Number of employees who did have an injury: 90

2007 (Year 2): Number of employees who did not have an injury: 510
Number of employees who did have an injury: 75

Probability (P) = 0.263 = 26.3 percent

In the above example, it looks like there has been an improvement because injuries have been reduced from 90 per year to just 75. However, because the total number of employees is lower in this example, the possibility of chance factors is increased. A Chi-Square calculation results in a probability of 0.263. This means that you have a 26.3 percent chance of being wrong if you conclude the reduction of injuries from 90 to 75 indicates anything beyond random variability.
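The two business examples can also be reproduced without the online calculator. The sketch below is a plain 2x2 chi-square test using only the standard library; it applies Yates' continuity correction, which is an assumption on my part, but one that matches the p-values quoted above (for 1 degree of freedom, the chi-square tail probability reduces to erfc(sqrt(x/2))):

```python
import math

def chi_square_2x2(a, b, c, d):
    """P-value for a 2x2 table (1 degree of freedom),
    with Yates' continuity correction.

    Table layout:    no injury   injury
        year 1           a          b
        year 2           c          d
    """
    n = a + b + c + d
    corrected = max(0.0, abs(a * d - b * c) - n / 2)
    chi2 = n * corrected ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 df, the chi-square survival function reduces to erfc(sqrt(x/2)).
    return math.erfc(math.sqrt(chi2 / 2))

p1 = chi_square_2x2(900, 100, 861, 64)  # Business #1
p2 = chi_square_2x2(500, 90, 510, 75)   # Business #2
print(f"Business #1: p = {p1:.4f}")  # about 0.019 -- likely a real difference
print(f"Business #2: p = {p2:.3f}")  # about 0.26 -- could easily be chance
```

Running this gives roughly 0.019 for Business #1 and 0.26 for Business #2, consistent with the probabilities reported above: the first reduction is very likely real, while the second could easily be chance.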

The Problem of Causation
Data are often used to support the contention that one thing has caused another thing to occur. In many situations, it is not "scientifically" valid to make such assumptions. When multiple variables are present that could be the causative agent, skillful collection of data and control of all of the variables are necessary. In many cases it is just not practicable to control all of the variables. Consider this example for a hypothetical business:

  • The incident rate in 2005 was 10.
  • At the beginning of 2006, a new loss control program was started.
  • By the end of 2006, the incident rate dropped to 5.
  • An analysis of the difference between 10 and 5 determined that this difference is real.
  • The question now is, "Did the new loss control program cause the incident rate to go down?"
  • Before drawing any conclusion, other possible explanations should be considered. For example, if the business moved into a new facility in 2006, this could be a valid alternative explanation as to why the incident rate went down.

Conclusion
Real knowledge and understanding do not come from raw data alone. It is the careful and skillful interpretation of data that leads to real understanding. Analysis and interpretation take time and effort, but the quality of the knowledge and the validity of any subsequent decision are well worth the effort.

About the Author

Dan Hartshorn ([email protected]) is a retired senior loss control consultant. Professor Richard Lowry of Vassar College was kind enough to review an early version of this article and make some important corrections concerning statistical principles.