Texas City and Buncefield: Will We Ever Learn?
These incidents prompted a paradigm shift in organizational behavior and influenced the creation and enforcement of new legislation to prevent similar situations from recurring.
- By David Dana
- Jul 01, 2016
Ten years on, we look at how and why incidents such as the Texas City, Texas, and Buncefield explosions occurred, and briefly consider how vital safety culture and management systems are in fostering a safe working environment.
Picture this: It is early in the morning, and you are standing at a gas station refueling your car. You drift off into your own little world when, "click"—the pump stops; your tank is almost full. Only now, this is not your fuel tank. It is a tank at the Buncefield Oil Storage Depot in Hertfordshire, England, and that little "click," that stopping mechanism, was supposed to have been triggered by a device called the "Independent High Level Switch." (MIIB, 2008).
The word "independent" highlights that this was a backup system. Back at the gas station, you would have been the backup: if the "click" had failed, you would have been splashed with fuel and let go of the pump yourself.
The point I am trying to make is that workplaces are supposed to have multiple, independent layers of protection. Those layers exist to protect the workers, but keeping them effective must go beyond the purely technical aspects of safety management.
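The layered-protection idea can be sketched in a few lines of code. This is a purely illustrative model, not anything from the investigation reports: the layer names and their states below are my own simplification, chosen to echo the overfill protections discussed later in this article. The key property is that a loss of containment escalates into an incident only when every independent safeguard fails at the same time.

```python
def incident_escalates(layers):
    """Return True only if every independent protection layer fails.

    `layers` is a list of (name, functioning) pairs. A single working
    layer is enough to contain the event: that is the rationale for
    demanding independence between safeguards.
    """
    return all(not functioning for _, functioning in layers)

# Hypothetical layer states, loosely echoing an overfill scenario in
# which every safeguard happened to be inoperable at once.
overfill_layers = [
    ("tank level gauge", False),
    ("high-level alarm", False),
    ("independent high-level switch", False),
]

print(incident_escalates(overfill_layers))   # True: all layers failed

# Restore any single layer and the event is contained.
overfill_layers[2] = ("independent high-level switch", True)
print(incident_escalates(overfill_layers))   # False
```

The design point the sketch makes is that safeguards multiply reliability only when they fail independently; a shared weakness, such as one maintenance regime neglecting all of them, collapses the layers into one.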
What Were the Causes of the Explosions?
The U.S. Chemical Safety Board (2007) found that, during a cold start-up and after a series of communication blunders, a drain valve was left shut for too long. This allowed heated fuel and vapor to build up to explosive levels. Level gauges and indicators also malfunctioned, and high-level alarms failed to activate to warn operators of the impending disaster.
At Buncefield, there was a failure to manage the storage of incoming fuel: a tank overfilled because its fill gauge, high-level switch, and alarm all failed to operate, and no other mechanism was available to alert control room staff to the impending overfill (MIIB, 2008).
Gauges and level alarms were known to be unreliable and had been reported on numerous occasions at both locations (MIIB, 2008; CSB, 2007). At Buncefield, however, the high-level switch relied on a lever specifically designed to be padlocked into its operating position so that the switch could alarm at high level and trigger a system shut-off, and that lever was not padlocked into place.
But that is the thing: there was no padlock. The padlock was assumed to be merely an anti-tamper device, not essential to the switch's operation (MIIB, 2008). This may indicate a design weakness, and a loss of system information, that also contributed to the incident.
Duguid (2008) analyzed more than 1,000 incidents in the process industry and established some striking facts. Nearly 50 percent of incidents occurred during start-ups and/or were maintenance-related in some way, and a further 20 percent were attributed to operator error that better design could have prevented. These findings ring alarmingly true when we consider Texas City and Buncefield, and when we look at the outcomes of other major events such as Piper Alpha, Bhopal, and Flixborough.
Kletz (2009, p. 593) rightly stated that "we need to look over our fences and see the many opportunities we have to learn from accidents." Perhaps shamefully, even with advice from leading researchers, many organizations, including those involved in the Texas City and Buncefield incidents, appear to have overlooked that important lesson.
However, a couple of failed switches and alarms alone did not cause these accidents; the failures go much deeper. At Buncefield, there was evidence that equipment was malfunctioning and not operational, yet managers failed to act on the knowledge the field team gave them (MIIB, 2007). The BP refinery in Texas City systematically collected such evidence, but corporate entities failed to correct the issues (CSB, 2007).
Furthermore, standard operating procedures were not adhered to at either location. Shortcuts were the order of the day, and not one corporate entity flagged that recording and audit systems were not fit for purpose.
It appears that Texas City and Buncefield embraced a culture of continuing at all costs. The MIIB's 2008 report shows Buncefield wrestling with cost and business pressures in a way that recalls the Ellsberg Paradox (1961): production versus protection. The operators appeared to believe that investing less in safety was acceptable, right up until the explosion.
Texas City, however, was different. The CSB (2007) noted that, from the 1999 BP/Amoco merger up until 2004, fixed spending was cut by 50 percent, which severely impacted safety. Despite operational-level staff providing regular, comprehensive reports, inspections, audits, and reviews aimed at improving safety (covering fatal incidents, equipment condition, lack of training, workload, and fatigue, for example), nothing was done to resolve the situation: corporate management failed to provide what was urgently required.
By contrast, the Buncefield audits most relevant to the causal factors of the overflow showed 94.7 percent compliance (Booth, 2011, p. 15). But the audits raise a critical issue: given Buncefield's audit figure, and knowing that nothing was actually remediated at the Texas City facility, it seems evident that both locations knew what had to be done; they simply failed to do it.
Because of this poor corporate oversight and strategic decision-making, these incidents prompted a paradigm shift in organizational behavior and influenced the creation and enforcement of new legislation to prevent similar situations from ever happening again.
Nevertheless, even with these measures in place, catastrophic events still occur. As Reason (1998, p. 293) highlights, even robust systems and evident safeguards have not stopped accidents from repeating themselves.
How Did Texas City and Buncefield Respond to the Incidents?
Unfortunately, a thorough comparison of the two emergency responses is difficult. Considerably more information is available for Buncefield than for Texas City, but this may be because Buncefield required a large community response (evacuations, public information-sharing, and media management, for example) rather than just an on-site one (MIIB, 2008; COMAH, 2011, p. 70). Most of this worked very well, thanks to regular inter-agency meetings and exercises.
Buncefield's blast caused no fatalities, although many people, including emergency responders, suffered smoke inhalation injuries. Texas City, by contrast, had 15 fatalities, but its emergency response was discussed only to the extent that it bore on the investigation; there were only a few small, secondary fires and smoke (CSB, 2007, p. 70).
It was a different story for Buncefield. No one envisaged the on-site emergency management team being harmed by what they were meant to control. Reflecting on my own experiences with emergency response, there were three main issues with that:
First, smoke-related information and advice for responders and the public was lacking, and no incident command was established to monitor conditions and make decisions. Second, planning had covered only smaller-scale incidents, such as train crashes; Buncefield burned for days, and resources were stretched. Third, Buncefield's worst-case planning scenario was far smaller than what actually happened.
With no official information available, and again based on experience, the only real risk to the Texas City responders was navigating the debris field to extract all of the injured. No community response was required, apart from outside health care support.
Additionally, Buncefield suffered considerable groundwater contamination from water runoff (COMAH, 2011, p. 5). This was not really a failure of the emergency response but of the same secondary and tertiary containment whose loss allowed the fuel to spill in the first place.
What Recommendations Were Made Following the Incidents?
Both locations appeared negligent and showed a blatant disregard for policies, procedures, occupational safety and health, and the well-being and lives of their workforce. There was far too much emphasis on bowing to short-term cost pressures.
The CSB (2007) and MIIB (2008) reports highlighted primary containment as a vital technical recommendation for major-hazard risk control. As a result, shut-off safety controls were shifted from manual, human controls to automated systems. Importantly, neither Texas City nor Buncefield appears to have followed its country's safety-integrity standards in the first place; adhering to these might have prevented both explosions (COMAH, 2011, p. 5).
Critically, the CSB (2007) and MIIB (2008) documents are also very similar in that both convey the strong need to improve safety culture across the business organization. Ultimately, the Texas City and Buncefield incidents did not identify anything new about major incident prevention; rather, they reinforced important process safety management principles that have been known for some time.
Could Other Sectors Learn from These Incidents?
Appropriate process safety management might have averted disaster at Texas City and Buncefield, but on its own it would not have been enough; the safety culture behind it also had to change. If we are to ensure that lessons learned translate into improved practice across other sectors, we need to review them periodically to eliminate recurring problems.
Standardization would eliminate guesswork (Almklov et al., 2014), as would openly sharing incident resources with authorities and experts. Additionally, an important lesson from Buncefield is that oversight of subcontractors and subsidiaries must be tightly aligned.
It takes time to change culture in most organizations, which implies the need for a detailed methodology to make learning effective and ensure that accidents do not repeat themselves. A company can choose to invest wisely in a hard-hitting initiative from the executive level down, or sit idly by thinking it won't happen to them and that everyone and everything will be OK. That is a serious gamble. Some say that safety is not a choice, but it is exactly that: You either want to be safe, or you do not.
1. Almklov PG, Rosness R, Storkersen K (2014). "When Safety Science Meets the Practitioners: Does Safety Science Contribute to Marginalisation of Practical Knowledge?" Safety Science, 67, p. 32.
2. Baker J, et al. (2007). The Report of the BP U.S. Refineries Independent Safety Review Panel. (1), 30. Accessed Oct. 3, 2015.
3. Booth R (2011). How Hindsight Bias Distorts History: An Iconoclastic Analysis of the Buncefield Explosion (Full Version). p. 15.
4. Control of Major Accident Hazards (2011). Buncefield: Why Did It Happen? pp. 4-5.
5. Duguid IM (2008). "Analysis of Past Accidents in the Process Industries," Paper 87 and Handouts. Hazards XX: Process Safety and Environmental Protection. Institution of Chemical Engineers. p. 1070.
6. Ellsberg D (1961). "Risk, Ambiguity, and the Savage Axioms." Quarterly Journal of Economics, 75(4), p. 643.
7. Kletz T (2009). What Went Wrong? Case Histories of Process Plant Disasters and How They Could Have Been Avoided (5th ed.). p. 593.
8. Major Incident Investigation Board (2007). Recommendations on the Emergency Preparedness for, and in Response to, Recovery from Incidents. pp. 10-20.
9. Major Incident Investigation Board (2008). The Buncefield Incident: The Final Report of the Major Incident Investigation Board. www.buncefieldinvestigation.gov.uk/reports/index.htm#final. Accessed Oct. 4, 2015.
10. Reason J (1998). "Achieving a Safe Culture: Theory and Practice." Work & Stress, 12(3), p. 293.
11. United States Chemical Safety Board (2007). Final Investigation Report: Refinery Explosion and Fire. www.csb.gov/assets/1/19/csbfinalreportbp. Accessed Oct. 4, 2015.
This article originally appeared in the July 2016 issue of Occupational Health & Safety.