The Journal of Business Continuity & Emergency Planning: July Editorial
By Lyndon Bird
Chief Knowledge Officer, DRI International
There are two risks that have concerned me for a while and to which the resilience community has not always given sufficient attention. Both relate to the increasing complexity of technology: first, our failure to understand fully what that technology is really doing; and second, the limited capability of those charged with approving that technology to do so properly.
On Sunday 10th March 2019, an Ethiopian Airlines Boeing 737 MAX 8 crashed shortly after take-off from Addis Ababa airport. The four-month-old aircraft came down just six minutes into its flight to Nairobi, Kenya, at an altitude of only 450 feet. The plane was carrying 149 passengers and eight crew members; citizens of at least 35 nationalities, including several British, US and Canadian nationals, were among those who died. No one survived.
Air crashes are always tragic, and thankfully very rare. However, one involving a new aircraft with the most advanced current technology on board initially seemed inexplicable, particularly as there was no suggestion of bad weather or pilot error. It soon emerged that the plane was the same model as one operated by Lion Air, which had plunged into the Java Sea minutes after take-off from Jakarta, Indonesia in October 2018. There were 189 people on board, all of whom died.
Less media attention was given to the Lion Air disaster, and no definitive cause has yet been accepted. However, two similar catastrophic incidents within a six-month period could not be ignored, and regulatory authorities around the world quickly grounded this type of aircraft. The manufacturer was not immediately convinced, citing differences between the two incidents, and the Federal Aviation Administration (FAA), the US regulator, did not act immediately. In fact, the largest operator of this type of aircraft, the Dallas-based Southwest Airlines, initially indicated that it had no plans to ground its fleet. Inevitably it had to, as the US regulator joined its international colleagues in grounding the Boeing 737 MAX 8 until further notice.
Chloe Demrovsky, President of DRI International, was among the first to ask whether the FAA’s reticence to act immediately could be associated with the closeness of the working relationship between the plane manufacturer and the regulator. She was not alone in asking this question; on March 19, 2019, the U.S. Department of Transportation requested an audit of the regulatory process that led to the aircraft’s certification in 2017, and soon afterwards the FBI launched its own investigation into the FAA’s certification process.
Although final conclusions will take time, the general view is that the problem was a technical malfunction in an anti-stalling system which, in certain situations, causes the nose of the plane to pitch downward. This flight-control feature was apparently activated in both of the fatal crashes.
I think this tragic set of circumstances illustrates the paradox that we are facing. We are now using advanced technology largely to eliminate the need for human decision-making and operational control. The benefits are enormous in terms of efficiency, reliability and (usually) safety. However, once the technology becomes too complicated for anyone other than its designers to understand fully, there is no real way for any independent authority to assess or evaluate its efficacy.
In many ways, this reminds me of the financial crash of 2007, when much of the problem was created by regulators not understanding the inherent risks in the sophisticated modelling algorithms used by financial institutions. It all seemed to work brilliantly until it suddenly didn’t. The same might be said of any automated system, and I for one remain highly dubious about the short-term viability of driverless cars, or of any critical activity without ultimate human control. The European Union has recently announced plans to require all new cars to have their speed governed automatically after 2022, but I really question whether this will make roads any safer. Machines still have little ability to act instinctively: they are perfect for everyday functionality, but not so useful for the random unknowns that driving can generate, for example when an over-the-limit burst of speed can avoid a major collision.
This debate brings me round to a fundamental question about measurement and metrics. Nearly everyone subscribes to the view that we need to be better at measuring the value of our resilience programmes, but what exactly are we measuring? I am sure that Boeing and the FAA in 2017, and the major financial institutions a decade earlier, believed in the quality and appropriateness of their risk, security, safety and continuity methods. They also tried to ensure that their people complied with them fully. Nevertheless, failures do occur, so are we really measuring what is important? The argument has always been that if you have correct processes, always follow them and can demonstrate that to your auditors, you are systematically reducing risk and preventing the vast majority of failures. I agree with this, but believe that we must find additional metrics that help us assess our ability to manage catastrophic situations.
The Journal of Business Continuity & Emergency Planning is the world’s leading journal on disaster recovery and emergency planning, publishing peer-reviewed articles and case studies written by and for heads of emergency, risk and resilience management. DRI International Certified Professionals in good standing receive a special 15% discount on subscriptions, which include both print and online versions. To subscribe now, simply click the link below.
https://www.henrystewartpublications.com/subscription/jbcep. To receive your discount, please quote code DRI015.
ABOUT THE JOURNAL: