HOW TO PREPARE FOR A CRISIS
On the morning of May 18, 2012, at
precisely 11:05, Nasdaq planned to execute the first trade in Facebook’s
hotly anticipated initial public offering. The opening trade was an auction of
sorts — buyers and sellers entered orders, and Nasdaq calculated a price that
would cause as many shares as possible to change hands. As the start of trading
approached, hundreds of thousands of orders poured in. But when 11:05
arrived, nothing happened.
With billions of dollars
poised to change hands and the spotlight on, Nasdaq managers scrambled to
diagnose the problem, dialing into an emergency conference call to troubleshoot.
After a few minutes, a group of programmers narrowed the problem down to
something called the validation check, a safety feature they had built into the
computer program years earlier. Despite the check’s warning that something was
amiss, managers decided to push forward anyway.
When the validation check
was removed, trading started, but the workaround caused a series of failures.
It turned out the check had initially picked up on something important: a bug
that caused the system to ignore orders for more than 20 minutes, an eternity
on Wall Street. Traders blamed Nasdaq for hundreds of millions of dollars of
losses, and the mistake exposed the exchange to litigation, fines, and reputational
costs.
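To make the idea of a validation check concrete, here is a minimal sketch in Python with hypothetical names (Nasdaq’s actual system is not public). Two independent calculations of the opening price must agree before trading can begin; bypassing the check lets trading start, but the discrepancy it detected is still there.

    # A hypothetical sketch of a validation check (not Nasdaq's actual code).
    # Two independent calculations of the opening price must agree
    # before trading is allowed to start.

    class ValidationError(Exception):
        """Raised when the cross-check disagrees with the primary calculation."""

    def primary_opening_price(orders):
        # Primary calculation: the price at which the most shares change hands.
        volume_at = {}
        for price, quantity in orders:
            volume_at[price] = volume_at.get(price, 0) + quantity
        return max(volume_at, key=volume_at.get)

    def independent_opening_price(orders):
        # A second, independently written calculation used only as a cross-check.
        totals = {}
        for price, quantity in orders:
            totals[price] = totals.get(price, 0) + quantity
        return max(totals, key=totals.get)

    def open_trading(orders, bypass_check=False):
        price = primary_opening_price(orders)
        if not bypass_check:
            if independent_opening_price(orders) != price:
                # A mismatch is a symptom of a deeper bug (for example,
                # ignored orders); stopping here is the point of the check.
                raise ValidationError("opening price mismatch")
        return price  # Bypassing the check hides the symptom, not the bug.

In this sketch, setting bypass_check to True is the software equivalent of the workaround Nasdaq chose: the warning disappears, but the underlying bug does not.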
Most of us don’t oversee
huge IPOs, but sooner or later, every team faces an unexpected crisis:
technology breaks, a competitor makes a disruptive move, a promising project
fails, a key employee quits, consumers have a negative reaction to a new
product — the list goes on.
Some teams are good at
handling the unexpected, but most aren’t. Under stress and time pressure, it’s
difficult to stay calm, diagnose a problem, and come up with solutions.
Over the past five years,
we have studied dozens of
unexpected crises in all sorts of organizations and interviewed a broad swath of people —
executives, pilots, NASA engineers, Wall Street traders, accident
investigators, doctors, and social scientists — who have discovered valuable
lessons about how to prepare for the unexpected.
Here are three of those
lessons.
1. Learn to stop.
When faced with a surprising event, we
often want to push through and keep going. But sticking to a plan in the face
of surprising new information can be a recipe for disaster. This has played a
role in many failures, from the Facebook IPO to the Deepwater Horizon oil
spill.
Instead, managers need to
foster norms that help people overcome the sense of defeat that comes from
halting an ongoing process or giving up on a planned course of action. A young
trader on Wall Street, for example, told us that he’d never received as much
praise from senior managers as when he stopped an apparently profitable trade
after realizing that he didn’t fully understand it. Such feedback helps create
norms that, one day, might prevent an unexpected event from turning into a
meltdown.
It’s even better if the
praise is public. Consider this story shared by researcher Catherine Tinsley and her colleagues:
“An enlisted seaman
on an aircraft carrier discovered during a combat exercise that he’d lost a
tool on the deck. He knew that an errant tool could cause a catastrophe if it
were sucked into a jet engine… He reported the mistake, the exercise was
stopped, and all aircraft aloft were redirected to bases on land, at a
significant cost. Rather than being punished for his error, the seaman was
commended by his commanding officer in a formal ceremony for his bravery in
reporting it.”
This is an incredible
response: Celebrate the guy whose mistake forced us to call off the whole
exercise and scour every inch of a huge deck to find a lost tool! Would that
happen in your organization? Would you celebrate someone who told you to
abandon your plan because he’d made an error?
Symbolic gestures like the
deck ceremony convey a powerful message: If you see a problem with pushing
ahead, then stop. Stopping gives us a chance to notice unexpected threats and
figure out what to do before things get out of hand.
2. Do, monitor, diagnose.
Sometimes stopping isn’t an option. If we
don’t keep going, things will fall apart right away. What can we do then? To
answer that question, University of
Toronto professor Marlys Christianson painstakingly
analyzed video recordings of dozens of emergency department teams that
participated in a simulation. The exercise involved a medical manikin hooked up
to a computer that simulated the responses of a real patient.
All teams had to manage the
same crisis: a boy with a history of asthma was brought into the hospital and,
a few minutes later, stopped breathing. Doctors sealed a bag-valve mask over
the boy’s face and squeezed the bag to force air into his lungs, but the
bag-valve mask didn’t help. Unbeknownst to the teams, the bag was broken; it
looked fine, but it supplied no oxygen. By the time most teams figured this out
and replaced the bag, it was too late.
But a few teams did solve
the problem. “The most striking thing about those teams was a pattern — a cycle
— of moving from tasks to monitoring to diagnosis and then back to tasks
again,” Christianson told us.
This cycle starts with
a task, such as intubating the patient. The next step is monitoring:
you check if performing the task had the expected effect. If it didn’t, then
you move on to the next step and come up with a new possible diagnosis.
And then you go back to tasks because you need to do something
— for example, administer medications or replace the bag — to test your new
theory.
But many teams failed to
complete the cycles. “The teams that didn’t do well often had really long
stretches of task talk,” says Christianson. “Or they’d just go task,
monitoring, and back to task. So they never figured it out.”
When dealing with a crisis,
it’s easy to be overwhelmed by tasks. Too often, we just keep our heads down,
focus on the task at hand, and push ahead. Cycling from doing to monitoring to
diagnosing — and then back to doing — is more effective, and practicing this
cycle can help teams prepare for the unexpected.
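For teams that script their incident response, the same cycle can be made explicit. The sketch below is a hypothetical Python illustration (the names respond_to_incident, monitor, and diagnose are ours, not Christianson’s): instead of repeating the same task, the loop checks whether each action worked and revises the diagnosis when it didn’t.

    # A hypothetical sketch of the do -> monitor -> diagnose cycle as a loop.
    # The point is the explicit monitoring and diagnosis between actions,
    # rather than long uninterrupted stretches of task work.

    def respond_to_incident(actions, monitor, diagnose, max_cycles=10):
        """
        actions:  ordered list of candidate interventions (callables)
        monitor:  callable returning True once the situation has improved
        diagnose: callable that proposes new actions based on what was observed
        """
        for _ in range(max_cycles):
            if not actions:
                break                    # No remaining theory to test; escalate.
            action = actions.pop(0)
            action()                     # Do: carry out the current task.
            if monitor():                # Monitor: did it have the expected effect?
                return True
            actions = diagnose()         # Diagnose: revise the theory, choose new tasks.
        return False                     # Escalate if the cycle does not converge.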
3. Know something about everybody else’s job.
Some
teams, such as film crews and SWAT teams, face surprises all the time. If the
layout of a house that a SWAT team enters is different from what the officers
expected, they still press on. When the power goes out at a filming location,
film crews figure out how to resume shooting as soon as possible. How do they
do it?
According to researchers Beth Bechky and Gerardo
Okhuysen, one critical factor that enables these
teams to handle surprises is that members are familiar with everyone else’s
work and understand how their various tasks fit together.
In the film industry, this
knowledge comes from how people progress through their careers. Many rookies
start as production assistants and work on tasks that cut across different
departments, from costumes to lighting and sound. SWAT teams achieve something
similar through cross-training. New officers, for example, need to learn how to
use a sniper’s rifle and scope even if they aren’t planning to become snipers.
They don’t need to become expert marksmen, but they need to understand what
snipers see and how they work.
This is an unusual
approach; most organizations emphasize deep specialization in one’s work rather
than familiarity with everyone else’s. But cross-training helps teams change
their plans on the fly because it allows team members to shift responsibilities
and step into each other’s roles. It also means that people know how the jobs
of different team members fit into the bigger picture. This gives teams a
better understanding of what kinds of changes to a plan are advisable — or even
possible — when a crisis strikes.
Consider Nasdaq’s fiasco in
light of these lessons. When the trading program and the validation check
didn’t match, Nasdaq managers decided to remove the validation check — the
equivalent of driving around the lowered gates of a railroad crossing. Rather
than stopping, they pushed forward. Rather than going through cycles of doing,
monitoring, and diagnosing, they charged ahead without diagnosis — without
understanding why the validation check stopped the trading.
And rather than knowing something about everybody else’s job, managers knew
very little about the programmers’ work — including how the validation check
was implemented. In fact, before that day, the manager who proposed bypassing
the validation check had never even heard of the validation
check.
There is a better way. Just
a few months before the Facebook IPO, the Kansas-based BATS stock
exchange faced a technical challenge similar to Nasdaq’s: managing the real-time
failure of an IPO. When a serious technical error emerged, managers at BATS
took a step back and cancelled the offering. They monitored the situation,
diagnosed the problem, and decided that stopping was the most prudent thing to
do. And though the cancelled IPO was embarrassing, BATS wasn’t censured by
regulators, nor did it cause hundreds of millions of dollars of losses to
investors.
It probably helped that the
CEO of the exchange was a technologist who understood the technical aspects of
the problem. Though the decision to stop ran counter to the original plan, it
prevented the kind of knock-on errors that Nasdaq caused when it charged into
the unknown.
https://hbr.org/2018/03/how-to-prepare-for-a-crisis-you-couldnt-possibly-predict