Debiasing the corporation: An interview with Nobel laureate Richard Thaler
The University of Chicago professor explains how executives can battle back against biases that can affect their decision making.
Whether standing at the front of a lecture hall at the University of Chicago or
sharing a Hollywood soundstage with Selena Gomez, Professor Richard H. Thaler
has made it his life’s work to understand and explain the biases that get in
the way of good decision making.
In 2017, he was awarded the Nobel Prize
for four decades of research that incorporates human psychology and social
science into economic analysis. Through his lectures, writings, and even a
cameo in the feature film The Big Short, Thaler introduced
economists, policy makers, business leaders, and consumers to phrases like
“mental accounting” and “nudging”—concepts that explain why individuals and
organizations sometimes act against their own best interests and how they can
challenge assumptions and change behaviors.
In this edited interview with McKinsey’s
Bill Javetski and Tim Koller, Thaler considers how business leaders can apply
principles of behavioral economics and behavioral finance when allocating
resources, generating forecasts, or otherwise making hard choices in uncertain
business situations.
Write stuff down
One of the big problems that companies
have, in getting people to take risk, is something called hindsight bias—that
after the fact, people all think they knew it all along. So if you ask people
now, did they think it was plausible that we would have an African-American
president before a woman president, they say, “Yeah, that could happen.”
All you needed was the right candidate to
come along. Obviously, one happened to come along. But, of course, a decade ago
no one thought that that was more likely. So, we’re all geniuses after the
fact. Here in America we call it Monday-morning quarterbacking.
One of the problems is that CEOs exacerbate this, because they have hindsight bias. When a good decision is made—good meaning ex ante, before things have played out—the CEO will say, “Yeah, great. Let’s go for that gamble. That looks good.”
Two years later, or five years later, when
things have played out, and it turns out that a competitor came up with a better
version of the same product that we all thought was a great idea, then the CEO
is going to remember, “I never really liked this idea.”
One suggestion I make to my students, and
I make this suggestion about a lot of things, so this may come up more than
once in this conversation, is “write stuff down.” I have a colleague who says,
“If you don’t write it down, it never happened.”
What does writing stuff down do? I
encourage my students, when they’re dealing with their boss—be it the CEO or
whatever—on a big decision, not whether to buy this kind of computer or that
one but career-building or -ending decisions, to first get some agreement on the goals—what are we trying to achieve here—and on the assumptions behind why we are going to try this risky gamble, this risky investment. (We wouldn’t want to call it a gamble.) Essentially, memorialize the fact that the CEO and the other people who have approved this decision all share the same assumptions: that no competitor
has a similar product in the pipeline, that we don’t expect a major financial
crisis.
You can imagine all kinds of good
decisions taken in 2005 were evaluated five years later as stupid. They weren’t
stupid. They were unlucky. So any company that can learn to distinguish between
bad decisions and bad outcomes has a leg up.
Forecasting follies
We’re doing this interview in midtown New
York, and it’s reminding me of an old story. Amos Tversky, Danny Kahneman, and
I were here visiting the head of a large investment company that both managed
money and made earnings forecasts.
We had a suggestion for them. Their
earnings forecasts are always a single number: “This company will make $2.76
next year.” We said, “Why don’t you give confidence limits: it’ll be between
$2.50 and $3.00—80 percent of the time.”
They just dropped that idea very quickly.
We said, “Look, we understand why you wouldn’t want to do this publicly. Why
don’t you do it internally?”
Duke does a survey of CFOs, I think, every
quarter. One of the questions they ask them is a forecast of the return on the
S&P 500 for the next 12 months. They ask for 80 percent confidence limits.
The outcome should lie between their high and low estimate 80 percent of the
time. Over the decade that they’ve been doing this, the outcome occurred within
their limits a third of the time, not 80 percent of the time.
The reason is their confidence limits are
way too narrow. There was an entire period leading up to the financial crisis
where the median low estimate, the worst-case scenario, was zero. That’s
hopelessly optimistic. We asked the authors, “If you know nothing, what would a
rational forecast look like, based on historical numbers?”
It would be plus 30 percent on the upside,
minus 10 percent on the downside. If you did that, you’d be right 80 percent of
the time—80 percent of the outcomes would occur in your range. But, think about
what an idiot you would look like. Really? That’s your forecast? Somewhere between plus 30 and minus 10? It makes you look like an idiot.
It turns out it just makes you look like
you have no ability to forecast the stock market, which they don’t; nor does
anyone else. So providing numbers that make you look like an idiot is accurate.
Write stuff down. For anybody making repeated forecasts, there should be a record. If you have a record, then you can go back. This takes some patience.
But keeping track will bring people down to earth.
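As a hypothetical illustration of what keeping such a record could look like, here is a minimal sketch: a log of each stated 80 percent range and the realized outcome, scored for how often the outcome actually fell inside the range. The names and sample numbers are invented, not figures from the Duke survey.

```python
# Hypothetical forecast log and calibration check (invented data and field names).
# The idea: record every stated 80 percent range, then score how often the
# realized outcome actually fell inside it. Well-calibrated ranges score ~0.80.

from dataclasses import dataclass

@dataclass
class ForecastRecord:
    forecaster: str
    low: float      # stated lower bound of the 80% range (worst case)
    high: float     # stated upper bound of the 80% range (best case)
    outcome: float  # realized value, filled in after the fact

def coverage(records):
    """Share of forecasts whose outcome landed inside the stated range."""
    hits = sum(1 for r in records if r.low <= r.outcome <= r.high)
    return hits / len(records)

# Invented example: three 12-month S&P 500 return forecasts, in percent.
log = [
    ForecastRecord("CFO A", low=0.0,  high=8.0,  outcome=12.3),  # missed high
    ForecastRecord("CFO B", low=-5.0, high=15.0, outcome=12.3),  # hit
    ForecastRecord("CFO C", low=2.0,  high=6.0,  outcome=-4.1),  # missed low
]

print(f"Coverage: {coverage(log):.0%}")  # 33%, far below the 80% the ranges claim
```

Even a crude log like this makes the hindsight-proof comparison possible: the range is on record before the outcome is known.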
Nudging the corporation
The organizing principle of nudge is
something we call choice architecture. Choice architecture is something that
can apply in any company. How are we framing the options for people? How is
that influencing the choices that they make? It can go anywhere, starting from the mainstream ideas of nudge—say, making employees healthier.
One of the nice things about our (I call it) new building at Chicago Booth—I think it must be getting close to 15 years old, but to us it’s still a new building—is that the architect divided the faculty across three floors: the third, fourth, and fifth.
There are open stairwells that connect
those floors. That does two things. One is it gives people a little more exercise, because those stairs are very inviting—in a way that the stairwells that serve as fire exits are just the opposite.
Also it makes us feel more connected. You
can hear people. I’m on the fourth floor, so in the middle. If I walk down the
hall, I may have a chance encounter not just with the people on my floor but
even with people on the adjacent floors, because I’ll hear somebody’s voice and want to go talk to that guy.
There are lots of ways you can design
buildings that will make people healthier and make them walk more. I wrote a
little column about this in the New York Times, about nudging
people by making stuff fun. There was a guy in LA [Los Angeles] who wrote to me
and said that they took this seriously.
They didn’t have an open stairwell in
their building, but they made the stairwell that they did have more inviting.
They put in music and let everybody nominate two songs. They put in blackboards where people could put up decorations and funny notes. I was reading
something recently about another building that’s taken this idea.
Since you have to use a card to get in and
out of the doors, they can keep track of who’s going in and out. So they can
give you feedback on your phone or your Fitbit on how many steps you’ve done in the stairwells. But the same thinking applies to every decision the firm is making.
On diversity
There’s lots of talk about diversity these
days. We tend to think about that in terms of things like racial diversity and
gender diversity and ethnic diversity. Those things are all important. But it’s
also important to have diversity in how people think.
When I came to Chicago in 1995, they asked
me to help build up a behavioral-science group. At the time, I was one of two
senior faculty members. The group was teetering on the edge of extinction.
We’re up close to 20 now. As we’ve been growing, I’ve been nudging my
colleagues.
Sometimes we’ll see a candidate and we’ll
say, “That guy doesn’t seem like us.” We don’t mean that personally. We mean that the research is different from the research we do. Of course, there
is a limit. We don’t want to hire somebody studying astrophysics in a
behavioral-science department. Though we could use the IQ boost. But I keep
saying, “No, we want to hire people that think differently from how we do,
especially junior hires. Because we want to take risks.” That’s the place to
take risks. That person does things that are a little different from us.
Either that candidate will convince us
that that research is worthwhile to us, or will maybe come closer to what we
do, or none of the above, and he or she will leave and go somewhere else. None
of those are terrible outcomes. But you go into a lot of companies where
everybody looks the same and they all went to the same schools. They all think
the same way. And you don’t learn.
There’s a quote—I may garble it—from
Alfred P. Sloan, the longtime head of GM, ending some meeting, saying something like,
“We seem to be all in agreement here, so I suggest we adjourn and reconvene in
a week, when people have had time to think about other ideas and what might be
wrong with this.”
I think strong leaders, who are
self-confident and secure, who are comfortable in their skin and their place,
will welcome alternative points of view. The insecure ones won’t, and it’s a
recipe for disaster. You want to be in an organization where somebody will tell the boss when the boss is about to do something stupid.
Figure out ways to give people feedback,
write it down, and don’t let the boss think that he or she knows it all. Figure
out a way of debiasing the boss. That’s everybody’s job. You’d like it to be
the boss’s job, but some bosses are not very good at it.
Making better decisions through technology
We’re just scratching the surface on what
technology can do. Some applications in the healthcare sector, I think, are
going to be completely game changing. Take diabetes, for example, a major cause
of illness and expense. [For type 2 diabetes], most of the problem is people
don’t take their medicine.
If they improved their diet and took their
medicine, most of their problems would go away. We basically now have the
technology to insert something in your body that will constantly measure your
blood sugar and administer the appropriate drugs. Boom, we don’t have a
compliance problem anymore, at least on the drug side.
There’s lots of fear about artificial
intelligence. I tend to be optimistic. We don’t have to look into the future to
see the way in which technology can help us make better decisions. If you think
about how banks decide whom to give a credit card and how much credit to give
them, that’s been done using a simple model for, I think, 30 years at least.
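A rough sketch of what such a simple model can look like follows—the weights, features, and cutoffs are invented for illustration, not any bank’s actual scoring rule: a few applicant attributes are combined into one number, and a fixed threshold drives the approval and the credit limit.

```python
# Hypothetical toy scoring rule, in the spirit of the "simple model" described
# above. All weights, caps, and cutoffs are invented for illustration.

def credit_score(income, debt_ratio, years_of_history, late_payments):
    """Combine a few applicant attributes into a single score with fixed weights."""
    return (
        0.4 * min(income / 100_000, 1.0)         # income, capped at $100k
        - 0.3 * debt_ratio                       # existing debt burden
        + 0.2 * min(years_of_history / 10, 1.0)  # length of credit history
        - 0.5 * min(late_payments / 5, 1.0)      # recent delinquencies
    )

def decision(score, approve_cutoff=0.25):
    """Approve above the cutoff; scale the credit limit with the score."""
    if score < approve_cutoff:
        return "decline", 0
    return "approve", int(10_000 * score)

s = credit_score(income=60_000, debt_ratio=0.2, years_of_history=6, late_payments=0)
print(decision(s))  # ('approve', 3000) with these invented inputs
```

The point is not the particular weights; it is that an explicit, consistent rule like this can be written down, audited, and checked against outcomes, which an interviewer’s gut feel cannot.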
What I can see is that the so-called Moneyball revolution in sports—which is gradually creeping into every sport—is making less progress on the human-resources side than it should. I think that’s the place where we
could see the biggest changes over the next decade.
Because job interviews are, to a first
approximation, useless—at least the traditional ones, where they ask you things
like, “What do you see yourself doing in ten years, or what’s your biggest
weakness?” “Oh, I’m too honest. I work too hard. Those are my two biggest
weaknesses.”
So-called structured interviews can be better, where we’re trying to change the chitchat into a test, to whatever extent
you can do that. We wouldn’t hire a race-car driver by giving them an
interview. We’d put them in a car, or better yet, because it would be cheaper,
behind a video game and see how they drive.
It’s harder to see how people make
decisions. But there’s one trading company I used to know pretty well. They
would recruit the smartest people they could find right out of school. They
didn’t care if they knew anything about options. But they would get them to bet on everything, in amounts of money that, for the kids, would be enough that
they would think about it. So there’s a sporting event tonight, and they’d all
have bets on it. What were they trying to do? They were trying to teach them
what it feels like to size up a bet, what it feels like to lose and win. This was
part of the training and part of the evaluation.
That was the job they were learning how to
do, how to be traders. Now that job probably doesn’t exist anymore, but there’s
some other job that exists. Figure out a way of mimicking some aspects of that,
and test it, and get rid of the chitchat. Because all that tells you is whether
you’re going to like the person, which may be important if it’s somebody you’re
going to be working with day and night. If a doctor is hiring a nurse that’s
going to work in a small office, it’s important that you get along. But if
you’re hiring somebody that’s going to come to work in a big, global company,
the chance that the person interviewing that candidate will work with that
candidate is infinitesimal. So we don’t really care what the interviewer thinks
of the interviewee. We care whether the interviewee will add something to the
organization.
By Bill Javetski and Tim Koller
https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/debiasing-the-corporation-an-interview-with-nobel-laureate-richard-thaler