Daniel Kahneman’s Strategy for How Your Firm Can Think Smarter
Nobel economics laureate and psychologist Daniel Kahneman, considered the father of behavioral economics, retired from his teaching position at Princeton a few years ago to co-found a consulting firm in New York. In a talk at the recent Wharton People Analytics
Conference, he said of his consulting experience that he had “expected to
be awed” by the quality of the decision-making in organizations “that need to
make profits to survive in a competitive world.”
“I have not been awed,” he stated.
“You look at large organizations that are supposed to be
optimal, rational. And the amount of folly in the way these places are run, the
stupid procedures that they have, the really, really poor thinking you see all
around you, is actually fairly troubling,” he said, noting that there is much
that could be improved.
Figuring out how to make the act of decision-making
“commensurate with the complexity and importance of the stakes” is a huge
problem, in Kahneman’s view, to which the business world does not devote much
thought. At the conference he described how significant progress can be made in
making organizations “more intelligent.”
The Problem with People
If individuals routinely make poor decisions, as Kahneman says, why is that the case? The answer lies in behavioral economics, a field that explains why people often make irrational financial choices and don’t always
behave the way standard economic models predict. (Kahneman explained much of his work in his widely lauded 2011 international bestseller Thinking, Fast and Slow.)
Behavioral economists believe that human beings are unknowingly
hamstrung by overconfidence, limited attention, cognitive biases and other
psychological factors which inevitably cause errors in judgment. These factors
affect everything from how we invest in stocks, to how we respond to marketing
offers, to how we choose which sandwich to buy for lunch.
“We’re fundamentally over-confident in the sense that we jump to
conclusions — and to complete, coherent stories — to create interpretations,”
said Kahneman. “So we misunderstand situations, spontaneously and
automatically. And that’s very difficult to control.” Furthermore, he said, much
of human error is not even attributable to a systematic cause, but to “noise.”
“When people think about error, we tend to think about biases…. But in fact, a
lot of the errors that people make is simply noise, in the sense that it’s
random, unpredictable, it cannot be explained.”
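The distinction Kahneman draws can be made concrete with a tiny simulation (the numbers below are invented for illustration): a biased judge errs in a consistent direction, while a noisy judge scatters unpredictably around the truth.

```python
# Bias vs. noise in repeated judgments of a known true value.
# Bias = mean error (systematic); noise = spread of errors (random).
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0

# A biased judge: consistently overestimates by ~10, with little scatter.
biased = [TRUE_VALUE + 10 + random.gauss(0, 1) for _ in range(1000)]
# A noisy judge: right on average, but scatters widely.
noisy = [TRUE_VALUE + random.gauss(0, 10) for _ in range(1000)]

for name, judgments in [("biased", biased), ("noisy", noisy)]:
    errors = [j - TRUE_VALUE for j in judgments]
    print(name,
          "mean error:", round(statistics.mean(errors), 1),
          "spread (sd):", round(statistics.stdev(errors), 1))
```

The biased judge has a large mean error but small spread; the noisy judge has a near-zero mean error but large spread. Averaging many cases cancels noise but not bias, which is why the two errors call for different remedies.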
He cited some disturbing evidence about the professional
judgment of experts: “You put the same X-ray in front of radiologists, and
about 20% of the time in some experiments they don’t reach the same diagnosis.”
From his consulting work, Kahneman offered an example from a large financial institution where loan approvals and insurance judgments are routinely made. The decisions, involving hundreds of thousands of dollars, frequently hinge on the opinion of a single individual. Kahneman
mounted an experiment in which team leaders were asked by what percentage they thought the decisions of two different professionals would vary if each evaluated the same case.
“Many people give the
same [guess]: somewhere between 5% and 10%,” said Kahneman. “But the answer is
between 40% and 60%. It’s an order of magnitude more. It’s completely different
from what everybody expects.” He noted that at the organization in question,
there was “a huge noise problem” of which the leaders were completely unaware.
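The measurement Kahneman describes (give two professionals the same case and see how far apart their judgments land) reduces to a simple relative-difference statistic. This sketch assumes judgments expressed as dollar amounts; the figures are hypothetical:

```python
# Noise between two professionals' judgments of the same case,
# expressed as |a - b| divided by the average of the two judgments.

def relative_difference(a: float, b: float) -> float:
    """Absolute difference between two judgments, relative to their mean."""
    return abs(a - b) / ((a + b) / 2)

# Hypothetical: two underwriters price the same risk.
print(round(relative_difference(9500, 16500), 2))  # 0.54, i.e. 54%
```

A divergence in this range is what Kahneman reports finding (40% to 60%), an order of magnitude above the 5% to 10% that team leaders expected.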
The problem cannot be chalked up to the relative inexperience of
some employees, according to Kahneman: “What was very surprising, at least in
our experiments, is that experienced professionals were as variable as
novices.”
Could it help matters to have experts arrive at decisions
together, as a group? Even if this were feasible in organizations, which it is
often not, there are pitfalls here as well. Kahneman said that according to
social psychology, when a group of people discusses a case there are “huge
conformity pressures” that lead participants to radically underestimate the
amount of disagreement among them.
In the face of what seem like daunting odds that business
decision-making can be improved, what’s a company to do?
The Cure: Algorithms
Kahneman’s prescription is for organizations to temper human
judgment with “disciplined thinking” through the use of algorithms. The
indications from the research are unequivocal, he said: When it comes to
decision-making, algorithms are superior to people. “Algorithms are noise-free.
People are not,” he said. “When you put some data in front of an algorithm, you
will always get the same response at the other end.”
A good algorithm does not require a massive amount of data, said
Kahneman. (He said this was “a secret not widely known in the financial
industry.”) Let’s say you are evaluating the financial stability of firms, he
said, for example to give them a loan or insure them against financial risk.
His recommendation is to sit down with a committee of people who are
knowledgeable about the situation and make a list of five or six dimensions.
More than eight is probably unnecessary. “If you create good ranking scales on
those dimensions, and give them equal weight, you will typically do just as
well as with a very sophisticated statistical algorithm.” And it will do just as well as, and typically much better than, experts on average, he added.
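The equal-weight procedure Kahneman recommends can be sketched in a few lines. The dimension names and the 1-to-5 scale below are hypothetical illustrations, not from his talk:

```python
# Equal-weight scoring model: rate each dimension independently on a
# fixed scale, then combine with equal weights. No fitted statistical
# model (and no large dataset) is required.

# Hypothetical dimensions for assessing a firm's financial stability.
DIMENSIONS = ["leverage", "cash_flow", "management",
              "market_position", "earnings_stability"]

def equal_weight_score(ratings: dict) -> float:
    """Average of per-dimension ratings, each on a 1-5 scale."""
    for dim in DIMENSIONS:
        if not 1 <= ratings[dim] <= 5:
            raise ValueError(f"rating for {dim} must be between 1 and 5")
    return sum(ratings[dim] for dim in DIMENSIONS) / len(DIMENSIONS)

firm = {"leverage": 4, "cash_flow": 3, "management": 5,
        "market_position": 2, "earnings_stability": 4}
print(equal_weight_score(firm))  # 3.6
```

The key design choice is deliberate simplicity: because every dimension gets the same weight, the same inputs always yield the same score, which is exactly the noise-free property Kahneman is after.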
Kahneman, who is Israeli-American and now in his 80s, recalled
inventing a prototype of this procedure when, as a young Israeli platoon
commander with a psychology degree, he was asked to set up a new interviewing
system for the army. Though met with some resistance at the time, he said, the
system he designed is actually still in use by the Israeli armed forces.
Kahneman identified six dimensions that could be rated one at a
time, among them punctuality, sociability, conscientiousness, “something called
masculine pride” (he noted that these were interviews for combat units, 60
years ago), and others. “Very important to rate things one at a time,” he
observed. “That way you don’t form a global impression of the person, but a
differentiated impression of [each] topic. It controls what psychologists call
the halo effect.”
The entire selection process was to consist of generating the
six scores and adding them up. When many interviewers complained — one saying
“you’re turning us into robots” — Kahneman said he added a final “global
rating” step as a concession to human intuition: “Then, close your eyes and
think what kind of a soldier this person is going to be. Put down a rating
between 1 and 5.”
When the new interviewing system was validated against actual
performance a few months later, said Kahneman, it turned out that the final
global rating was very accurate. In fact, it was much more accurate than any of
the single dimensional ratings. “But there is a lesson to be learned,” he
stated. Previously, candidates were interviewed with only a
global rating, and “it was worthless.”
“Global rating is very good — and intuition is very good —
provided that you have [first] gone through the exercise of systematically and
independently evaluating the constituents of the problem,” he explained. “Then
when you close your eyes and generate an intuitive, comprehensive image of the
case, you will actually add information.”
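The interview procedure he describes (independent per-dimension scores first, a deliberate intuitive global rating recorded last) might be sketched as follows. Punctuality, sociability, and conscientiousness are from the talk; the remaining dimension names and the data structure are assumptions for illustration:

```python
# Sketch of Kahneman's interview procedure: score each dimension one at
# a time (to control the halo effect), and only after all six are rated
# record a single intuitive "global" rating.

DIMENSIONS = ["punctuality", "sociability", "conscientiousness",
              "dimension_4", "dimension_5", "dimension_6"]

def interview_score(dimension_ratings: dict, global_rating: int) -> dict:
    """Sum the six independent ratings, then attach the intuitive rating."""
    missing = set(DIMENSIONS) - set(dimension_ratings)
    if missing:
        raise ValueError(f"rate every dimension first: {missing}")
    if not 1 <= global_rating <= 5:
        raise ValueError("global rating must be between 1 and 5")
    return {
        "dimension_total": sum(dimension_ratings[d] for d in DIMENSIONS),
        "global_rating": global_rating,  # recorded last, "eyes closed"
    }

candidate = {d: 3 for d in DIMENSIONS}
print(interview_score(candidate, global_rating=4))
```

The ordering is the point: the code refuses to accept a global rating until every dimension has been scored, mirroring Kahneman's finding that intuition adds information only after the systematic evaluation is done.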
Implementing this type of procedure in any organization can of
course meet with resistance from employees, said Kahneman. “You have to do it
with a light touch, because otherwise people will hate you and not comply. But
if you do it in a way that they view as helping them perform their tasks, it’s
not too bad.” In his experience, he said, if you put effort into guiding people
to look at information in a particular way, they actually find that it helps
them do a good job.
How do company leaders respond when they are first told they
should implement algorithms to guide their experts’ opinions? “Not very well,”
he said. But “when you tell team leaders that there is 50% variability when
they expected 5% or 10%, then they’re willing to take an algorithm.”
Will Artificial Intelligence Replace Human Intuition?
Kahneman was asked about the growing role of artificial intelligence (AI) in business thinking, in particular about powerful algorithms that are
increasingly performing tasks previously done by
accountants, consultants and managers. His response surprised many in the
audience: “I think I’m quite worried about it.”
His concern is that as AI
becomes more sophisticated, it is moving beyond simply helping humans
achieve disciplined thinking to actually being able to execute professional
judgment on its own. This will be “very threatening to the leaders of
organizations,” he said. “Because once you have decision analysis, anybody can
outguess the leader…. How will this affect the power structure?”
He cited as an eye-opening AI milestone the fact that earlier
this year, a Google computer program beat the world champion of the popular
Asian game Go — who had won 18 international titles — four games out of five.
“Go is supposed to be the example of … an
intuitive game…. The real experts cannot explain exactly how they reach their
conclusions: it’s too complicated.” But, said Kahneman, the Google team had
built software based on 150,000 actual human matches, and the software then
improved by playing the game against itself about 30 million times. That, he
said, is how you end up with a program that is superior to the world champion:
the program had access to more information than one human being possibly could.
“All of this depends on the availability of data: this is how
intuition develops,” he noted. “We develop intuition with the data we collect
in a lifetime. AI will be able to do better. How will we live with that?”
http://knowledge.wharton.upenn.edu/article/nobel-winner-daniel-kahnemans-strategy-firm-can-think-smarter/