Leadership and behavior: Mastering the mechanics of reason and emotion
A Nobel Prize winner and a leading behavioral economist offer commonsense and counterintuitive insights on performance, collaboration, and innovation.
The confluence of economics, psychology, game theory, and neuroscience has
opened new vistas—not just on how people think and behave, but also on how
organizations function. Over the past two decades, academic insight and
real-world experience have demonstrated, beyond much doubt, that when companies
channel their competitive and collaborative instincts, embrace diversity, and
recognize the needs and emotions of their employees, they can reap dividends in
performance.
The pioneering work of Nobel laureate and
Harvard professor Eric Maskin in mechanism design theory represents one
powerful application. Combining game theory, behavioral economics, and
engineering, his ideas help an organization’s leaders choose a desired result
and then design game-like rules that can realize it by taking into account how
different independently acting, intelligent people will behave. The work of
Hebrew University professor Eyal Winter challenges and advances our understanding of what
“intelligence” really means. In his latest book, Feeling Smart: Why Our Emotions Are More Rational Than We
Think, Winter shows that although
emotions are thought to be at odds with rationality, they’re actually a key
factor in rational decision making.
In this discussion, led by McKinsey
partner Julia Sperling, a medical doctor and neuroscientist by training, and
McKinsey Publishing’s David Schwartz, Maskin and Winter explore some of the
implications of their work for leaders of all stripes.
The Quarterly: Should CEOs feel bad about following their gut
or at least listening to their intuition?
Eyal Winter: A CEO should be aware that whenever we make an
important decision, we invoke rationality and emotion at the same time. For
instance, when we are affected by empathy, we are more capable of recognizing
things that are hidden from us than if we try to use pure rationality. And, of
course, understanding the motives and the feelings of other parties is crucial
to engaging effectively in strategic and interactive situations.
Eric Maskin: I fully agree with Eyal, but I want to introduce a
qualification: our emotions can be a powerful guide to decision making, and in
fact they evolved for that purpose. But it’s not always the case that the
situation that we find ourselves in is well matched to the situation that our
emotions have evolved for. For example, we may have a negative emotional
reaction on meeting people who, at least superficially, seem very different
from us—“fear of the other.” This emotion evolved for a good purpose; in a
tribal world, other tribes posed a threat. But that kind of emotion can get in the
way of interactions today. It introduces immediate hostility when there
shouldn’t be hostility.
The Quarterly: That really matters for diversity.
Eyal Winter: One of the most important aspects of this
interaction is that rationality allows us to analyze our emotions and gives us
answers to the question of why we feel a certain way. And it allows us to be
critical when we’re judging our own emotions.
People have a perception about decision
making, as if we have two boxes in the brain. One is telling the other that it’s irrational; the two boxes fight it out over time; one prevails—and then we make decisions based on the prevailing side, or we shut down one of these boxes and make decisions based on the other one only. This is a very misleading way of describing how people make decisions. There is hardly any decision
that we take that does not involve the two things together. Actually, there’s a
lot of deliberation between rationality and emotion. And we also know that the
types of decisions that invoke perhaps the most intensive collaboration between
rationality and emotions are ethical or moral considerations. As a
neuroscientist, you know that one of the more important pieces of scientific
evidence for this is that much of this interaction takes place in the part of
the brain called the prefrontal cortex. When we confront people with ethical
issues, this part of the brain, the prefrontal cortex, is doing a lot of work.
The Quarterly: Yes, and we can track this with imaging
techniques. Indeed, neuroscientists keep fighting back when people try too
quickly to take insights from their area of science into business, and come up
with this idea of a “left-” and “right-brain” person, and exactly the boxes
that you are mentioning. Given your earlier comments, do you believe that, in a situation where we are emotional, we are capable of actually stepping back, looking at ourselves, and realizing that we are acting in an emotional way—and that this behavior might or might not be appropriate?
Eyal Winter: I think we are capable of doing it, and we are
doing it to some extent. Some people do it better, some people have more
difficulty. But just imagine what would have happened if we couldn’t do it. We probably wouldn’t have managed, in evolutionary terms. I think that the
mere fact that we still exist, you and me, shows that we have some capability
of controlling our emotions.
Eric Maskin: In fact, one interesting empirical trend that we
observe through the centuries is a decline of violence, or at least violence on
a per capita basis. The world is a much less dangerous place now than it was
100 years ago. The contrast is even greater when we go back further in time. And this is largely because of our ability, over time, to develop, first, an awareness of our hostile inclinations and, more importantly, to build mechanisms that protect us from those inclinations.
The Quarterly: Can you speak more about mechanism design—how
important it is that systems help individuals or groups act in ways that are desirable?
Eric Maskin: Mechanism design recognizes the fact that there’s
often a tension between what is good for the individual, that is, an
individual’s objectives, and what is good for society—society’s objectives. And
the point of mechanism design is to modify or create institutions that help
bring those conflicting objectives into line, even when critical information
about the situation is missing.
An example that I like to use is the
problem of cutting a cake. A cake is to be divided between two children, Bob
and Alice. Bob and Alice each have the objective of getting as much cake as possible.
But you, as the parent—as “society”—are interested in making sure that the
division is fair, that Bob thinks his piece is at least as big as Alice’s, and
Alice thinks her piece is at least as big as Bob’s. Is there a mechanism, a
procedure, you can use that will result in a fair division, even when you have
no information about how the children themselves see the cake?
Well, it turns out that there’s a very
simple and well-known mechanism to solve this problem, called the “divide and
choose” procedure. You let one of the children, say, Bob, do the cutting, but
then allow the other, Alice, to choose which piece she takes for herself. The
reason why this works is that it exploits Bob’s objective to get as much cake
as possible. When he’s cutting the cake, he will make sure that, from his point
of view, the two pieces are exactly equal because he knows that if they’re not,
Alice will take the bigger one. The mechanism is an example of how you can
reconcile two seemingly conflicting objectives even when you have no idea what
the participants themselves consider to be equal pieces.
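To make the logic concrete, here is a minimal Python sketch of the divide-and-choose procedure. The valuation functions and the grid of candidate cuts are hypothetical illustrations, not anything from the interview: Bob, who cuts, equalizes the two pieces by his own valuation, and Alice, who chooses, takes whichever piece she values more, so each child ends up with a piece worth at least half the cake in their own eyes.

```python
# A minimal sketch of the "divide and choose" mechanism with hypothetical valuations.
# The cake is modeled as the interval [0, 1]; each child values slices differently.

def piece_value(valuation, start, end, steps=1_000):
    """Approximate a child's value for the slice [start, end] with a simple Riemann sum."""
    width = (end - start) / steps
    return sum(valuation(start + (i + 0.5) * width) for i in range(steps)) * width

def divide_and_choose(bob_values, alice_values):
    # Bob cuts: he picks the cut point that makes the two pieces as close to
    # equal as possible *by his own valuation*, because Alice chooses next.
    candidate_cuts = [i / 100 for i in range(1, 100)]
    cut = min(candidate_cuts,
              key=lambda c: abs(piece_value(bob_values, 0, c) -
                                piece_value(bob_values, c, 1)))

    # Alice chooses: she simply takes the piece she values more.
    if piece_value(alice_values, 0, cut) >= piece_value(alice_values, cut, 1):
        return cut, ("left", 0, cut), ("right", cut, 1)   # (cut, Alice's piece, Bob's piece)
    return cut, ("right", cut, 1), ("left", 0, cut)

# Hypothetical valuations: Bob prizes the frosted right end, Alice is indifferent.
bob = lambda x: 1 + 2 * x   # valuation density rising toward the right
alice = lambda x: 1.0       # uniform valuation

cut, alice_piece, bob_piece = divide_and_choose(bob, alice)
print(f"Bob cuts at {cut:.2f}; Alice takes the {alice_piece[0]} piece, Bob keeps the {bob_piece[0]} piece.")
```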
The Quarterly: How has mechanism theory been applied by leaders
or organizations?
Eric Maskin: It’s found applications in many areas, including
within companies. Say that you’re a CEO and you want to motivate your employees
to work hard for the company, but you’re missing some critical information. In
particular, you can’t actually observe directly what the employees are doing.
You can observe the outcomes of their actions—sales or revenues—but the
outcomes may not correlate perfectly with the inputs—their efforts—because
other factors besides employees’ efforts may be involved. The problem for the
CEO, then, is how do you reward your employees for performance when you cannot
observe inputs directly?
Eyal Winter: Here’s an example: Continental Airlines was on the
verge of bankruptcy in the mid-’90s. An important reason was very bad on-time performance, which caused passengers to abandon the airline. Continental was
thinking both about the incentives for the individuals and, more importantly,
about on-time performance. It’s a “weak link” type of technology. If one worker
stalls, the entire process is stopped because it’s a sequential process, where
everybody’s dependent on everybody else.
What they came up with was the “go
forward” plan, which offered every employee in the company a $65 bonus for every month in which the company’s on-time-performance ranking was in the top five. Just $65, from the cleaners up to the CEO. It sounds ridiculous, because
$65 a month seems not enough money to incentivize people to work hard, but it
worked perfectly.
The main reason was that Continental
recognized that there’s an aspect to incentives which is not necessarily about
money. In this case, shirking would mean losing your own $65 bonus, but it would also mean feeling that you had caused damage to thousands of fellow employees who didn’t receive a bonus that month because you stalled. It was the understanding that incentives can also be social, emotional, and moral that made this mechanism design work perfectly.
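To illustrate the weak-link logic of the plan, here is a small hypothetical Python sketch; the $65 figure comes from the example above, while the headcount and everything else are invented. The point is that the bonus is paid to each employee only in months when the collective target is met, so shirking forgoes not just your own bonus but everyone’s.

```python
# Hypothetical illustration of a group ("weak link") bonus scheme. The bonus is
# paid to every employee only in months when the collective on-time target is
# met, so one person stalling forgoes the bonus for the whole workforce.

BONUS_PER_EMPLOYEE = 65        # dollars per month, as described above
NUM_EMPLOYEES = 40_000         # hypothetical headcount

def monthly_payout(ranked_in_top_five: bool) -> dict:
    """Return the individual bonus and the total paid across the workforce."""
    individual = BONUS_PER_EMPLOYEE if ranked_in_top_five else 0
    return {
        "individual_bonus": individual,
        "total_paid_to_workforce": individual * NUM_EMPLOYEES,
    }

# A good month versus a month in which the company missed the ranking:
print(monthly_payout(ranked_in_top_five=True))    # {'individual_bonus': 65, 'total_paid_to_workforce': 2600000}
print(monthly_payout(ranked_in_top_five=False))   # {'individual_bonus': 0, 'total_paid_to_workforce': 0}
```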
Eric Maskin: A related technique is to make employees
shareholders in the company. You might think that in a very large company an
individual employee’s effect on the share price might be pretty small—but as
Eyal said, there’s an emotional impact too. An employee’s identity is tied to
this company in a way that it wouldn’t be if she were receiving a straight
salary. And empirical studies by the labor economist Richard Freeman and others show that even large companies that make use of employee ownership have higher productivity.
The Quarterly: How would you advise leaders to facilitate group
collaborations, especially in organizations where people feel strong individual
ownership?
Eyal Winter: It’s again very much about incentives. One has to
find the right balance between joint interest and individual interest. For
example, businesses can overemphasize the role of individual bonuses. Bonuses
can be counterproductive when they generate aggressive competition in a way
which is not healthy to the organization.
There are interesting papers about team behavior, and we know that group bonuses, or bonus schemes that combine some individual points with some collective points, or that depend on the behavior of the group as a whole, often work much more effectively than individual bonuses alone. The balance between competition and cooperation is something that CEOs and managers have to think deeply about when choosing the right mechanism.
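One way to read that balance is as a weight between an individual component and a team component of the bonus. The sketch below is purely illustrative, with invented weights, scores, and pool size: a single parameter shifts pay between rewarding individual performance and rewarding the group’s result.

```python
# Hypothetical mixed bonus scheme: a weighted blend of individual and team performance.

def bonus(individual_score: float, team_score: float,
          weight_individual: float = 0.4, pool: float = 10_000) -> float:
    """Blend an individual score and a team score (both on a 0-1 scale) into a payout.
    weight_individual tunes the competition/cooperation balance the speakers describe."""
    blended = weight_individual * individual_score + (1 - weight_individual) * team_score
    return pool * blended

# A strong individual on a weak team versus an average individual on a strong team.
print(bonus(individual_score=0.9, team_score=0.3))   # 0.4*0.9 + 0.6*0.3 = 0.54 -> 5400.0
print(bonus(individual_score=0.6, team_score=0.9))   # 0.4*0.6 + 0.6*0.9 = 0.78 -> 7800.0
```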
The Quarterly: Can mechanisms that encourage collaboration also
be used to foster innovation?
Eric Maskin: Collaboration is a powerful tool for speeding up
innovation, because innovation is all about ideas. If you have an idea and I
have an idea, then if we’re collaborating we can develop the better idea and
ignore the worse idea. But if we’re working alone, then the worse idea doesn’t
get discarded, and that slows down innovation.
Collaboration in academic research shows
an interesting trend. If you look at the list of papers published in economics
journals 30 years ago, you’ll find that most of them were single-authored. Now
the overwhelming majority of such papers, probably 80 percent or more, are
multiauthored. And there’s a very good reason for that trend: in collaborative
research, the whole is more than the sum of the parts because only the best
ideas get used.
Eyal Winter: There’s another aspect of the working environment that is conducive to innovation, and that is whether the organization is open to risk taking by employees. If you come up with an innovative idea rather than a standard one, there is a much greater risk that nothing will eventually come of it. If people work in an environment that is not open to risk taking, or in which they have to fight for survival within their organization, they will be much less inclined to take the risks that lead to innovation.
The Quarterly: What about innovation in a world of vast amounts
of data and advanced analytics at our fingertips? Is there untapped potential
here for behavioral economics?
Eric Maskin: One exciting direction is randomized field
experiments. Up until now, most experiments in behavioral economics have been
done in the lab. That is, you put people in an artificial setting, the
laboratory, and you see how they behave. But when you do that, you always worry
about whether your insights apply to the real world.
And this is where randomized field
experiments come in. Now you follow people in their actual lives, rather than
putting them in the lab. That gives you less control over the factors
influencing behavior than you have in the lab. But that’s where big data help.
If you have large enough data sets—millions or billions of pieces of
information—then the lack of control is no longer as important a concern. Big
data sets help compensate for the messiness of real-life behavior.
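As a hypothetical sketch of the idea, the Python snippet below simulates a randomized field experiment: people are randomly assigned to a treatment or a control group, their outcomes are dominated by noisy real-world factors, and the simple difference in group means still recovers the treatment effect once the sample is large enough. All numbers and the effect size are invented for illustration.

```python
import random

# Hypothetical randomized field experiment: with enough observations, the simple
# treatment/control difference in means recovers the true effect despite noisy,
# uncontrolled real-world behavior.

random.seed(42)
TRUE_EFFECT = 0.5          # invented "real" effect of the intervention
NOISE_SCALE = 5.0          # real-world messiness dwarfs the per-person effect

def run_experiment(n_participants: int) -> float:
    treated, control = [], []
    for _ in range(n_participants):
        baseline = random.gauss(0, NOISE_SCALE)       # unobserved individual factors
        if random.random() < 0.5:                     # random assignment
            treated.append(baseline + TRUE_EFFECT)
        else:
            control.append(baseline)
    return sum(treated) / len(treated) - sum(control) / len(control)

for n in (1_000, 100_000, 1_000_000):
    print(f"n={n:>9,}: estimated effect = {run_experiment(n):+.3f} (true = {TRUE_EFFECT})")
```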
The Quarterly: Big data analytics is also tapping into artificial
intelligence. But can a computer be programmed to reason morally, as people
do—and how might that play out?
Eyal Winter: I think there will be huge advances in AI. But I don’t believe that it will perfectly or completely replace the interaction between human beings. People will still have to meet and discuss things, even with machines.
Eric Maskin: Humans
are instinctively moral beings and I don’t see machines as ever entirely
replacing those instincts. Computers are powerful complements to moral
reasoning, not substitutes for it.
http://www.mckinsey.com/business-functions/organization/our-insights/leadership-and-behavior-mastering-the-mechanics-of-reason-and-emotion?cid=other-alt-mkq-mck-oth-1612