What Happens When Machines Know More About People than People Do?
One of
the most controversial psychological studies in recent memory appeared last
month as an advance release of a paper that will be published in the Journal of
Personality and Social Psychology.
Yilun Wang and Michal Kosinski, both of the Graduate School of Business at
Stanford University, used a deep neural network (a computer program that mimics
complex neural interactions in the human brain) to analyze photographs of faces
taken from a dating website and detect the sexual orientation of the people
whose images were shown. The algorithm correctly distinguished between straight
and gay men 81 percent of the time. When it had five photos of the same person
to analyze, the accuracy rate rose to 91 percent. For women, the score was
lower: 71 percent and 83 percent, respectively. But the algorithm scored much
higher than its human counterparts, who guessed correctly, based on a single
image, only 61 percent of the time for men and 54 percent for women.
Of course,
methods like this could be used to out people who are closeted, or to falsely
identify them as gay or lesbian. The LGBT advocacy groups GLAAD and the Human
Rights Campaign jointly
condemned the study as inaccurate, pointing out
that it didn’t identify bisexuality and included no nonwhite faces. But as
the Washington Post noted, there are even more fundamental issues at stake.
Repressive governments, intolerant businesses, or blackmailers could use such
inferences to target individuals.
The
study also raises other issues besides sexual orientation — issues with at
least as much potential for invasion of privacy and for abuse. Algorithms like
this rely on machine learning. Through repetition and calibration, the computer
programs learn to match their models against reality and continually refine
those models until they achieve remarkable predictive accuracy. A program of this
sort could pick up attributes that human beings are entirely unaware of,
and glean immense insight about the people it observes. A world in which this is
prevalent becomes a world like that of the film Minority Report,
with people continually adjusting themselves to more “normal” behavior because
the systems around them track not only what they have done, but what they might
be capable of doing.
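To make that "repetition and calibration" concrete, here is a minimal, hypothetical sketch in Python of a classifier refining its parameters over many passes through data. Everything in it is invented (synthetic data, made-up "feature" values); it illustrates the general technique of supervised learning, not the method Wang and Kosinski actually used.

    # Minimal illustration of iterative "repetition and calibration":
    # a simple classifier refining its weights by gradient descent.
    # All data here is synthetic; this is not the study's actual model.
    import random
    import math

    random.seed(0)

    # Synthetic dataset: each example is a list of numeric "features"
    # (hypothetical measurements) and a 0/1 label.
    def make_example():
        features = [random.gauss(0, 1) for _ in range(5)]
        # The label depends on a hidden combination of features plus noise.
        score = 0.8 * features[0] - 0.5 * features[3] + random.gauss(0, 1)
        return features, 1 if score > 0 else 0

    data = [make_example() for _ in range(2000)]

    weights = [0.0] * 5
    bias = 0.0
    learning_rate = 0.05

    def predict(features):
        z = bias + sum(w * x for w, x in zip(weights, features))
        return 1.0 / (1.0 + math.exp(-z))   # probability of label 1

    # Repetition: many passes over the data, each pass nudging the model
    # toward whatever reduces its prediction error (calibration).
    for epoch in range(20):
        for features, label in data:
            error = predict(features) - label
            for i, x in enumerate(features):
                weights[i] -= learning_rate * error * x
            bias -= learning_rate * error

    accuracy = sum((predict(f) > 0.5) == bool(y) for f, y in data) / len(data)
    print(f"training accuracy after refinement: {accuracy:.2f}")

The point of the loop is the one the paragraph above describes: nothing clever happens on any single pass, but given enough examples and enough repetitions, the model's guesses become steadily better calibrated to the patterns hidden in the data.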
The
Stanford researchers, Wang and Kosinski, pointed this out in their paper: Algorithms could adopt, and then surpass, the human
capability to “accurately judge others’ character, psychological states, and
demographic traits from their faces,” they wrote. “People also judge, with some
minimal accuracy, others’ political views, honesty, sexual orientation, or even
the likelihood of winning an election.” Although the judgments are not always
accurate — you can’t always judge a website by its home page, after all — this
low accuracy is not due to a lack of cues, but to our general inexperience at
interpreting them. People who really try to learn to read other people often
gain proficiency, and a machine that had nothing else to do — and an infinite
number of images to work with — could probably become extraordinarily proficient.
And what if it weren't limited to static
pictures of faces? Imagine what statistical correlation could reveal about an
individual from a video: voice intonation, posture, movement patterns,
the ways in which people respond to one another's touch, the wrinkling of
noses and the raising of eyebrows, and so on. Suppose it could pick up these
signals from the camera on a laptop or the microphone on a smartphone. An
algorithm of this sort, analyzing facial expressions and voice intonation, could
track who is happy at work and who is secretly sending out resumes.
Many of these cues would probably be
completely invisible to human awareness, as imperceptible as a dog
whistle. But the sensors and algorithms would surely pick them up. Add
behavioral data, such as ATM withdrawal patterns or website visits, to these
cues, and you could develop an enormously accurate, and invasive, profile of any
individual. This field of behavior would become, in effect, a second skin: a way of
displaying your behavioral predispositions, more revealing to those who wield
the right software than your physical skin could ever be.
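As a purely hypothetical sketch of what fusing such sources might look like in code, consider the Python snippet below; every field name and value is invented for illustration, and no real product or dataset is implied.

    # Hypothetical sketch of fusing sensor-derived cues with behavioral records
    # into one profile. Every field name here is invented for illustration.
    video_cues = {"smile_rate": 0.12, "gaze_aversion": 0.40}            # from a camera
    audio_cues = {"pitch_variability": 0.75}                            # from a microphone
    behavior = {"late_night_atm_withdrawals": 3, "job_site_visits": 7}  # from records

    profile = {}
    for source in (video_cues, audio_cues, behavior):
        profile.update(source)

    # A downstream model would consume this combined vector; the point is that
    # each source alone seems innocuous, while the fusion is what becomes revealing.
    feature_vector = [profile[key] for key in sorted(profile)]
    print(feature_vector)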
The
Chinese government is reportedly considering a system to monitor how its
citizens behave. There is a pilot project under way in the city of Hangzhou, in
Zhejiang province in East China. “A person can incur black marks for
infractions such as fare cheating, jaywalking, and violating family-planning
rules,” reported the Wall
Street Journal in November 2016. “Algorithms would use
a range of data to calculate a citizen’s rating, which would then be used to
determine all manner of activities, such as who gets loans, or faster treatment
at government offices, or access to luxury hotels.” Implementing this system
across a country of 1.4 billion people would, as the Journal noted,
be an immense and probably impossible task; but even if the system is applied only
locally at first, its prowess, like that of all machine learning systems, would
increase over time.
Machine learning has the potential to reveal
much more, simply by correlating these observational details with other
studies of human behavior. Are you somewhere on the autism spectrum? Are you
susceptible to being bullied, or prone to bullying others? Do you have a
potential gambling addiction, even if you’ve never gambled? Did your parents
abandon you? Do your children get in trouble easily? Do you have a high or low
sex drive? Do you pretend to be extroverted when you’re really an introvert (or
vice versa)? Do you have some personality quirks that correlate at your company
with those of high-potential employees — or with employees who have been laid
off? Traits like these could be revealed to your company, your government, or
even your acquaintances, without your knowing that others have been
informed about them, or even that they exist.
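To make "correlation" concrete, here is a minimal, hypothetical Python sketch that measures how strongly one invented behavioral cue tracks an invented trait score; the pairing of typing rhythm with anxiety is made up purely for illustration, not drawn from any actual study.

    # Hypothetical sketch: measuring how strongly an observed behavioral cue
    # correlates with a trait reported in a separate study. Data is invented.
    import random
    import math

    random.seed(1)

    # Invented pairs: (typing-rhythm irregularity, reported anxiety score)
    observations = []
    for _ in range(500):
        cue = random.gauss(0, 1)
        trait = 0.6 * cue + random.gauss(0, 1)   # built-in association plus noise
        observations.append((cue, trait))

    def pearson(pairs):
        n = len(pairs)
        xs = [p[0] for p in pairs]
        ys = [p[1] for p in pairs]
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in pairs)
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    print(f"correlation between cue and trait: {pearson(observations):.2f}")

A correlation found this way says nothing about any individual with certainty, but at scale, even weak associations become usable, and that is exactly what makes the scenarios above plausible.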
I’m
reminded of a comment made by the late management thinker Elliott Jaques in 2001. His research into hierarchies and employee
capabilities, which is still unparalleled in my view, had led him to recognize
that people’s appropriate positions in an organization depend on their
cognitive capacity: The more complexity their minds could handle, the higher
they should rise. He had a way of detecting cognitive complexity, based on
watching a video of someone speak, analyzing the way he or she put words
together, and assigning that person to a “stratum” that should correspond to
their level in the hierarchy.
“You can analyze someone by looking at 15
minutes of videotape of them,” he told me. “And you can train someone to do the
analysis in a few hours.” But he refused to make the test and training publicly
available. "There are too damn many consultants around who would go along
to firms and say, ‘We can evaluate all your people.’ Then the subordinates
would have to face sessions where the boss says, ‘The psychologist tells me
you’re a Stratum II,’ and I’m not having it.”
The days when someone like Dr. Jaques could
say no are gone. Sooner rather than later, all of us will be susceptible to
machine-driven analysis. This will not just force us to think differently about
privacy. It will raise, in everyone’s mind, the question of what it means to
be human. Are we the sum of our traits? If so, are we capable of change? And if
those traits change, will that enter the considerations of those who captured
data about us at an earlier time?
Finally,
will we as individuals have access to the feedback about us, to be able, at last,
to see ourselves as others see us? Or will these analyses be used
as a vehicle for control, and who will the controllers be? These questions
haven’t yet been answered because people have only begun to ask them in the
context of real-world technological change. There are a few places where
answers are developing in the regulatory sphere (for example, the European
Union’s new General Data
Protection Regulation, or GDPR, which goes into effect in May
2018). There are bound to be rules governing how much data companies can hold,
and establishing legal boundaries for information misuse; our firm, PwC, has
an emerging
practice helping companies understand these laws and comply with
them. But formal rules will only go so far, and will inevitably vary from one
country to another. We also need to figure out cultural values, starting
with forgiveness. If everything about people is knowable, then much more
diverse behavior will have to be tolerated.
In politics, this is already happening;
elected government officials will be able to keep fewer and fewer secrets in
the years to come. For the rest of us, the first testing ground will probably
be the workplace, which is also where people generally try to present the best
side of themselves, for the sake of their livelihood and reputation.
There will be great benefit in the
understanding revealed by our second skin; a lot more will be
learned about human behavior, organizational dynamics, and probably the
health effects of habits. But if you’re alarmed, you have reason to be. Each of us
has a secret or two that we would like to keep from others. Often, it’s not
something we’ve done — it’s only something we’ve considered doing, or
might be drawn to do if we didn’t hold ourselves back. When our second skin,
the envelope of our behavior, is visible to the machines around us, then those
predispositions are no longer secret — at least, not to the machines. And thus
they become part of our external persona, our reputation, and even our work life,
whether we like it or not.
Art Kleiner
https://www.strategy-business.com/blog/What-Happens-When-Machines-Know-More-About-People-than-People-Do?gko=7a4c0&utm_source=itw&utm_medium=20171019&utm_campaign=resp