The promise and challenge of the age of artificial intelligence (Part I)
AI promises considerable economic benefits, even as it disrupts the world of work. These three priorities will help achieve good outcomes.
The time may have finally come for artificial intelligence (AI) after periods of
hype followed by several “AI winters” over the past 60 years. AI now powers so
many real-world applications, ranging from facial recognition to language
translators and assistants like Siri and Alexa, that we barely notice it. Along
with these consumer applications, companies across sectors are increasingly
harnessing AI’s power in their operations. Embracing AI promises considerable
benefits for businesses and economies through its contributions to productivity
growth and innovation. At the same time, AI’s impact on work is likely to be
profound. Some occupations as well as demand for some skills will decline,
while others grow and many change as people work alongside ever-evolving and
increasingly capable machines.
This briefing pulls
together various strands of research by the McKinsey Global Institute into AI
technologies and their uses, limitations, and impact. It was compiled for
the Tallinn Digital
Summit that took place
in October 2018. The briefing concludes with a set of issues that policy makers and business leaders will need to address to soften the disruptive transitions likely to accompany AI’s adoption.
1. AI’s time may have finally come, but more progress is needed
The term “artificial intelligence” was popularized at a conference at
Dartmouth College in the United
States in 1956 that brought together researchers on a broad range of topics,
from language simulation to learning machines.
Despite periods of
significant scientific advances in the six decades since, AI has often failed
to live up to the hype that surrounded it. Decades were spent trying to describe human intelligence precisely, and the progress made fell short of the early excitement. Since the late 1990s, however, technological progress
has gathered pace, especially in the past decade. Machine-learning algorithms
have progressed, especially through the development of deep learning and
reinforcement-learning techniques based on neural networks.
Several other factors
have contributed to the recent progress. Exponentially more computing capacity
has become available to train larger and more complex models; this has come
through silicon-level innovation including the use of graphics processing units
and tensor processing units, with more on the way. This capacity is being aggregated in hyperscale clusters and made increasingly accessible to users through the cloud.
Another key factor is
the massive amounts of data being generated and now available to train AI
algorithms. Some of the progress in AI has been the result of system-level innovations. Autonomous vehicles are
a good illustration of this: they take advantage of innovations in sensors,
LIDAR, machine vision, mapping and satellite technology, navigation algorithms,
and robotics all brought together in integrated systems.
Despite the progress,
many hard problems remain that will require more scientific breakthroughs. So
far, most of the progress has been in what is often referred to as “narrow
AI”—where machine-learning techniques are being developed to solve specific
problems, for example, in natural language processing. The harder issues are in
what is usually referred to as “artificial general intelligence,” where the
challenge is to develop AI that can tackle general problems in much the same
way that humans can. Many researchers consider this to be decades away from
becoming reality.
Deep learning and machine-learning techniques are driving AI
Much of the recent excitement about AI has been the result of advances in the field known as deep learning, a set of machine-learning techniques based on artificial neural networks. These AI systems loosely model the way that neurons interact in the brain.
Neural networks have many (“deep”) layers of simulated interconnected neurons,
hence the term “deep learning.” Whereas earlier neural networks had only three
to five layers and dozens of neurons, deep learning networks can have ten or
more layers, with simulated neurons numbering in the millions.
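To make the layering idea concrete, here is a minimal sketch in Python (our illustration, not from the briefing) of a single forward pass through a network with ten hidden layers of simulated neurons; the layer sizes and random weights are arbitrary placeholders.

```python
# Illustrative only: a "deep" stack of layers, each applying weights and a
# nonlinearity to the output of the layer below it.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical sizes: 64 input features, ten hidden layers of 128 neurons each,
# and a single output neuron.
layer_sizes = [64] + [128] * 10 + [1]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Each hidden layer transforms the activations of the previous layer.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]          # linear output layer

print(forward(rng.normal(size=(1, 64))))   # one pass through eleven layers
```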
There are several types
of machine learning: supervised learning, unsupervised learning, and
reinforcement learning, with each best suited to certain use cases. Most
current practical examples of AI are applications of supervised learning. In
supervised learning, often used when labeled data are available and the desired output variable is known, training data are used to help a system
learn the relationship of given inputs to a given output—for example, to
recognize objects in an image or to transcribe human speech.
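As a minimal illustration of supervised learning, the sketch below (the library and dataset are our choices, not the briefing's) fits a classifier to labeled examples and then checks how well it predicts the known output on held-out data.

```python
# Supervised learning sketch: learn a mapping from inputs to known labels.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)            # inputs and their known output labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                  # learn the input-to-output relationship
print("held-out accuracy:", model.score(X_test, y_test))
```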
Unsupervised learning
is a set of techniques used without labeled training data—for example, to
detect clusters or patterns, such as images of buildings that have similar
architectural styles, in a set of existing data.
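A comparable unsupervised sketch, again only illustrative: no labels are supplied, and a clustering algorithm groups the points on its own. The synthetic two-cluster data stand in for, say, feature vectors derived from images of buildings.

```python
# Unsupervised learning sketch: discover clusters in unlabeled data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "blobs" of points, with no labels attached.
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(clusters[:10], clusters[-10:])   # cluster assignments found from the data alone
```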
In reinforcement
learning, systems are trained by receiving virtual “rewards” or “punishments,”
often through a scoring system, essentially learning by trial and error.
Through ongoing work, these techniques are evolving.
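The trial-and-error loop of reinforcement learning can be sketched with a tiny tabular Q-learning example (our construction; the environment, rewards, and parameters are arbitrary): an agent in a five-state corridor earns a virtual reward only for reaching the rightmost state and gradually learns to move toward it.

```python
# Reinforcement learning sketch: learning by trial and error from virtual rewards.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
q = np.zeros((n_states, n_actions))   # learned value of each action in each state

for _ in range(500):                  # episodes of trial and error
    state = 0
    while state != n_states - 1:
        # Mostly exploit the current estimates, sometimes explore at random.
        action = rng.integers(n_actions) if rng.random() < 0.3 else int(q[state].argmax())
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0   # the virtual "reward"
        # Nudge the estimate toward reward plus discounted future value.
        q[state, action] += 0.1 * (reward + 0.9 * q[next_state].max() - q[state, action])
        state = next_state

print(q.round(2))   # learned values come to favor "move right" toward the rewarded state
```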
Limitations remain, although new techniques show promise
AI still faces many practical challenges, though new techniques are
emerging to address them. Machine learning can require large amounts of human
effort to label the training data necessary for supervised learning. In-stream
supervision, in which data can be labeled in the course of natural usage, and
other techniques could help alleviate this issue.
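As a rough illustration of in-stream supervision (the event format below is hypothetical), labels can be harvested from ordinary product usage, here whether a user clicked a recommendation, rather than from a separate manual labeling pass.

```python
# Hypothetical usage events: each recommendation shown to a user, and whether
# the user clicked it. The click itself supplies the training label.
interaction_log = [
    {"features": {"hour": 9,  "category": "shoes"}, "clicked": True},
    {"features": {"hour": 23, "category": "books"}, "clicked": False},
    {"features": {"hour": 14, "category": "shoes"}, "clicked": True},
]

# The usage stream becomes a labeled training set with no manual labeling pass.
X = [event["features"] for event in interaction_log]
y = [int(event["clicked"]) for event in interaction_log]
print(X, y)
```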
Obtaining data sets
that are sufficiently large and comprehensive to be used for training—for
example, creating or obtaining sufficient clinical-trial data to predict
healthcare treatment outcomes more accurately—is also often challenging.
The “black box”
complexity of deep learning techniques creates the challenge of
“explainability,” or showing which factors led to a decision or prediction, and
how. This is particularly important in applications where trust matters and
predictions carry societal implications, as in criminal justice applications or
financial lending. Some nascent approaches, including local interpretable
model-agnostic explanations (LIME), aim to increase model transparency.
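As one hedged example of this approach, the sketch below uses the open-source lime package together with scikit-learn (library and dataset choices are ours, not the briefing's) to show which features drove a single prediction of a black-box classifier.

```python
# LIME sketch: fit a simple local surrogate to explain one prediction.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
# Which features pushed this one prediction toward which class, and by how much.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```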
Another challenge is
that of building generalized learning techniques, since AI techniques continue
to have difficulties in carrying their experiences from one set of
circumstances to another. Transfer learning, in which an AI model is trained to
accomplish a certain task and then quickly applies that learning to a similar
but distinct activity, is one promising response to this challenge.
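A minimal transfer-learning sketch, assuming Keras and an ImageNet-pretrained network (our choices; the pretrained weights are downloaded on first use): the pretrained layers are frozen so their learned experience carries over, and only a small new output head is trained for the new task.

```python
# Transfer learning sketch: reuse a pretrained network for a new, related task.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet"
)
base.trainable = False                      # keep the pretrained layers fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new head for a binary task
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(new_task_images, new_task_labels, epochs=5)  # train only the new head
```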
2. Businesses stand to benefit from AI
While AI is
increasingly pervasive in consumer applications, businesses are beginning to
adopt it across their operations, at times with striking results.
AI’s potential cuts across industries and functions
AI can be used to improve business performance in areas including predictive
maintenance, where deep learning’s ability to analyze large amounts of
high-dimensional data from audio and images can effectively detect anomalies in
factory assembly lines or aircraft engines. In logistics, AI can optimize
routing of delivery traffic, improving fuel efficiency and reducing delivery
times. In customer service management, AI has become a valuable tool in call
centers, thanks to improved speech recognition. In sales, combining customer
demographic and past transaction data with social media monitoring can help
generate individualized “next product to buy” recommendations, which many
retailers now use routinely.
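To make the predictive-maintenance example above concrete, here is a hedged sketch (synthetic data and an architecture of our choosing) in which a small autoencoder learns to reconstruct normal sensor readings and flags readings it reconstructs poorly as anomalies.

```python
# Anomaly detection sketch for predictive maintenance, on synthetic sensor data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, (1000, 32))   # stand-in for healthy-machine readings
faulty = rng.normal(4.0, 1.0, (10, 32))     # stand-in for anomalous readings

# A small autoencoder trained only on "normal" data learns to reconstruct it well.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(8, activation="relu"),   # compressed representation
    tf.keras.layers.Dense(32),                     # reconstruction of the input
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=10, verbose=0)

def reconstruction_error(x):
    return np.mean((autoencoder.predict(x, verbose=0) - x) ** 2, axis=1)

# Readings reconstructed far worse than normal data are flagged as anomalies.
threshold = np.percentile(reconstruction_error(normal), 99)
print("flagged:", int((reconstruction_error(faulty) > threshold).sum()), "of", len(faulty))
```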
Such practical AI use
cases and applications can be found across all sectors of the economy and multiple business functions, from marketing to supply chain
operations. In many of these use cases, deep learning techniques primarily add
value by improving on traditional analytics techniques.
Our analysis of more
than 400 use cases across 19 industries and nine business functions found
that AI improved on traditional analytics techniques in 69 percent of
potential use cases. In only 16 percent of
AI use cases did we find a “greenfield” AI solution that was applicable where
other analytics methods would not be effective. Our research estimated that
deep learning techniques based on artificial neural networks could generate as
much as 40 percent of the total potential value that all analytics techniques
could provide by 2030. Further, we estimate that several of the deep learning
techniques could enable up to $6 trillion in value annually.
So far, adoption is uneven across companies and sectors
Although many
organizations have begun to adopt AI, the pace and extent of adoption has been
uneven. Nearly half of respondents in a 2018 McKinsey survey on AI adoption say
their companies have embedded at least one AI capability in their business
processes, and another 30 percent are piloting AI. Still, only 21 percent say their organizations have embedded AI in several parts
of the business, and barely 3 percent
of large firms have integrated AI across their full enterprise workflows.
Other surveys show that
early AI adopters tend to think about these technologies more expansively, to
grow their markets or increase market share, while companies with less
experience focus more narrowly on reducing costs. Highly digitized companies
tend to invest more in AI and derive greater value from its use.
At the sector level,
the gap between digitized early adopters and others is widening. Sectors that rank highly on MGI’s Industry Digitization Index, such as high tech, telecommunications, and financial services, are leading AI adopters and have the most ambitious AI investment plans. As these firms expand AI adoption and acquire more data and
AI capabilities, laggards may find it harder to catch up.
Several challenges to adoption persist
Many companies and sectors lag in AI adoption. The barriers executives cite most often include developing an AI strategy with clearly defined benefits, finding talent with the appropriate skill sets, overcoming functional silos that constrain end-to-end deployment, and a lack of ownership of and commitment to AI on the part of leaders.
On the strategy side,
companies will need to develop an enterprise-wide view of compelling AI
opportunities, potentially transforming parts of their current business
processes. Organizations will need robust data capture and governance processes as well as modern digital capabilities, and they will need to be able to build or access the requisite infrastructure. Even more challenging will be overcoming the “last mile” problem of ensuring that the superior insights provided by AI are actually embedded in the behavior of the people and the processes of an enterprise.
On the talent front,
much of the construction and optimization of deep neural networks remains an
art requiring real expertise. Demand for these skills far outstrips supply;
according to some estimates, fewer than 10,000 people have the skills necessary to tackle serious AI
problems, and competition for them is fierce. Companies considering the option
of building their own AI solutions will need to consider whether they have the
capacity to attract and retain workers with these specialized skills.
CONTINUES IN PART II