What AI can and can’t do (yet) for your business: PART I
Artificial intelligence is a moving target. Here’s how to take better aim.
Artificial intelligence (AI) seems to be everywhere. We experience it at home
and on our phones. Before we know it—if entrepreneurs and business innovators
are to be believed—AI will be in just about every product and service we buy and use. In addition,
its application to business problem solving is growing by leaps and bounds. At the same
time, concerns about AI’s implications are rising: we worry about the impact of
AI-enabled automation on the workplace, employment, and society.
A reality sometimes lost amid both the
fears and the headline triumphs, such as Alexa, Siri, and AlphaGo, is that the
AI technologies themselves—namely, machine learning and its subset, deep
learning—have plenty of limitations that will still require considerable effort
to overcome. This is an article about those limitations, aimed at helping
executives better understand what may be holding back their AI efforts. Along
the way, we will also highlight promising advances that are poised to address
some of the limitations and create a new wave of opportunities.
Our perspectives rest on a combination of
work at the front lines—researching, analyzing, and assessing hundreds of
real-world use cases—and our collaborations with some of the thought leaders,
pioneering scientists, and engineers working at the frontiers of AI. We’ve
sought to distill this experience to help executives who, we find, often
are exposed only to their own initiatives and are not well calibrated as to where
the frontier is or what the pace setters are already doing with AI.
Simply put, AI’s challenges and
limitations are creating a “moving target” problem for leaders: It is hard to
reach a leading edge that’s always advancing. It is also disappointing when AI
efforts run into real-world barriers, which can lessen the appetite for further
investment or encourage a wait-and-see attitude, while others charge ahead. As recent McKinsey Global Institute research indicates, there’s a yawning
divide between leaders and laggards in the
application of AI, both across and within sectors.
Executives hoping to narrow the gap must
be able to address AI in an informed way. In other words, they need to
understand not just where AI can boost innovation, insight, and decision
making; lead to revenue growth; and capture efficiencies—but also where
AI can’t yet provide value. What’s more, they must appreciate
the relationship and distinctions between technical constraints and
organizational ones, such as cultural barriers; a dearth of personnel capable
of building business-ready, AI-powered applications; and the “last mile”
challenge of embedding AI in products and processes. If you want to become a
leader who understands some of the critical technical challenges slowing AI’s
advance and is prepared to exploit promising developments that could overcome
those limitations and potentially bend the trajectory of AI—read on.
Challenges, limitations, and opportunities
A useful starting point is to understand
recent advances in deep-learning techniques. Arguably the most exciting
developments in AI, these advances are delivering jumps in the accuracy of
classification and prediction, and are doing so without the usual “feature
engineering” associated with traditional supervised learning. Deep learning
uses large-scale neural networks that can contain millions of simulated
“neurons” structured in layers. The most common networks are called
convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
These neural networks learn through the use of training data and
backpropagation algorithms.
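For readers who want a concrete feel for those mechanics, the sketch below shows the training loop in miniature: a tiny two-layer network of simulated “neurons,” written in plain Python with NumPy, learns a toy classification task (XOR) by backpropagation. It is purely illustrative; production systems use specialized frameworks and far larger CNN or RNN architectures.

```python
import numpy as np

# Illustrative only: a tiny two-layer network of simulated "neurons"
# learns the XOR mapping from a small labeled data set, using the
# forward-pass / backpropagation loop described above.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden-layer weights
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))   # hidden-layer -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error through the layers
    # to get a gradient for every weight, then nudge the weights.
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(np.round(p, 2))  # predictions should approach the labels 0, 1, 1, 0
```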
While much progress has been made, more
still needs to be done. A critical step is to fit the AI approach to the
problem and the availability of data. Since these systems are “trained” rather
than programmed, the various processes often require huge amounts of labeled
data to perform complex tasks accurately. Obtaining large data sets can be
difficult. In some domains, they may simply not be available; even when
they are, the labeling effort can require enormous human resources.
Further, it can be difficult to discern
how a mathematical model trained by deep learning arrives at a particular
prediction, recommendation, or decision. A black box, even one that does what
it’s supposed to, may have limited utility, especially where the predictions or
decisions impact society and hold ramifications that can affect individual
well-being. In such cases, users sometimes need to know the “whys” behind the
workings, such as why an algorithm reached its recommendations—from making
factual findings with legal repercussions to arriving at business decisions,
such as lending, that have regulatory repercussions—and why certain factors (and
not others) were so critical in a given instance.
Let’s explore five interconnected ways in
which these limitations, and the solutions emerging to address them, are
starting to play out.
Limitation 1: Data labeling
Most current AI models are trained through
“supervised learning.” This means that humans must label and categorize the
underlying data, which can be a sizable and error-prone chore. For example,
companies developing self-driving-car technologies are hiring hundreds of
people to manually annotate hours of video feeds from prototype vehicles to
help train these systems. At the same time, promising new techniques are
emerging, such as in-stream supervision (demonstrated by Eric Horvitz and his
colleagues at Microsoft Research), in which data can be labeled in the course
of natural usage. Unsupervised
or semisupervised approaches reduce the need for large, labeled data sets. Two
promising techniques are reinforcement learning and generative adversarial
networks.
Reinforcement learning.
This unsupervised technique allows
algorithms to learn tasks simply by trial and error. The methodology follows
a “carrot and stick” approach: for every attempt an algorithm makes at
performing a task, it receives a “reward” (such as a higher score) if the
behavior is successful or a “punishment” if it isn’t. With repetition,
performance improves, in many cases surpassing human capabilities—so long as
the learning environment is representative of the real world.
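To make the “carrot and stick” idea concrete, here is a minimal, purely illustrative sketch: tabular Q-learning on a five-cell corridor, where the agent is rewarded for reaching the goal and lightly penalized for every other step. Real systems such as AlphaGo combine this idea with deep neural networks, but the trial-and-error loop has the same shape.

```python
import numpy as np

# Illustrative only: tabular Q-learning on a five-cell corridor.
# Reaching the rightmost cell earns a reward (+1); every other step
# costs a small penalty (-0.01). Repeated trial and error teaches the
# agent to walk right.
rng = np.random.default_rng(1)

n_states, n_actions = 5, 2           # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))  # learned value of each action in each cell
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Explore occasionally; otherwise take the best-known action.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        reward = 1.0 if s_next == n_states - 1 else -0.01   # carrot vs. stick
        # Move the value estimate toward reward plus discounted future value.
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1)[:-1])  # learned policy in the non-goal cells: 1 (move right)
```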
Reinforcement learning has famously been
used in training computers to play games—most recently, in conjunction with
deep-learning techniques. In May 2017, for example, it helped the AI system
AlphaGo to defeat world champion Ke Jie in the game of Go. In another example,
Microsoft has fielded decision services that draw on reinforcement learning and
adapt to user preferences. The potential application of reinforcement learning
cuts across many business arenas. Possibilities include an AI-driven trading
portfolio that acquires or loses points for gains or losses in value,
respectively; a product-recommendation engine that receives points for every
recommendation-driven sale; and truck-routing software that receives a reward
for on-time deliveries or reduced fuel consumption.
Reinforcement learning can also help AI
transcend the natural and social limitations of human labeling by developing
previously unimagined solutions and strategies that even seasoned practitioners
might never have considered. Recently, for example, the system AlphaGo Zero, using
a novel form of reinforcement learning, defeated its predecessor AlphaGo after
learning to play Go from scratch. That meant starting with completely random
play against itself rather than training on Go games played by and with humans.
Generative adversarial networks (GANs).
In this semisupervised learning method,
two networks compete against each other to improve and refine their
understanding of a concept. To recognize what birds look like, for example, one
network attempts to distinguish between genuine and fake images of birds, and
its opposing network attempts to trick it by producing what look very much like
images of birds, but aren’t. As the two networks square off, each model’s
representation of a bird becomes more accurate.
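The sketch below shows that adversarial loop in its simplest possible form: instead of bird images, a one-line “generator” tries to produce numbers that look like draws from a target distribution, while a logistic-regression “discriminator” tries to tell real samples from fakes. The distributions and learning rates are arbitrary choices for illustration, not a recipe from the research described above.

```python
import numpy as np

# Illustrative only: the generator G(z) = a*z + b tries to mimic draws
# from a target distribution N(4, 1.5); the discriminator is a small
# logistic-regression model on the features [x, x^2, 1]. Each side is
# updated in turn -- the competitive loop at the heart of a GAN.
rng = np.random.default_rng(2)

def real_samples(n):                 # the "genuine" data
    return rng.normal(4.0, 1.5, n)

a, b = 1.0, 0.0                      # generator parameters
w = np.zeros(3)                      # discriminator weights
lr_d, lr_g = 0.02, 0.02

def features(x):
    return np.stack([x, x ** 2, np.ones_like(x)], axis=1)

def discriminate(x):                 # probability the sample is "real"
    return 1.0 / (1.0 + np.exp(-features(x) @ w))

for step in range(5000):
    z = rng.normal(size=64)
    x_real, x_fake = real_samples(64), a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    grad_d = ((1 - discriminate(x_real))[:, None] * features(x_real)
              - discriminate(x_fake)[:, None] * features(x_fake)).mean(axis=0)
    w += lr_d * grad_d

    # Generator step: adjust a and b so the fakes fool the discriminator.
    d_fake = discriminate(x_fake)
    pull = (1 - d_fake) * (w[0] + 2 * w[1] * x_fake)
    a += lr_g * (pull * z).mean()
    b += lr_g * pull.mean()

print(f"generator output: mean ~{b:.2f} (target 4.0), spread ~{abs(a):.2f} (target 1.5)")
```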
The ability of GANs to generate
increasingly believable examples of data can significantly reduce the need for
data sets labeled by humans. Training an algorithm to identify different types
of tumors from medical images, for example, would typically require millions of
human-labeled images with the type or stage of a given tumor. By using a GAN
trained to generate increasingly realistic images of different types of tumors,
researchers could train a tumor-detection algorithm that combines a much
smaller human-labeled data set with the GAN’s output.
While the application of GANs in precise
disease diagnoses is still a way off, researchers have begun using GANs in
increasingly sophisticated contexts. These include understanding and producing
artwork in the style of a particular artist and using satellite imagery, along
with an understanding of geographical features, to create up-to-date maps of
rapidly developing areas.
Limitation 2: Obtaining massive training data sets
It has already been shown that simple AI
techniques using linear models can, in some cases, approximate the power of
experts in medicine and other fields. The
current wave of machine learning, however, requires training data sets that are
not only labeled but also sufficiently large and comprehensive. Deep-learning
methods call for thousands of data records for models to become relatively good
at classification tasks and, in some cases, millions for them to perform at the
level of humans.
The complication is that massive data sets
can be difficult to obtain or create for many business use cases (think:
limited clinical-trial data to predict treatment outcomes more accurately). And
each minor variation in an assigned task could require another large data set
to conduct even more training. For example, teaching an autonomous vehicle to
navigate a mining site where the weather continually changes will require a
data set that encompasses the different environmental conditions the vehicle
might encounter.
One-shot learning is a technique that
could reduce the need for large data sets, allowing an AI model to learn about
a subject when it’s given a small number of real-world demonstrations or examples (even one, in some cases). AI’s capabilities
will move closer to those of humans, who can recognize multiple instances of a
category relatively accurately after having been shown just a single sample—for
example, of a pickup truck. In this still-developing methodology, data
scientists would first pre-train a model in a simulated virtual environment
that presents variants of a task or, in the case of image recognition, of what
an object looks like. Then, after being shown just a few real-world variations
that the AI model did not see in virtual training, the model
would draw on its knowledge to reach the right solution.
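A rough sketch of that idea follows, under one big assumption: that an embedding model has already been pretrained in simulation (here it is replaced by a fixed random projection purely for illustration). A new category such as a pickup truck or a corporate logo is then recognized from a single labeled example by comparing embeddings, rather than by retraining on thousands of images.

```python
import numpy as np

# Illustrative only: "embed" stands in for a model pretrained in a
# simulated environment (here just a fixed random projection). A new
# category is then recognized from a single labeled example by
# comparing embeddings, nearest-neighbor style.
rng = np.random.default_rng(3)

PROJECTION = rng.normal(size=(16, 64))   # stand-in for the pretrained model

def embed(x):
    return np.tanh(PROJECTION @ x)

# One labeled example per new category -- e.g., one photo of a pickup
# truck and one image of a corporate logo (the "support set").
raw = {"pickup": rng.normal(size=64), "logo": rng.normal(size=64)}
support = {label: embed(x) for label, x in raw.items()}

def classify(x):
    e = embed(x)
    # Pick the category whose single reference example is closest.
    return min(support, key=lambda label: np.linalg.norm(e - support[label]))

query = raw["pickup"] + 0.1 * rng.normal(size=64)   # a slightly different view
print(classify(query))                              # -> "pickup"
```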
This sort of one-shot learning could
eventually help power a system to scan texts for copyright violations or to
identify a corporate logo in a video after being shown just one labeled
example. Today, such applications are only in their early stages. But their
utility and efficiency may well expand the use of AI quickly across multiple
industries.
Limitation 3: The explainability problem
Explainability is not a new issue for AI
systems. But it has grown along with the
success and adoption of deep learning, which has given rise both to more
diverse and advanced applications and to more opaqueness. Larger and more
complex models make it hard to explain, in human terms, why a certain decision
was reached (and even harder when it was reached in real time). This is one
reason that adoption of some AI tools remains low in application areas where
explainability is useful or indeed required. Furthermore, as the application of
AI expands, regulatory requirements could also drive the need for more
explainable AI models.
Two nascent approaches that hold promise
for increasing model transparency are local-interpretable-model-agnostic
explanations (LIME) and attention techniques. LIME attempts to identify which
parts of the input data a trained model relies on most when making predictions, and uses
them to develop an interpretable proxy model. This technique considers certain
segments of data at a time and observes the resulting changes in prediction to
fine-tune the proxy model and develop a more refined interpretation (for
example, by excluding eyes rather than, say, noses to test which are more
important for facial recognition). Attention techniques visualize those pieces
of input data that a model considers most as it makes a particular decision
(such as focusing on a mouth to determine if an image depicts a human being).
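The toy sketch below walks through a LIME-style loop under simplifying assumptions: a made-up black-box scoring function stands in for, say, a face-recognition model; segments of one input are randomly masked; and a simple linear proxy is fitted to show which segments mattered most for that particular prediction.

```python
import numpy as np

# Illustrative only: black_box is a made-up scoring function standing in
# for an opaque model (in truth it leans mostly on segment 0, the "eyes").
# We randomly mask segments of one input, record each prediction, and fit
# a linear proxy whose weights show which segments mattered most.
rng = np.random.default_rng(4)

segments = ["eyes", "nose", "mouth", "background"]
x = rng.normal(loc=1.0, size=(4, 10))        # one input, split into four segments

def black_box(seg_data):
    score = 0.2 * seg_data[0].sum() + 0.05 * seg_data[2].sum()
    return float(1 / (1 + np.exp(-score)))

# Perturb: keep or blank out each segment at random, record the prediction.
masks = rng.integers(0, 2, size=(500, 4))
preds = np.array([black_box(x * m[:, None]) for m in masks])

# Fit the interpretable proxy: a linear model from masks to predictions.
A = np.column_stack([masks, np.ones(len(masks))])
weights, *_ = np.linalg.lstsq(A, preds, rcond=None)

for name, weight in zip(segments, weights):
    print(f"{name:10s} importance ~ {weight:+.3f}")  # "eyes" should dominate
```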
Another technique that has been used for
some time is the application of generalized additive models (GAMs). By using
single-feature models, GAMs limit interactions between features, thereby making
each one more easily interpretable by users. Employing these techniques, among
others, to demystify AI decisions is expected to go a long way toward
increasing the adoption of AI.
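As a rough illustration of why GAMs are easier to inspect, the sketch below fits one by simple backfitting, using binned averages as the single-feature smoother; each feature ends up with its own standalone curve that a user could plot and examine. This is a didactic toy, not a production GAM implementation.

```python
import numpy as np

# Illustrative only: a two-feature GAM fitted by backfitting, with binned
# averages as the single-feature smoother. Each feature's contribution is
# a standalone curve that can be inspected (or plotted) on its own.
rng = np.random.default_rng(5)

n, n_bins = 2000, 30
X = rng.uniform(-3, 3, size=(n, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

edges = np.linspace(-3, 3, n_bins + 1)
bins = np.clip(np.digitize(X, edges) - 1, 0, n_bins - 1)
shape = np.zeros((2, n_bins))                # one lookup curve per feature

def contribution(j):
    return shape[j, bins[:, j]]

intercept = y.mean()
for sweep in range(20):                      # backfitting: refit one feature at a time
    for j in range(2):
        residual = y - intercept - contribution(1 - j)
        for b in range(n_bins):
            members = bins[:, j] == b
            if members.any():
                shape[j, b] = residual[members].mean()
        shape[j] -= shape[j].mean()          # keep each curve centered

pred = intercept + contribution(0) + contribution(1)
print("RMSE:", round(float(np.sqrt(np.mean((y - pred) ** 2))), 3))
print("feature-0 curve samples:", np.round(shape[0, ::6], 2))  # roughly sinusoidal
```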
CONTINUES IN PART II