AI SPECIAL
Applying artificial intelligence for social good, Part III
4. Risks to be managed
AI tools and techniques can be misused by
authorities and others who have access to them, so principles for their use
must be established. AI solutions can also unintentionally harm the very people
they are supposed to help.
AI offers great opportunity, but it also entails risk.
An analysis of our use-case library found
that four main categories of risk are particularly relevant when AI solutions
are leveraged for social good: bias and fairness, privacy, safe use and
security, and “explainability” (the ability to identify the feature or data set
that leads to a particular decision or prediction).
Bias in AI may perpetuate and aggravate existing prejudices and social inequalities
AI bias can disproportionately affect already-vulnerable populations and amplify existing cultural prejudices. Bias of this kind may
come about through problematic historical data, such as unrepresentative or inaccurately labeled samples. For example, AI-based risk scoring for
criminal-justice purposes may be trained on historical criminal data that
include biases (among other things, African Americans may be unfairly labeled
as high risk). As a result, AI risk scores would perpetuate this bias. Some AI
applications already show large disparities in accuracy depending on the data
used to train algorithms. One examination of facial-analysis software, for example, found an error rate of 0.8 percent for light-skinned men but 34.7 percent for dark-skinned women.
One key source of bias can be poor data
quality—for example, when data on past employment records are used to identify
future candidates. An AI-powered recruiting tool used by one tech company was
abandoned recently after several years of trials. It appeared to show
systematic bias against women, which resulted from patterns in training data
from years of hiring history. To counteract such biases, skilled and diverse
data-science teams should take into account potential issues in the training
data or sample intelligently from them.
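As a rough illustration of what sampling intelligently can mean in practice, the sketch below reweights a skewed data set so that each group carries equal total influence during training. The column names and records are hypothetical, and inverse-frequency reweighting is only one of several possible tactics.

```python
# A sketch of one "intelligent sampling" tactic: inverse-frequency reweighting,
# so that an underrepresented group carries as much total weight in training as
# the majority group. Column names and records are hypothetical.
import pandas as pd

def inverse_frequency_weights(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Weight each row by 1 / (share of its group) so that groups balance out."""
    shares = df[group_col].value_counts(normalize=True)
    return df[group_col].map(lambda g: 1.0 / shares[g])

# Toy historical hiring data, skewed 80/20 toward one group.
df = pd.DataFrame({
    "gender": ["M"] * 8 + ["F"] * 2,
    "hired":  [1, 1, 0, 1, 1, 0, 1, 1, 0, 1],
})
df["weight"] = inverse_frequency_weights(df, "gender")

# Most scikit-learn estimators accept these via the sample_weight argument,
# e.g. LogisticRegression().fit(X, y, sample_weight=df["weight"]).
print(df.groupby("gender")["weight"].sum())  # both groups now total 10.0
```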
Breaching the privacy of personal information could cause harm
Privacy concerns about sensitive personal data are already rife for AI. The ability to assuage these concerns
could help speed public acceptance of its widespread use by profit-making and
nonprofit organizations alike. The risk is that financial, tax, health, and
similar records could become accessible through porous AI systems to people without
a legitimate need to access them. That would cause embarrassment and,
potentially, harm.
Safe use and security are essential for societal good uses of AI
Ensuring that AI applications are used
safely and responsibly is an essential prerequisite for their widespread
deployment for societal aims. Seeking to further social good with dangerous
technologies would contradict the core mission and could also spark a backlash,
given the potentially large number of people involved. For technologies that
could affect life and well-being, it will be important to have safety
mechanisms in place, including compliance with existing laws and regulations.
For example, if AI misdiagnoses patients in hospitals that do not have a safety
mechanism in place—particularly if these systems are directly connected to
treatment processes—the outcomes could be catastrophic. The framework for
accountability and liability for harm done by AI is still evolving.
Decisions made by complex AI models will need to become more
readily explainable
Explaining in human terms the results from
large, complex AI models remains one of the key challenges to acceptance by users and regulatory authorities. Opening
the AI “black box” to show how decisions are made, as well as which factors,
features, and data sets are decisive and which are not, will be important for
the social use of AI. That will be especially true for stakeholders such as
NGOs, which will require a basic level of transparency and will probably want
to give clear explanations of the decisions they make. Explainability is
especially important for use cases relating to decision making about individuals
and, in particular, for cases related to justice and criminal identification,
since an accused person must be able to appeal a decision in a meaningful way.
Mitigating risks
Effective mitigation strategies typically rely on “human in the loop” interventions: people remain in the decision or analysis loop to validate models and double-check the results of AI solutions.
Such interventions may call for cross-functional teams, including domain
experts, engineers, product managers, user-experience researchers, legal
professionals, and others, to flag and assess possible unintended consequences.
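A minimal sketch of one such intervention, assuming the model reports a confidence score alongside each prediction: low-confidence cases are routed to a human review queue instead of being acted on automatically. The threshold value and queue structure here are illustrative, not prescriptive.

```python
# A minimal sketch of a human-in-the-loop gate, assuming the model reports a
# confidence score with each prediction. The 0.9 threshold and the review queue
# are illustrative; real systems set both with domain experts.
from dataclasses import dataclass, field

@dataclass
class HumanReviewGate:
    threshold: float = 0.9
    review_queue: list = field(default_factory=list)

    def decide(self, case_id: str, label: str, confidence: float) -> str:
        # Act on the model's answer only when it is confident; otherwise escalate.
        if confidence >= self.threshold:
            return label
        self.review_queue.append((case_id, label, confidence))
        return "ESCALATED_TO_HUMAN"

gate = HumanReviewGate()
print(gate.decide("case-001", "eligible", 0.97))  # -> eligible
print(gate.decide("case-002", "eligible", 0.55))  # -> ESCALATED_TO_HUMAN
print(gate.review_queue)  # cases awaiting a human decision
```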
Human analysis of the data used to train
models may be able to identify issues such as bias and lack of representation.
Fairness and security “red teams” could carry out solution tests, and in some
cases third parties could be brought in to test solutions by using an
adversarial approach. To mitigate biases rooted in the training data, university researchers have demonstrated methods such as sampling the data with an understanding of their inherent bias and creating synthetic data sets based on known statistics.
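The sketch below illustrates the second of those methods: drawing a synthetic data set from known summary statistics so that each group is represented at a chosen rate. The group shares and score distributions are invented for illustration; real work would use audited population figures.

```python
# A sketch of generating a synthetic data set from known summary statistics so
# that each group appears at a chosen rate. The shares and score distributions
# below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(seed=0)

def synthesize(n, group_shares, group_means, sd=1.0):
    """Draw group labels from known shares, then scores from per-group means."""
    groups = rng.choice(list(group_shares), size=n, p=list(group_shares.values()))
    scores = np.array([rng.normal(group_means[g], sd) for g in groups])
    return groups, scores

groups, scores = synthesize(
    n=1000,
    group_shares={"A": 0.5, "B": 0.5},   # enforce balanced representation
    group_means={"A": 0.0, "B": 0.0},    # equalize the statistic across groups
)
print(np.unique(groups, return_counts=True))  # roughly 500 of each group
```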
Guardrails to prevent users from blindly
trusting AI can be put in place. In medicine, for example, misdiagnoses can be
devastating to patients. The problems include false positives, which cause distress and can lead to wrong or unnecessary treatments or surgeries, and, even worse, false negatives, which can leave patients without a correct diagnosis until a disease has reached the terminal stage.
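One concrete guardrail, sketched below under assumed costs, is to choose the model's decision threshold explicitly rather than defaulting to 0.5, weighting a missed disease far more heavily than a false alarm. All scores, labels, and costs here are illustrative, not clinical guidance.

```python
# A sketch of one guardrail: picking the decision threshold by expected cost,
# with a missed disease (false negative) weighted ten times as heavily as a
# false alarm. Scores, labels, and costs are illustrative assumptions.
import numpy as np

def expected_cost(threshold, scores, labels, cost_fn=10.0, cost_fp=1.0):
    preds = scores >= threshold
    false_neg = np.sum(~preds & (labels == 1))  # missed positives
    false_pos = np.sum(preds & (labels == 0))   # false alarms
    return cost_fn * false_neg + cost_fp * false_pos

scores = np.array([0.10, 0.35, 0.40, 0.80, 0.90, 0.60, 0.20, 0.70])
labels = np.array([0,    1,    0,    1,    1,    0,    0,    1])
best = min(np.linspace(0.0, 1.0, 101),
           key=lambda t: expected_cost(t, scores, labels))
print(f"chosen threshold: {best:.2f}")  # lands low, tolerating false alarms
```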
Technology itself may supply partial solutions to some of these challenges, including explainability. For example, nascent approaches to model transparency include local interpretable model-agnostic explanations (LIME), which attempt to identify the parts of the input data that a trained model relies on most to make a given prediction.
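A minimal sketch of LIME in use follows, with the open-source lime Python package and a scikit-learn classifier standing in for the “black box”; any model exposing a predict_proba function could take its place.

```python
# A minimal sketch of LIME on tabular data, using the open-source `lime` package
# (pip install lime) with a scikit-learn classifier standing in for the black box.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
)

# Explain a single prediction: which features pushed the model toward its answer?
pred = int(model.predict(data.data[:1])[0])
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, labels=(pred,), num_features=4
)
for feature, weight in explanation.as_list(label=pred):
    print(f"{feature:40s} {weight:+.3f}")
```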
5. Scaling up the use of AI for social good
As with any technology deployment for
social good, the scaling up and successful application of AI will depend on the
willingness of a large group of stakeholders—including collectors and
generators of data, as well as governments and NGOs—to engage. These are still
the early days of AI’s deployment for social good, and considerable progress
will be needed before the vast potential becomes a reality. Public- and
private-sector players all have a role to play.
Improving data accessibility for social-impact cases
A wide range of stakeholders owns,
controls, collects, or generates the data that could be deployed for AI
solutions. Governments are among the most significant collectors of
information, which can include tax, health, and education data. Massive volumes
of data are also collected by private companies—including satellite operators,
telecommunications firms, utilities, and technology companies that run digital
platforms, as well as social-media sites and search operations. These data sets
may contain highly confidential personal information that cannot be shared without
being anonymized. But private operators may also commercialize their data sets,
which may therefore be unavailable for pro-bono social-good cases.
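A minimal sketch of one anonymization step appears below: replacing direct identifiers with salted-hash pseudonyms before a data set is shared. The field names are hypothetical, and hashing alone is not sufficient, since quasi-identifiers such as ZIP codes and birth dates can still re-identify individuals and need separate treatment.

```python
# A minimal sketch of one anonymization step: replacing direct identifiers with
# salted-hash pseudonyms before sharing. Field names are hypothetical, and this
# alone is insufficient; quasi-identifiers (ZIP code, birth date) can still
# re-identify people and need generalization or suppression as well.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # keep secret; never ship it with the data

def pseudonymize(value: str) -> str:
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

record = {"name": "Jane Doe", "tax_id": "123-45-6789", "diagnosis": "influenza"}
shared = {
    "subject": pseudonymize(record["name"]),  # stable token replaces the name
    "diagnosis": record["diagnosis"],         # analytic fields pass through
}
print(shared)
```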
Overcoming this accessibility challenge
will probably require a global call to action to record data and make them more readily available for well-defined societal initiatives.
Data collectors and generators will need
to be encouraged—and possibly mandated—to open access to subsets of their data
when that could be in the clear public interest. This is already starting to
happen in some areas. For example, many satellite data companies participate in
the International Charter on Space and Major Disasters, which
commits them to provide open access to satellite data during emergencies, such as the September 2018 tsunami in Indonesia and Hurricane Michael, which struck the southeastern United States in October 2018.
Close collaboration between NGOs and data
collectors and generators could also help facilitate this push to make data
more accessible. Funding will be required from governments and foundations for
initiatives to record and store data that could be used for social ends.
Even if the data are accessible, using
them presents challenges. Continued investment will be needed to support
high-quality data labeling. And multiple stakeholders will have to commit
themselves to store data so that they can be accessed in a coordinated way and
to use the same data-recording standards where possible to ensure seamless
interoperability.
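As a sketch of what a shared data-recording standard might look like in practice, the snippet below validates a hypothetical field record against a common JSON Schema before publication. The fields and units are invented; a real standard would be negotiated by the stakeholders and forums described below.

```python
# A sketch of a shared data-recording standard expressed as a JSON Schema that
# every contributor validates against before publishing. Fields and units are
# invented for illustration.
from jsonschema import validate  # pip install jsonschema

RECORD_SCHEMA = {
    "type": "object",
    "required": ["station_id", "timestamp_utc", "nitrate_mg_per_l"],
    "properties": {
        "station_id": {"type": "string"},
        "timestamp_utc": {"type": "string"},
        "nitrate_mg_per_l": {"type": "number", "minimum": 0},
    },
}

record = {
    "station_id": "KE-0042",
    "timestamp_utc": "2018-11-01T06:00:00Z",
    "nitrate_mg_per_l": 2.7,
}
validate(record, RECORD_SCHEMA)  # raises ValidationError if nonconforming
print("record conforms to the shared standard")
```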
Issues of data quality and of potential
bias and fairness will also have to be addressed if the data are to be deployed
usefully. Transparency will be key to addressing bias and fairness. A deep
understanding of the data, their provenance, and their characteristics must be
captured, so that others using the data set understand the potential flaws.
All this is likely to require
collaboration among companies, governments, and NGOs to set up regular data
forums, in each industry, to work on the availability and accessibility of data
and on connectivity issues. Ideally, these stakeholders would set global
industry standards and collaborate closely on use cases to ensure that
implementation becomes feasible.
Overcoming AI talent shortages is essential for implementing
AI-based solutions for social impact
The long-term solution to the talent
challenges we have identified will be to recruit more students to major in
computer science and specialize in AI. That could be spurred by significant
increases in funding—both grants and scholarships—for tertiary education and
for PhDs in AI-related fields. Given the high salaries AI expertise commands
today, the market may react with a surge in demand for such an education,
although the advanced math skills needed could discourage many people.
Sustaining or even increasing current
educational opportunities would be helpful. These opportunities include “AI
residencies”—one-year training programs at corporate research labs—and
shorter-term AI “boot camps” and academies for midcareer professionals. An
advanced degree typically is not required for these programs, which can train
participants in the practice of AI research without requiring them to spend
years in a PhD program.
Given the shortage of experienced AI
professionals in the social sector, companies with AI talent could play a major
role in focusing more effort on AI solutions that have a social impact. For
example, they could encourage employees to volunteer and support or coach
noncommercial organizations that want to adopt, deploy, and sustain high-impact
AI solutions. Companies and universities with AI talent could also allocate
some of their research capacity to new social-benefit AI capabilities or
solutions that cannot otherwise attract people with the requisite skills.
Overcoming the shortage of talent that can
manage AI implementations will probably require governments and educational
providers to work with companies and social-sector organizations to develop
more free or low-cost online training courses. Foundations could provide
funding for such initiatives.
Task forces of tech and business
translators from governments, corporations, and social organizations, as well
as freelancers, could be established to help teach NGOs about AI through
relatable case studies. Beyond coaching, these task forces could help NGOs
scope potential projects, support deployment, and plan sustainable road maps.
From the modest library of use cases that
we have begun to compile, we can already see tremendous potential for using AI
to address the world’s most important challenges. While that potential is
impressive, turning it into reality on the scale it deserves will require
focus, collaboration, goodwill, funding, and a determination among many
stakeholders to work for the benefit of society. We are only just setting out
on this journey. Reaching the destination will be a step-by-step process of
confronting barriers and obstacles. We can see the moon, but getting there will
require more work and a solid conviction that the goal is worth all the
effort—for the sake of everyone.
By Michael Chui, Martin Harrysson, James Manyika, Roger Roberts, Rita Chung, Pieter Nel, and Ashley van Heteren
https://www.mckinsey.com/featured-insights/artificial-intelligence/applying-artificial-intelligence-for-social-good