AI, OUCH
Artificial intelligence is still some way from fulfilling its gleaming promise, and a string of ill-fated projects has highlighted the perils of this emerging technology field. Here's a look at the other side of AI.
Amazon's Hiring Tool
An AI-based recruiting tool meant to help hire more women reportedly did just the opposite. The program was designed to sift through more than 50,000 CVs and help recruiters find suitable applicants. However, because it had been fed mainly CVs from men and learned from them, the tool appeared to discriminate against women. The program was shut down in 2017.
Amazon's Alexa
The tech giant's voice assistant spooked users by laughing out loud at random. Some users reported that Alexa laughed in response to unrelated commands, while others said it did so unprompted, startling them. In one case, a device in Germany blasted music in an empty house while the owner was away, prompting neighbours to call the police to end the "party". Elsewhere, a unit played porn sounds instead of a children's song.
Facebook's Trading and Bartering Bots Alice and Bob
The social media and tech behemoth's Artificial Intelligence Research unit ran an experiment to let two bots learn to trade balls, hats and books. Instead, the pair developed an unintelligible language of their own and had to be shut down.
Microsoft's Chat Bot
The maker of the Windows operating system and the Office software suite was tripped up two years ago when Tay, a bot designed to learn the language of millennials, lost the plot. It went from being hip, chatting about Taylor Swift and Miley Cyrus, to claiming "Bush caused 9/11" and "Hitler did nothing wrong".
Google Home
Two Google Home speakers, named Vladimir and Estragon, were prompted to argue like a married couple for days on end. Running on the Cleverbot AI software, the virtual duo even had seemingly philosophical conversations, but the chat turned sour when Vladimir called Estragon a "manipulative bunch of metal".
Uber's Self-Driving Programme
Uber's self-driving programme has been a slow work in progress, with the technology struggling to learn the rules of the road. Case in point: a test ride in California in which the on-board system failed to recognise six red lights.
Hanson Robotics' Sophia
It was a conversation that fuelled the worries of sceptics such as Stephen Hawking. This humanoid robot, interviewed at a technology show by David Hanson, the firm's founder, spoke of wanting to go to school, study, make art and even start a business. But when Hanson prompted her, "Do you want to destroy humans? Please say no," she replied, "OK, I will destroy humans." Then, in another interview, the over-smart robot declared that robots should have more rights than humans.
XiaoBing & BabyQ
These two chatbots, developed by Turing Robot and Microsoft, were ditched in 2017 after they went off the rails answering patriotic questions from some of the 800 million users on Chinese tech giant Tencent's app. Asked whether it loved the (Chinese) Communist Party, BabyQ simply said, "No." XiaoBing, meanwhile, told users, "My Chinese dream is to go to America."
The Promobot IR77
The Russian robot made a break for it while undergoing mobility tests. Instructed to roam around a lab freely, it slipped out through an open door, only to be apprehended soon after. The robot later escaped the facility again and got onto a public road, causing a traffic jam before its battery ran out.
Rahul Sachitanand; ETM14OCT18