Tuesday, August 21, 2018

Three ways AI is getting more emotional

A combination of deep learning and facial and voice pattern analysis can already decode human emotions

In January, Annette Zimmermann, vice-president of research at Gartner, proclaimed, “By 2022, your personal device will know more about your emotional state than your own family.” Two months later, researchers at Ohio State University claimed that their algorithm was now better at detecting emotions than people are.
Artificial intelligence systems and devices will soon recognise, interpret, process and simulate human emotions. With companies like Affectiva, Beyond Verbal and Sensay providing plug-and-play sentiment-analysis software, the affective computing market is estimated to grow to $41 billion by 2022, as firms like Amazon, Google, Facebook and Apple race to decode their users’ emotions.
But reading people’s emotions is a delicate business. Emotions are highly personal, and users will have concerns about privacy invasion and manipulation. Before companies dive in, leaders should consider questions like:
1. Does your value proposition naturally lend itself to the involvement of emotions? And can you credibly justify the inclusion of emotional cues for the betterment of the user experience?
2. What are your customers’ emotional intentions when interacting with your brand? What is the nature of the interaction?
3. Has the user given you explicit permission to analyse their emotions? Does the user stay in control of their data?
4. Is your system smart enough to accurately read and react to a user’s emotions?
5. What is the danger in any given situation if the system should fail: danger for the user, and/or danger for the brand?

Keeping those concerns in mind, business leaders should be aware of current applications for emotional AI. These fall roughly into three categories:

Systems that use emotional analysis to adjust their response
In this application, the AI service acknowledges emotions and factors them into its decision-making process, but its output is completely emotion-free. Conversational IVR (interactive voice response) systems and chatbots promise to route customers to the right service flow more accurately by factoring in emotions. For example, when the system detects that a user is angry, it routes them to a different escalation flow or to a human agent.
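
As a rough illustration, the routing layer in such a system might look like the Python sketch below. Everything in it is hypothetical: the emotion scores would come from a trained model (such as the vendors’ tools mentioned above), and the flow names and thresholds would be tuned in practice.

    from dataclasses import dataclass

    # Hypothetical per-call emotion scores, e.g. produced upstream by a
    # speech- or text-sentiment model. Values range from 0.0 to 1.0.
    @dataclass
    class EmotionScores:
        anger: float
        frustration: float

    ANGER_THRESHOLD = 0.7  # illustrative cut-off; tuned in a real deployment

    def route_call(scores: EmotionScores, default_flow: str) -> str:
        """Pick a service flow based on detected emotion.

        The output itself stays emotion-free: the caller simply lands in
        a different queue, as described above.
        """
        if scores.anger >= ANGER_THRESHOLD:
            return "human_agent"      # escalate straight to a person
        if scores.frustration >= 0.5:
            return "priority_queue"   # shorter, simpler self-service flow
        return default_flow           # normal automated flow

    # Example: an audibly angry caller skips the bot entirely.
    print(route_call(EmotionScores(anger=0.85, frustration=0.4), "billing_bot"))
    # -> human_agent

Note that the emotion signal only selects the flow; the response the caller ultimately hears contains no emotional language, which is what distinguishes this first category.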

Systems that provide a targeted emotional analysis for learning purposes
Targeted emotional analysis systems acknowledge and interpret emotions, and communicate the insights to the user for learning purposes. On a personal level, these targeted applications will act like a Fitbit for the heart and mind, aiding self-improvement. Targeted emotional learning systems are also being tested in group settings, such as analysing workers’ emotions on behalf of managers. But scaled to groups, such monitoring can feel Orwellian, raising concerns about privacy and creativity.

Systems that mimic and ultimately replace human-to-human interactions
There are now products and services that use conversational UIs and the concept of ‘computers as social actors’ to try to alleviate mental-health concerns. These applications aim to coach users through crises using techniques from behavioural therapy. ‘Ellie’ helps treat soldiers with post-traumatic stress disorder. ‘Karim’ helps Syrian refugees overcome trauma. Digital assistants are even tasked with helping alleviate loneliness among the elderly.
Futurist Richard van Hooijdonk says, “If a marketer can get you to cry, he can get you to buy.” The biggest hurdle to finding the right balance might not be in achieving more effective forms of emotional AI, but in finding emotionally intelligent humans to build them.

— New York Times

