“Machines can see or perceive things that humans cannot, and they have an increasingly greater ability to understand and interpret”

13 June, 2018

One of the principal motivations behind research into affective computing is the idea of simulating empathy in a machine. Àgata Lapedriza, an expert in the subject, spoke about the research being conducted in this field at the second What if…? Beer, organized by the UOC’s eLearn Center.

There are very perceptive people who can analyse our facial expressions or body language to the point where they recognize emotions or intentions that most people cannot see. What would happen if machines were able to do that and much more? What will happen the day that machines can correctly perceive and interpret micro-expressions, very subtle changes in skin colour, heart rate, breathing or changes in the electromagnetic field surrounding someone, and use all of this information to decipher what we’re feeling and thinking, and even predict what we’re going to do at any given moment?

All of this, together with the research being conducted at centres worldwide, was discussed at the second What if…? Beer, held on 5 June and organized by the UOC’s eLearn Center. Taking part was Àgata Lapedriza, who holds a doctoral degree in Computing from the Universitat Autònoma de Barcelona (2009), is a professor at the UOC Faculty of Computer Science, Multimedia and Telecommunications, and has been a visiting researcher with the Affective Computing group at the MIT Media Lab since September 2017. Lapedriza introduced the audience to affective computing (also known as emotional AI), the discipline that studies and develops systems and devices that can recognize, interpret, process or simulate emotions or feelings. Although it sits within computing, this area of research is multidisciplinary and draws on fields such as psychology and cognitive science: “one of the principal motivations behind research into affective computing is the idea of simulating empathy in a machine”, Lapedriza states. “In other words, building machines or robots that can interpret our emotional state and adapt their behaviour to be able to give suitable responses to the emotions that they detect.”

Regarding the current research that forms the basis of this scenario, Lapedriza highlighted that one of the most widely known areas is emotion recognition, whether by recognizing and interpreting facial expressions, analysing gestures, detecting changes in skin colour or small movements imperceptible to the naked eye, measuring physiological signals such as breathing or heart rate, or even analysing speech or text. “The theoretical basis behind these technologies is that some of our facial expressions or gestures are a sort of automatic reflection of our emotional state”, Lapedriza states. “There are algorithms that can detect subtle changes to skin colour or very subtle movements that cannot be seen by the naked eye. They can even recognize the emotions in a text”.
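As a rough illustration of that last point, here is a toy sketch of text-based emotion recognition in Python: a bag-of-words classifier trained on a tiny, invented dataset. This is not the technology Lapedriza describes, only the general shape of such a pipeline; real systems rely on far larger corpora and far richer models.

```python
# Toy sketch of emotion recognition in text: vectorize sentences as
# word counts, then fit a simple linear classifier over the labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples, invented purely for illustration.
texts = [
    "I am so happy to see you again",
    "What a wonderful surprise, thank you",
    "I can't stop crying, everything went wrong",
    "This is the worst day of my life",
    "I'm furious, how could they do this",
    "Leave me alone, I'm so angry right now",
]
labels = ["joy", "joy", "sadness", "sadness", "anger", "anger"]

# Bag-of-words features feeding a multiclass logistic regression.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predict the emotion expressed by a new sentence.
print(model.predict(["thank you, I feel wonderful today"]))  # e.g. ['joy']
```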

Other research focuses on sensors, such as bracelets that measure physiological signals, and there are even researchers who are developing devices to recognize the words someone is thinking by placing sensors on the person’s face: “when someone thinks of a word as though they’re about to say it, the muscles that articulate the words are activated, move and emit electric signals that can be measured, even though the person hasn’t spoken aloud”. Sensors and apps can also be used to predict medical episodes such as epileptic seizures, to help people with an autism spectrum disorder with communication or emotional regulation, or to predict emotional states and build systems that give individual advice to improve our wellbeing or detect depression and other mental disorders early. Besides this, there is also research into the use of wireless signals to recognize subtle changes in breathing and heart rate. “These technologies can be used to detect presence, movement, posture and gestures”, Lapedriza states. “We can detect everything from the next room without needing a camera”.
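To give a feel for how breathing and heart rate can be read out of such signals, here is a minimal sketch assuming an evenly sampled, relatively clean time series; the data is synthetic, and the sampling rate, frequency bands and filter order are illustrative choices rather than details from the talk. The idea is simply to band-pass filter the signal around the typical breathing and cardiac frequency ranges and report the strongest frequency in each band.

```python
# Minimal sketch of estimating breathing and heart rate from a
# physiological time series (for example, one captured by a wrist
# sensor or recovered from a reflected wireless signal). The data
# here is synthetic; real measurements need serious denoising.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)     # 30 seconds of samples
rng = np.random.default_rng(0)

# Synthetic signal: 0.3 Hz breathing (18 breaths/min) plus a weaker
# 1.2 Hz cardiac component (72 beats/min) plus measurement noise.
signal = (np.sin(2 * np.pi * 0.3 * t)
          + 0.2 * np.sin(2 * np.pi * 1.2 * t)
          + 0.1 * rng.standard_normal(t.size))

def dominant_freq(x, fs, lo, hi):
    """Band-pass x to [lo, hi] Hz and return its strongest frequency."""
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
    return freqs[np.argmax(spectrum)]

# Typical physiological bands: roughly 0.1-0.5 Hz for breathing,
# 0.7-3 Hz for the heartbeat.
print("breaths/min:", 60 * dominant_freq(signal, fs, 0.1, 0.5))  # ~18
print("beats/min:  ", 60 * dominant_freq(signal, fs, 0.7, 3.0))  # ~72
```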

Out of all this emerged a question among the participants in the debate: what risks might the evolution of artificial intelligence entail, and what ethical problems may arise from it? In response, Lapedriza commented: “it’s true to say that lately there’s been growing interest in and concern about this matter. At MIT (and probably other universities), new courses have emerged relating to ethics and technology, and, generally speaking, it’s an issue people talk about a lot. I think it’s important to be aware of the risks, that we need to talk about them and that regulations need to be put in place”. However, Lapedriza adds that “it’s also important to highlight the benefits that these technologies might have. Besides the medical applications we’ve discussed, there are also projects relating to learning. Many experiments have been conducted on teaching methodologies with personal robots, both with children and with the elderly, and in some cases it’s been shown experimentally that robots with empathy can make a positive contribution to the learning process, as companions and as teachers. Besides this, there are also research projects to develop personal robots that can help in mediation processes or rescue operations”.

The event, held in collaboration with Estrella Damm, took place in a relaxed atmosphere, with participants showing great interest in the subject. The next What if…? Beer will be held in October 2018 and will feature another subject in which research makes our imagination take flight.

Organized by the eLearn Center in collaboration with Estrella Damm.

BIO

Àgata Lapedriza is a professor in the Faculty of Computer Science, Multimedia and Telecommunications at the Universitat Oberta de Catalunya (UOC). She holds a degree in Mathematics from the University of Barcelona (2003) and a doctoral degree in Computing from the Universitat Autònoma de Barcelona (2009). She wrote her doctoral thesis at the Computer Vision Center. Between 2012 and 2015, she worked as a visiting researcher in the Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology (MIT). Since September 2017, she has been a visiting researcher with the Affective Computing group at the MIT Media Lab. Her research interests relate to image understanding, scene recognition and characterization, and affective computing, and her work is partly motivated by the use of technology for wellbeing and education. She is also interested in large-scale image and video analysis. At the UOC, she is director of the Master’s Degree in Bioinformatics and Biostatistics and a member of the Scene Understanding and Artificial Intelligence research group.
