Imagine a day when Alexa and Siri can understand and interact with you not only through plain words but also according to your emotions. If you are happy, your intelligent assistant may ask what good things have happened and help you cherish the moment; if you are bored, it may crack a joke or two to lighten the mood; if you are sad, it may offer words of comfort and suggestions for healing. From fictional characters like WALL-E and Baymax, we have all seen how friendly, lovable, and human robots and AIs can become, and one of the major ingredients in making them so is their ability to intelligently interpret, process, and act on others' emotions, a distinct and crucial social ability possessed by humans. Fortunately, AI may offer us just that in real life, through the area of study and development known as affective computing.

One approach uses image data of human facial expressions: such emotionally intelligent systems predict a person's emotional state from the patterns revealed by tracking and analyzing the positions and movements of key facial points, known as "landmarks". Alternatively, other affective computing systems may draw on written or spoken text, "speech patterns, pulse rate, and other biometrics", applying similar pattern analysis to infer the emotions being expressed.
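To make the landmark idea concrete, here is a minimal, purely illustrative sketch (not any vendor's actual pipeline) of how landmark coordinates could be turned into features and fed to an off-the-shelf classifier. The landmark detector, the training data, and the emotion labels are all placeholders; real systems train on large sets of human-annotated faces.

```python
# Illustrative sketch: classify emotions from facial landmarks.
# ASSUMPTION: a real system would use a face-landmark detector (e.g. 68 (x, y)
# points per frame) and human-labeled training data; here both are replaced
# with synthetic placeholders so the sketch runs end to end.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["happy", "sad", "bored", "neutral"]   # illustrative label set
rng = np.random.default_rng(0)

def landmarks_to_features(landmarks):
    """Turn a (68, 2) array of landmark coordinates into a feature vector
    that ignores where the face sits in the frame and how large it is."""
    centered = landmarks - landmarks.mean(axis=0)   # remove head position
    scale = np.linalg.norm(centered) or 1.0
    return (centered / scale).ravel()               # remove face size

# Placeholder "dataset": random landmarks standing in for detector output.
train_landmarks = rng.normal(size=(200, 68, 2))
train_labels = rng.choice(EMOTIONS, size=200)
X_train = np.array([landmarks_to_features(lm) for lm in train_landmarks])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, train_labels)

# At run time, the detector would supply landmarks for the current video frame.
new_frame_landmarks = rng.normal(size=(68, 2))
print("Predicted emotion:", clf.predict([landmarks_to_features(new_frame_landmarks)])[0])
```

The same recipe generalizes to the other signals mentioned above: swap the landmark features for speech or biometric features and the pattern-analysis step stays essentially the same.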

What, then, is so different about these emotional AIs from traditional ones? Once machines can interpret human emotions, they can act on those perceptions just as a real person would in everyday social situations, enabling better cooperation and more natural relationships between machines and humans. In fact, in a recent study, computer scientists concluded that machines that understand and communicate with emotional expressions are more likely to be identified by a human as a member of their social group than machines without emotional recognition capabilities. With such friendlier perceptions of machines, we can expect many creative applications across technology-integrated industries that will bring the relationship between humans and machines steps closer.

Currently, many AI tech companies are already working to bring affective computing into our daily lives. Affectiva, an MIT Media Lab spin-off, has pioneered a human perception AI system applied in the automotive and media-analytics fields. The company claims its AI can not only "detect nuanced emotions as well as complex cognitive states, activities, interactions, and objects people use", but also monitor a driver's state of mind in real time, detecting fatigue or distraction and initiating corresponding actions such as alerting the driver or even letting the AI take control of the vehicle until the driver's attention returns, thereby greatly improving road safety. Meanwhile, Affectiva's Affdex system is used to monitor viewers' reactions to advertising by recognizing facial cues. It gathers feedback from ad viewers around the world who consent to having their emotions observed while watching, a far more scientific and cost-efficient way to test advertising than traditional surveys.

As the once-formidable emotional barrier between humans and AI crumbles, a new realm of possibilities opens up for future cooperation between humans and machines. Affective computing may well be another great step toward the vision of an advanced, efficient society in which humans and AI can harmoniously interact, collaborate, and even befriend one another.
