Affective Computing for Future Agents
To the extent that future agents will interact with people and with one another via text, speech, and other modes that suggest social interaction, they may benefit from certain skills of social-emotional intelligence, such as the ability to perceive whether they have annoyed a person. Already, many animated characters can express emotion (give the appearance of having emotions via facial expressions, gestures, and so forth), but few can recognize any aspect of the emotional response communicated by a user. An agent may sense that you are clicking on a button, and how many times you have clicked on it, but it cannot tell whether you are clicking with interest or boredom, pleasure or displeasure. Agents are therefore handicapped when it comes to responding to affective information, which limits their ability to engage in successful interactions.
This talk briefly highlights research at the MIT Media Lab on giving agents the ability to recognize and respond to emotion. I will describe new hardware and software tools that we have built for recognizing user expressions such as confusion, frustration, and anger, together with an agent we have designed that responds to user frustration in a way that aims to help the user feel less frustrated. This "emotionally savvy" agent significantly improved users' willingness to interact with the system, as measured in a behavioral study involving 70 subjects, two control conditions, and a frustrating game-playing scenario. The talk will also raise and briefly discuss some ethical and philosophical implications of agents that attempt to assuage or manipulate human emotions. More information about our research, including papers describing our work in more detail and descriptions of related work at other institutions, can be downloaded from our website at http://www.media.mit.edu/affect.