Affective Computing


Overview

We present novel algorithms for identifying emotion, dominance, and friendliness characteristics of pedestrians, as well as for detecting deceptive traits in walking, based on their motion behaviors. We also propose models for conveying emotion, friendliness, and dominance traits in virtual agents. We present applications of our algorithms to simulate interpersonal relationships between virtual characters, facilitate socially-aware robot navigation, identify perceived emotions from videos of walking individuals, and increase the sense of presence in scenarios involving multiple virtual agents. We also present a dataset of videos of walking individuals, annotated with gaits and perceived-emotion labels.
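
As a concrete illustration of the gait-based emotion recognition problem, the following sketch classifies a perceived-emotion label from a sequence of 3D joint positions. It is a deliberately minimal baseline, not any of the published models listed below (e.g., STEP or the hierarchical attention pooling network); the architecture, joint count, and emotion set are all illustrative assumptions.

```python
# Minimal sketch of gait-based emotion classification (illustrative only).
import torch
import torch.nn as nn

class GaitEmotionClassifier(nn.Module):
    def __init__(self, num_joints=16, hidden_dim=128,
                 num_emotions=4):  # e.g. happy, sad, angry, neutral (assumed)
        super().__init__()
        # Each frame is a flattened vector of 3D joint coordinates.
        self.encoder = nn.LSTM(input_size=num_joints * 3,
                               hidden_size=hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_emotions)

    def forward(self, poses):  # poses: (batch, frames, joints, 3)
        b, t, j, c = poses.shape
        # Encode the gait sequence; keep the final hidden state.
        _, (h, _) = self.encoder(poses.reshape(b, t, j * c))
        return self.classifier(h[-1])  # perceived-emotion logits

model = GaitEmotionClassifier()
gait = torch.randn(8, 75, 16, 3)   # 8 clips, 75 frames, 16 joints
logits = model(gait)               # (8, 4) emotion scores
```
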
Currently, our efforts are focused on predicting perceived emotions from multiple modalities, such as faces, gaits, speech, and text, by investigating the correlations between these modalities. This direction should also enable us to infer or generate missing modalities.
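
The sketch below shows one simple way such multimodal cues could be combined: each modality is projected into a common space and fused before a shared emotion classifier, with absent modalities skipped rather than inferred. This is an illustrative late-fusion baseline, not the published M3ER model; all feature dimensions and modality names are assumptions.

```python
# Illustrative late-fusion sketch for multimodal emotion recognition.
import torch
import torch.nn as nn

class MultimodalEmotionFusion(nn.Module):
    def __init__(self, dims=None, fused_dim=128, num_emotions=4):
        super().__init__()
        # Assumed per-modality feature sizes (e.g., from pretrained encoders).
        dims = dims or {'face': 512, 'gait': 128, 'speech': 256, 'text': 768}
        # Project each modality into a common embedding space.
        self.project = nn.ModuleDict(
            {m: nn.Linear(d, fused_dim) for m, d in dims.items()})
        self.classifier = nn.Linear(fused_dim, num_emotions)

    def forward(self, features):  # dict: modality name -> (batch, dim)
        # Average the projected embeddings; modalities missing from the
        # input dict are simply skipped, so prediction degrades gracefully.
        projected = [torch.relu(self.project[m](x))
                     for m, x in features.items()]
        fused = torch.stack(projected).mean(dim=0)
        return self.classifier(fused)

model = MultimodalEmotionFusion()
# Works even when only a subset of modalities is observed.
out = model({'face': torch.randn(8, 512), 'text': torch.randn(8, 768)})
```
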
We are also developing artificial intelligence tools to improve mental health diagnosis for telehealth behavioral health services during COVID-19.

Project | Conference/Journal | Year
Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning | ACM Multimedia | 2021
Affect2MM: Affective Analysis of Multimedia Content Using Emotion Causality | CVPR | 2021
Dynamic Graph Modeling of Simultaneous EEG and Eye-tracking Data for Reading Task Identification | ICASSP | 2021
Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents | IEEE VR | 2021
Generating Emotive Gaits for Virtual Agents Using Affect-Based Autoregression | ISMAR | 2020
Emotions Don’t Lie: An Audio-Visual Deepfake Detection Method using Affective Cues | ACM Multimedia | 2020
Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping | ECCV | 2020
EmotiCon: Context-Aware Multimodal Emotion Recognition using Frege’s Principle | CVPR | 2020
M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues | AAAI | 2020
STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits | AAAI | 2020
EVA: Generating Emotional Behavior of Virtual Agents using Expressive Features of Gait and Gaze | ACM SAP | 2019
FVA: Modeling Perceived Friendliness of Virtual Agents Using Movement Characteristics | ISMAR | 2019
Modeling Data-Driven Dominance Traits for Virtual Characters using Gait Analysis | IEEE TVCG | 2019
Identifying Emotions from Walking using Affective and Deep Features | arXiv | 2019
Pedestrian Dominance Modeling for Socially-Aware Robot Navigation | ICRA | 2019
The Emotionally Intelligent Robot: Improving Social Navigation in Crowded Environments | IROS | 2019
Data-Driven Modeling of Group Entitativity in Virtual Environments | VRST | 2018
Classifying Group Emotions for Socially-Aware Autonomous Vehicle Navigation | CVPR Workshop | 2018
Aggressive, Tense or Shy? Identifying Personality Traits from Crowd Videos | IJCAI | 2017
SocioSense: Robot Navigation Amongst Pedestrians with Social and Psychological Constraints | IROS | 2017
F2FCrowds: Planning Agent Movements to Enable Face-to-Face Interactions | PRESENCE | 2017
Generating Virtual Avatars with Personalized Walking Gaits Using Commodity Hardware | ACM Multimedia | 2017
PedVR: Simulating Gaze-Based Interactions between a Real User and Virtual Crowds | VRST | 2016