EVA: Generating Emotional Behavior of Virtual Agents using Expressive Features of Gait and Gaze


We present EVA, a novel real-time algorithm for generating virtual agents that express a range of emotions. Our approach uses non-verbal movement cues, specifically gait and gaze, to convey happy, sad, angry, or neutral emotional states. Our studies suggest that EVA's combined gait and gazing behaviors can considerably increase the sense of presence in scenarios with multiple virtual agents. Our results also indicate that both gait and gaze features contribute to the perception of emotions in virtual agents.

ACM Symposium on Applied Perception