Dost is an AI-driven virtual conversational assistant. Dost's gestures, facial expressions, and speech are generated in real time using state-of-the-art research in affective computing, emotion modeling, character and speech generation, and virtual reality. Dost combines our generative modeling work to create realistic interactive avatars designed to hold conversations and behave like real humans, not only in the content of what they say but also in their emotional and behavioral expressiveness. We are also using Dost to create virtual therapists to bridge the demand-supply gap in telemental health.
As the world increasingly uses digital and virtual platforms for everyday communication and interaction, there is a heightened need to create highly realistic virtual agents endowed with social and emotional intelligence. Interactions between humans and virtual agents are being used to augment traditional human-human interactions in different Metaverse applications. Human-human interactions rely heavily on a combination of verbal communication (the text), interpersonal relationships between the people involved (the context), and more subtle non-verbal face and body expressions during communication (the subtext). While context is often established at the beginning of an interaction, virtual agents in social VR applications need to align their text with their subtext throughout the interaction, thereby improving the human users' sense of presence in the virtual environment. Affective gesticulation and gaits are an integral component of subtext, where humans use patterns of movement for hands, arms, heads, and torsos to convey a wide range of intents, behaviors, and emotions.
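To make the text-subtext alignment concrete, here is a minimal sketch of how an affect signal might condition a gesture clip. The valence/arousal parameterization, function name, and linear blending scheme are illustrative assumptions, not this project's actual model, which relies on learned generative networks.

```python
import numpy as np

# Hypothetical sketch: blend a neutral gesture clip toward an affective
# target pose based on a (valence, arousal) affect signal. Real systems
# use learned generative models; this only illustrates the conditioning idea.

def condition_gesture(neutral_clip: np.ndarray,
                      affect_offset: np.ndarray,
                      valence: float,
                      arousal: float) -> np.ndarray:
    """neutral_clip: (frames, joints, 3) joint positions over time.
    affect_offset: (joints, 3) per-joint displacement at maximal valence.
    valence, arousal: affect coordinates in [-1, 1]."""
    # Higher arousal -> larger, more exaggerated movements; valence shifts
    # posture (e.g., open vs. closed torso). Arousal is mapped to [0, 1].
    w = np.clip(0.5 * (arousal + 1.0), 0.0, 1.0)
    amplitude = 1.0 + 0.5 * w           # scale motion amplitude
    posture = valence * affect_offset   # postural shift, broadcast over frames
    return amplitude * neutral_clip + posture
```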
Exploring virtual environments/Metaverse is an integral part of immersive virtual experiences. Real walking is known to provide benefits to the sense of presence and task performance that other locomotion interfaces cannot. Using an intuitive locomotion interface like real walking benefits any virtual experience in which travel is crucial, such as virtual house tours and training applications. Our work on redirected walking (RDW) as a locomotion interface allows users to naturally explore virtual environments (VEs) that are larger than or different from the physical tracked space, while minimizing how often the user collides with obstacles in the physical environment.
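The sketch below illustrates one standard RDW mechanism, a curvature gain: while the user walks, the virtual scene is rotated imperceptibly so that a straight virtual path maps to a physical arc, keeping the user inside the tracked space. The ~7.5 m detection-threshold radius is an assumption drawn from the general RDW literature, and the code is a simplified illustration rather than our controller.

```python
import math

# Minimal redirected-walking sketch using a curvature gain. Walking a
# distance d along a circle of radius r subtends an angle d / r, so
# injecting that much yaw per step bends the user's physical trajectory
# onto the arc while their virtual path stays straight.

CURVATURE_RADIUS_M = 7.5  # assumed smallest arc radius users do not notice

def curvature_yaw_delta(step_length_m: float, steer_sign: int = 1) -> float:
    """Yaw (radians) to inject into the virtual camera for one step;
    steer_sign picks the direction the user is steered in."""
    return steer_sign * step_length_m / CURVATURE_RADIUS_M

# Per-frame usage: a user moving at 1.4 m/s rendered at 90 Hz gets
# roughly 0.12 degrees of injected yaw per frame.
step = 1.4 / 90.0
print(math.degrees(curvature_yaw_delta(step)))  # ~0.119 deg/frame
```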
In recent years, there has been renewed interest in sound rendering for interactive Metaverse/XR applications. Our group has been working on novel algorithms for sound synthesis, as well as geometric and numerical approaches to sound propagation.
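As a concrete illustration of the geometric approach, the sketch below uses the classic image-source idea to compute the delay and attenuation of a single first-order wall reflection. The single-wall setup, 1/r spherical spreading, and function name are simplifying assumptions for illustration, not our renderer's actual pipeline.

```python
import numpy as np

# Image-source sketch for geometric sound propagation: a first-order
# reflection off a wall is modeled by mirroring the source across the
# wall plane and treating the mirrored "image" as a free-field source.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def image_source_reflection(src, listener, wall_x):
    """First-order reflection off the plane x = wall_x.

    Returns (delay_seconds, amplitude) for the reflected path using
    1/r spherical spreading; wall absorption is ignored for brevity."""
    src, listener = np.asarray(src, float), np.asarray(listener, float)
    image = src.copy()
    image[0] = 2.0 * wall_x - image[0]    # mirror the source across the wall
    r = np.linalg.norm(listener - image)  # length of the reflected path
    return r / SPEED_OF_SOUND, 1.0 / max(r, 1e-6)

delay, gain = image_source_reflection([1, 0, 0], [4, 0, 0], wall_x=-2.0)
print(f"delay {delay * 1000:.1f} ms, gain {gain:.3f}")  # ~26.2 ms, 0.111
```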
We developed novel approaches for creating user-centric social experiences in virtual environments populated with both user-controlled avatars and intelligent virtual agents. We propose algorithms to increase the motion and behavioral realism of the virtual agents, creating more immersive virtual experiences. Agents are capable of finding collision-free paths in complex environments and of interacting with the avatars using natural language processing and generation, as well as non-verbal behaviors such as gaze, gestures, and facial expressions. We also built a multi-agent simulation framework that can generate plausible behaviors and full-body motion for hundreds of agents at interactive rates.
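The sketch below shows the per-step structure of a simple force-based multi-agent simulation: each agent is attracted toward its goal and repelled by nearby agents. The parameters and exponential repulsion term are illustrative assumptions; interactive crowd simulators more commonly use reciprocal velocity-obstacle methods (e.g., RVO/ORCA) for principled collision avoidance, and this is not our framework's actual algorithm.

```python
import numpy as np

# Minimal force-based crowd-steering sketch: goal attraction plus
# pairwise neighbor repulsion, vectorized over all agents so a single
# step is one batched update suitable for interactive rates.

def step(pos, goals, dt=0.05, pref_speed=1.4, repulse=2.0, radius=0.5):
    """pos, goals: (n, 2) arrays of positions. Returns updated positions."""
    to_goal = goals - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    vel = pref_speed * to_goal / dist            # goal-seeking velocity
    diff = pos[:, None, :] - pos[None, :, :]     # pairwise offsets
    d = np.linalg.norm(diff, axis=2) + 1e-9
    np.fill_diagonal(d, np.inf)                  # ignore self-interaction
    push = repulse * np.exp((2 * radius - d) / radius)[..., None]
    vel += (diff / d[..., None] * push).sum(axis=1)  # neighbor repulsion
    return pos + vel * dt

# Two agents walking toward each other's starting points.
pos = np.array([[0.0, 0.0], [5.0, 0.1]])
goals = np.array([[5.0, 0.0], [0.0, 0.0]])
for _ in range(100):
    pos = step(pos, goals)
```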