An AI-powered interactive experience, the AI’M SPEAKING concept is inspired by the painted allegories of the Baroque period. The Baroque interest in human psychology had as a corollary an intense scrutiny of the physical world, spurred by the optical discoveries of the 16th and 17th centuries. Baroque allegories deployed rhetorical devices that conveyed subtle meaning through spectacular, sometimes theatrical, experiences.
A gallery of interactive videos, the AI’M SPEAKING exhibit draws on Baroque dramatization with a series of iridescent interactive portraits evoking the inner life of psoriasis patients. Emotional intelligence and self-perception are central themes of the concept. The portrait series embeds facial analysis and speech recognition technologies, enabling the artworks to react to visitors’ facial expressions and to interact with them through speech.
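As an illustration of the sensing layer, the sketch below shows how a portrait might detect that a visitor’s face is in view using OpenCV’s bundled Haar cascade. The camera index and detection thresholds are assumptions; the exhibit’s actual facial-analysis models are not specified here.

```python
import cv2

# Minimal sketch, assuming an OpenCV-compatible camera feed. A stock Haar
# cascade stands in for whatever expression/silhouette models drive the
# real installation.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def visitor_face_present(frame) -> bool:
    """Return True if at least one frontal face is visible in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

# Hypothetical usage: grab one frame from a gallery camera.
capture = cv2.VideoCapture(0)  # camera index is an assumption
ok, frame = capture.read()
if ok:
    print("visitor in view:", visitor_face_present(frame))
capture.release()
```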
The characters’ appearance, at once intimate and spectacular, nocturnal and fluorescent, expresses the many challenges of living with skin that attracts unwanted attention, and the desperate attempt to hide it under layers of clothes and makeup. The interactive experience builds on this dramatic tension and the emotional roller-coaster it entails. At first glance, each subject appears to be a still portrait of a person asleep. As visitors approach the artwork, the character awakens, makes eye contact, and directly engages the visitor by asking a question. Each segment of the conversation between characters and visitors is powered by Machine Learning and Natural Language Processing and scripted from real patient testimonials.
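To make the sleep-to-wake choreography concrete, here is a minimal, hypothetical state machine for a single portrait. The state names, distance threshold, and opening question are illustrative assumptions, not the production logic.

```python
from enum import Enum, auto

class State(Enum):
    ASLEEP = auto()      # still portrait, no interaction
    AWAKE = auto()       # eye contact made, opening question asked
    CONVERSING = auto()  # visitor has replied; dialogue in progress

class PortraitCharacter:
    """Hypothetical sketch of one portrait's wake/sleep behavior."""

    WAKE_DISTANCE_M = 2.0  # assumed trigger perimeter, in meters

    def __init__(self, opening_question: str):
        self.state = State.ASLEEP
        self.opening_question = opening_question

    def on_visitor_distance(self, distance_m: float) -> str | None:
        """Update state from a proximity reading; return any line to speak."""
        if self.state is State.ASLEEP and distance_m <= self.WAKE_DISTANCE_M:
            self.state = State.AWAKE
            return self.opening_question  # character awakens and engages
        if distance_m > self.WAKE_DISTANCE_M:
            self.state = State.ASLEEP  # visitor has left; return to stillness
        return None

    def on_visitor_reply(self) -> None:
        """Advance to open conversation once the visitor answers."""
        if self.state is State.AWAKE:
            self.state = State.CONVERSING
```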
While interacting with these ‘living’ portraits, visitors become part of a patient-focused conversation, deepening their understanding of the unique challenges of living with psoriasis. The exhibit ultimately challenges the myths surrounding psoriasis while raising awareness and building empathy towards patients.
This concept integrates Computer Vision, Deep Learning, and Natural Language Processing to enable naturalistic interactions between artificial agents and users. Interactions are powered by cognitive sensors and rendered with real-time motion retargeting and lip-sync video technologies. Characters are created from interwoven video and CGI, which yields realistic animation and more natural transitions between ‘sleep’ and interaction states. Enhanced with motion sensors, cameras, 3D microphones, and a sound system, the characters derive their behavior and speech from real-time data processing combined with underlying scripted branching scenarios. Motion is detected every time a visitor steps into a defined perimeter. Facial and silhouette analysis, eye-gaze tracking, and speech recognition enable the artificial characters to detect visitors and respond naturalistically.
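As a sketch of the scripted branching-scenario idea under stated assumptions: recognized visitor speech is reduced to a coarse intent label, which selects the next scripted line. The intent keys, sample lines, and keyword matcher below are hypothetical stand-ins for the production Natural Language Processing models.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueNode:
    """One node in a scripted branching scenario."""
    line: str  # what the character says at this node
    branches: dict[str, "DialogueNode"] = field(default_factory=dict)

    def next(self, intent: str) -> "DialogueNode":
        # Fall back to a holding line when the intent is unrecognized.
        return self.branches.get(intent, self.branches.get("fallback", self))

def classify_intent(utterance: str) -> str:
    """Toy keyword matcher standing in for a real NLP intent model."""
    text = utterance.lower()
    if any(word in text for word in ("yes", "yeah", "i have")):
        return "yes"
    if any(word in text for word in ("no", "never")):
        return "no"
    return "fallback"

# Hypothetical scenario fragment, in the spirit of testimonial-based scripts.
root = DialogueNode(
    "Have you ever tried to hide something everyone could see?",
    branches={
        "yes": DialogueNode("Then you know how heavy a layer of makeup can feel."),
        "no": DialogueNode("Imagine waking to stares you never asked for."),
        "fallback": DialogueNode("Take your time. I have been asleep a long while."),
    },
)

node = root.next(classify_intent("Yes, I think I have."))
print(node.line)  # -> "Then you know how heavy a layer of makeup can feel."
```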
Every project we lead is conceptualized from a holistic perspective and integrates the most relevant, best-suited technologies. We create environments, storyworlds, and experiences that emphasize seamless, delightful interactions between digital environments, artificial agents, and human users.