March 5, 2011 - 1AI has learned from ByteDance that Dream AI's "Action Mimic" feature launched today. Users enter through the "Digital Person" portal and simply upload a character image and a reference video to generate a dynamic video in which the character in the image imitates the movements of the person in the reference video, with a one-to-one reproduction of emotions.
The feature supports several framings, including portraits, busts, and full-body shots, and is powered by ByteDance's Intelligent Creation digital human team. The team uses a hybrid of explicit and implicit feature-driven methods that can reproduce both body movements and facial expressions in sync across these framings. For facial expression control, its self-developed face motion tokenizer accurately captures expression details from the driving video, enhancing the vividness of the generated output.
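ByteDance has not published the internals of its face motion tokenizer, but the general idea behind motion tokenization can be sketched: continuous per-frame facial motion features are mapped to discrete tokens by nearest-neighbour lookup in a learned codebook. The codebook entries and two-dimensional "expression features" below are invented purely for illustration.

```python
from math import dist

def tokenize_motion(frames, codebook):
    """Map each frame's motion feature vector to the index of the
    nearest codebook entry (a discrete 'motion token')."""
    return [min(range(len(codebook)), key=lambda i: dist(frame, codebook[i]))
            for frame in frames]

# Toy 2-D expression features (e.g. mouth openness, brow raise) -- hypothetical.
codebook = [(0.0, 0.0),   # token 0: neutral
            (1.0, 0.0),   # token 1: mouth open
            (0.0, 1.0)]   # token 2: brows raised

frames = [(0.1, 0.0), (0.9, 0.1), (0.2, 0.95)]
print(tokenize_motion(frames, codebook))  # [0, 1, 2]
```

In a real system the codebook would be learned (e.g. via vector quantization) and the features extracted from video frames; the token sequence then drives expression generation frame by frame.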
Dream AI currently offers 3 official action templates and also lets users upload their own local files; generated videos can be up to 30 seconds long. "Action Mimic" is reportedly live on both the Dream AI app and web versions, and the platform will run security audits on video content and add an "AI-generated" watermark to output videos.