At the World Artificial Intelligence Conference (WAIC), SenseTime launched Vimi, the first "controllable" large model for character video generation. From a single photo in any style, Vimi can generate a character video consistent with a target motion, and it supports multiple driving methods: existing character videos, animation, audio, text, and other inputs.
Unlike image-based expression control technologies that can only control head movements, SenseTime says Vimi not only enables precise control of the character's facial expressions but also controls natural body movement within the upper-body region of the photo, automatically generating matching changes in hair, clothing, and background.
Meanwhile, Vimi can stably generate one-minute, single-shot character videos whose image quality does not degrade or distort over time, meeting the needs of entertainment and interactive scenarios that require stable video generation over long durations.
Vimi will be fully open to consumer (C-end) users: a user only needs to upload high-definition photos of a person from different angles to automatically generate a digital avatar and portrait videos in a variety of styles.
Characters in Vimi-generated videos are no longer limited to stiff facial movements; gestures, body motion, hair, and more combine into a more complete and unified character performance, allowing creators to edit and re-create based on the generated footage.
SenseTime said it will announce more details about Vimi tomorrow; IT Home will continue to follow the story and bring follow-up reports.