Video generation AI startup RunwayML today launched its new "Expand Video" feature. Users can enter text prompts to build on the original footage, generating additional content with flexible video scaling. The system can expand the frame while maintaining visual consistency.
RunwayML says that with multiple extensions, users can achieve dynamic camera effects such as "crash zoom" and "pull-back shot," transforming a still image into a dynamic sequence with a cinematic feel. The "Expand Video" feature will be gradually rolled out to Gen-3 Alpha Turbo users first.
As 1AI previously reported, Runway released its Gen-3 Alpha video generation model in June of this year. Compared to its previous flagship video model, Gen-2, the model offers "significant" improvements in generation speed and fidelity, as well as fine-grained control over the structure, style, and motion of the generated video. Anastasis Germanidis, co-founder of Runway, says that Gen-3's video generation is significantly faster than Gen-2's: it takes 45 seconds to generate a 5-second clip and 90 seconds to generate a 10-second clip.