While attention in the AI video space has been occupied by a host of newcomers like Sora, Luma, and Kling, some people have already forgotten the one who ruled the roost for years. His name is Runway.
After updating the Gen2 model one last time in November, and announcing that they were forming a team to begin their foray into world models.
They never made another move.
In a flash, almost 8 months passed.
Sora, Vidu, LTX, Luma, and Kling took turns blowing up. Runway stayed silent; meme-loving netizens even made a meme for it:
A sinking tombstone.
But today, the King of the AI video field.
He's finally back.
Quietly, Runway officially released its Gen3 model.
https://runwayml.com/blog/introducing-gen-3-alpha/
Without further ado, let's look at a couple of Gen3 cases.
I let out a long sigh. You have to admit, in this AI era, if you ask who has run the longest race in AI video, from the very beginning there has only been one name.
Runway.
In August 2022, together with Stability AI, they trained a model that would become world-famous: Stable Diffusion.
In February 2023, they released Gen1, which supported video style transfer.
In June 2023, the release of Gen2 kicked off the era of text-to-video and image-to-video.
And in June 2024, a full year later, Gen3, their step toward world models, is finally fucking here!!!!!
I'm genuinely emotional. When Gen2 had just launched last year, I made a trailer for The Wandering Earth 3 to show what AI video could do, and that one video let film and TV people all over China see the charm of AI video.
Then in November, when Gen2 was updated with more stable lighting and textures, I made The Three-Body Problem; and in February, for Runway's Gen48 contest, I made The Last Goodbye.
It's safe to say that the gears of my destiny have been entangled with Runway countless times.
And today, Runway's Gen3 was finally released, albeit in Alpha.
But it also proves that the King has always been there.
I've gone through all the officially posted clips and summarized a few points:
1. Extremely stable light and shadow
Take this official case:
Prompt: Subtle reflection of a woman in the window of a train moving at super speed in a Japanese city.
Night lighting is absolutely one of the hardest things to get right in AI video, let alone the rapidly shifting night lights outside a high-speed train. Gen3 pulled this effect off: not perfect, but extremely varied and terrifyingly stable.
2. 10s length
As you can see, all of Gen3's cases are 10 seconds long.
And odds are, when Gen3 opens up for everyone to use, the length everyone can generate will also be 10s.
Best of all, according to their CEO Cristóbal Valenzuela, Gen3's generation speed is also very fast.
A 5s video takes 45s to generate, and a 10s video takes 90s.
That basically crushes the generation speed of every second-generation AI video model on the market today. After all, it's hard to go back to the days of waiting several minutes for a single clip.
3. Strong aesthetics
A lot of previous AI video products had terrible aesthetics. Really. Genuinely ugly.
And then there's the one that loves adding its own drama to every shot, and crucially, it's mega-ugly drama. It's god-awful to use. Yes, Luma, I'm talking about you.
Runway, on the other hand, has always been known for strong aesthetics. After all, they started out as a serious film-and-TV company, and even worked on the visual effects for Everything Everywhere All at Once; far better than amateurs.
Like these two:
Prompt: Wide symmetrical shot of a painting in a museum. The camera is zoomed in close to the picture.
Prompt: An aerial view of a stealthy figure ascending between tall buildings.
The color scheme and style, love it, really love it.
4. Imagination looks reliable
A lot of models are strong at real-world footage, but once you ask for fantasy, surreal, sci-fi, or magical imagery, they're useless. It feels just like overfitting.
Runway Gen3 looks strong on this front, though it's hard to judge until you actually test it yourself. But I, personally, still have a lot of confidence in Runway.
For example:
Prompt: A huge, strange creature is seen walking through a window in a run-down city at night, with only a streetlight faintly illuminating the surroundings.
Prompt: Hyper time-lapse of a piece of silver fabric flying through an entire corridor with flashing lights.
5. Physical laws
Obeying the laws of physics has basically become the baseline for second-generation AI video. Runway Gen3's physics look great too; it's basically at the industry's first-echelon level.
Prompt: An older man playing piano, lit from the side.
Overall, I liked it.
In the official documentation, Runway describes Gen-3 Alpha as follows.
"Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models."
But the world-model part isn't the most important thing, since everyone has already seen Sora, Kling, and the rest.
The most important thing is this sentence.
"Available control modes include Motion Brush, Advanced Camera Controls, Director Mode, and upcoming tools for finer control of structure, style, and movement."
I've talked with a lot of AI creators, and there is a surprising unity of opinion: Luma and Kling are toys.
Because of controllability.
A complete AI video production demands strong controllability on top of the laws of physics; otherwise you end up like the director of Air Head (the balloon-man short) accusing Sora of yielding only 1 usable shot out of 300 fucking generations.
As for Luma and Kling, both offer only two modes, text-to-video and image-to-video, and Kling's image-to-video isn't even live yet.
It's not enough. It's far from enough.
And Runway's goal, from the day they were founded in 2018, has been to disrupt the film industry.
So they know full well just how important controllability is.
So they built camera controls, Motion Brush, character performance tools, all kinds of fun features.
All of it serves creators and a more controllable picture. And the AI video tool I use the most to this day is still Runway.
Now, Runway's Gen3 arrives with that full set of tools out of the box.
It will be open to all in the next few days.
With today's launch of Runway's Gen-3 Alpha,
I believe AI video has officially entered the 2.0 era.
Yes, that era of total shock.
Embrace the change.
And welcome the return of the King.