OpenAI recently released Sora, a model that generates video content from text descriptions entered by the user.
The model can simulate the real physical world in depth, marking a major leap forward in AI's ability to understand and interact with real-world scenes.
Sora follows the user's prompt closely and can create videos up to a minute long while maintaining high visual quality. This opens up a wide range of possibilities for artists, filmmakers, and students who need to produce video.
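Sora had no public API at the time of the announcement, but a hypothetical sketch can illustrate what the prompt-to-video workflow described above might look like in code. Everything below (the endpoint path, model name, request fields, and response fields) is an illustrative assumption, not a documented interface:

```python
# Hypothetical sketch of a text-to-video request, modeled loosely on the
# shape of OpenAI's other generation APIs. Sora had no public API at
# announcement; endpoint, model name, and fields here are assumptions.
import time
import requests

API_KEY = "sk-..."  # placeholder; supply your own key
BASE_URL = "https://api.openai.com/v1"  # assumed base URL

def generate_video(prompt: str, seconds: int = 60) -> bytes:
    """Submit a prompt and poll until the (hypothetical) job finishes."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    # Submit the generation job (hypothetical endpoint and fields).
    job = requests.post(
        f"{BASE_URL}/videos",
        headers=headers,
        json={"model": "sora", "prompt": prompt, "seconds": seconds},
    ).json()
    # Poll the job until it reaches a terminal state (assumed status values).
    while job.get("status") not in ("completed", "failed"):
        time.sleep(5)
        job = requests.get(
            f"{BASE_URL}/videos/{job['id']}", headers=headers
        ).json()
    if job["status"] == "failed":
        raise RuntimeError("video generation failed")
    # Download the finished clip (assumed download_url field).
    return requests.get(job["download_url"]).content

if __name__ == "__main__":
    clip = generate_video("a stylish woman walking down a neon-lit Tokyo street")
    with open("clip.mp4", "wb") as f:
        f.write(clip)
```

The submit-then-poll pattern is common for long-running generation jobs, since rendering a minute of video is unlikely to finish within a single HTTP request.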
OpenAI says it has handed Sora over to red teamers, people who play the role of adversaries in security exercises, to test the model and assess potential harms and risks.
In addition, OpenAI has invited a group of creative professionals to test Sora and provide feedback on its usefulness in professional settings, and it plans to improve the model based on that feedback to ensure it effectively meets users' needs. The demo videos are stunning.
Sora can create complex scenes with multiple characters, specific types of motion, and detailed backgrounds, generating videos that accurately reflect the user's prompt. For example, it can produce a video of a stylish woman walking down a neon-lit Tokyo street, a giant mammoth trudging through snow, or even a movie trailer about an astronaut's adventures.
However, Sora has limitations, including difficulty modeling the physics of complex scenes and understanding specific instances of cause and effect. OpenAI says Sora may also confuse spatial details in a prompt and struggle to accurately depict events that unfold over time.