The history of AI-driven image generation can be traced back to the early 1990s, when creative minds began using AI algorithms to produce art, music, and visual effects. The 2021 debut of OpenAI's DALL-E, followed by DALL-E 2 in 2022, accelerated the widespread adoption of AI image generators.
Today, the accuracy, realism, and controllability of AI systems for image and video generation are advancing rapidly. One of the most popular AI image generators is Stable Diffusion, a deep learning text-to-image model that lets millions of people generate striking art in moments from a textual prompt.
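For readers who want to try this themselves, below is a minimal sketch of text-to-image generation with Stable Diffusion through Hugging Face's open-source diffusers library. The model checkpoint, hardware assumption (a CUDA-capable GPU), and prompt are illustrative choices, not details from the article itself.

```python
# Minimal text-to-image sketch using the diffusers library.
# Assumes `pip install diffusers transformers torch` and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion v1.5 weights (published under the runwayml namespace).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit on consumer GPUs
).to("cuda")

# Generate an image from a textual prompt and save it to disk.
prompt = "turtles soaring through the sky"  # prompt borrowed from the article
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("turtles.png")
```

Running this produces a single image matching the prompt in a few seconds on a modern GPU, which is the "eye-catching art within moments" workflow the article describes.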
Runway, the startup that co-created the Stable Diffusion image generator, has recently unveiled an AI model called Gen-2. It can turn any text description, such as "turtles soaring through the sky," into a three-second video clip that matches the prompt.
As described on its website, Gen-2 is a versatile AI system capable of producing unique videos using text, images, or video snippets.
Citing safety and commercial considerations, Runway has opted not to release the model broadly or open-source it as Stable Diffusion was. For now, access to the text-to-video model is limited to a waitlist on Runway's website and through Discord.
The idea of using AI to generate videos from text is not new; last year, both Meta Platforms and Google published research on text-to-video AI models. However, Runway's co-founder and CEO, Cristobal Valenzuela, argues that what sets Runway apart is its commitment to making its text-to-video model available to the general public.