
Called Gen-4, the model is rolling out to the company's individual and enterprise customers. Runway claims that it can generate consistent characters, locations, and objects across scenes, maintain 'coherent world environments,' and regenerate elements from different perspectives and positions within scenes. 'Gen-4 can utilize visual references, combined with instructions, to create new images and videos utilizing consistent styles, subjects, locations, and more,' Runway wrote in a blog post, '[a]ll without the need for fine-tuning or additional training.'

In an accompanying post on X, Runway said that 'Gen-4 sets a new standard for video generation and is a marked improvement over Gen-3 Alpha. It excels in its ability to generate highly dynamic videos with realistic motion as well as subject, object and style consistency with superior prompt adherence and best-in-class world […].'

Runway, which is backed by investors including Salesforce, Google, and Nvidia, offers a suite of AI video tools, including...