Runway Gen-3

A few weeks ago we looked at Runway's new Gen-3 video generation model and how it can take a single image and use it as the start or end frame of a generated video. Since then, they've unveiled a new 'video to video' feature, which generates new footage guided by an existing video.

To test this process, I rendered out a simple previz-style shot where the camera flies down from above a city street and comes to rest on a car under a bridge.

I uploaded the shot to Runway and prompted for "Dramatic cinematography, camera swoops down over London and onto a classic black sedan parked under a bridge, cool greenish bluish tones", and this is what Gen-3 generated:

Pretty interesting! Obviously the quality isn't there for a final output, but it does capture some of the vibe of what a finished shot could look like. The question is: how useful is this for previz, storyboarding and beyond?

Film and TV studio Lionsgate appears to see potential in Runway's offering: they've signed a deal with the AI company to train a bespoke model on their own content, and claim it will save millions. On the flip side, Runway and other AI companies have been accused of using content without permission to train their public models, something that factors into AI use policies at broadcasters and studios, as well as at VFX companies like ours.
