Runway's new web app turns a prompt or starter image into an edited scene with dialogue, sound effects, and shot pacing. Creators can now block out whole sequences instead of stitching together isolated clips.

Runway’s new Multi-Shot App is framed as a scene builder, not just another text-to-video endpoint. In the announcement, the company says a single prompt can produce dialogue, sound effects, intentional cuts, pacing, and cinematic framing, and that the workflow supports either image-to-video or pure text-to-video generation. That matters for filmmakers and designers because the unit of generation shifts from isolated shots to a blocked sequence.
Runway also says the tool is available now on the web app, with its follow-up post linking directly to the product. The company’s examples make the pitch concrete: instead of showing one polished hero clip, the thread shows short scene fragments built from plain-language prompts, including character exchanges and timing-based beats.
The strongest pattern in Runway’s demo thread is that the prompts read more like scene briefs than camera commands. The squirrel-and-seagull example in the first demo starts from a simple comic premise, while the tension beat strips the wording down even further to "The two sit in awkward silence as the tension rises," suggesting the app is inferring coverage and pacing from narrative intent rather than requiring detailed shot lists.
The other demos broaden that range. The mice argument uses a dialogue-heavy premise with a clear comic reversal; the monster therapy scene stages multiple characters reacting inside a single setup; and the lion-on-couch clip pushes toward more photoreal character performance. Then the swamp fantasy prompt shifts gears completely, turning a long-form descriptive prompt about a humanoid toad, an old hag, and a foggy marsh into a more overtly cinematic fantasy beat.
Across those examples, the creative takeaway is less about one visual style than about structure: prompts that specify relationship, conflict, or a tiny dramatic turn seem to be what Runway wants the model to expand into cuts, sound, and scene rhythm.
Zopia lets creators start from an idea, script or images, pick a video model, then auto-generate characters, storyboards, clips and 4K exports. More of the film pipeline is bundled into one app.
Seedance 2.0 is now showing up across CapCut Video Studio, Dreamina and Pippit with multi-scene timelines and shot templates. Creators can use it to move from single clips to editable long-form production.
Posts report Nano Banana 2 now offers 4K image output, and creators are using it for poster systems, hidden-object layouts and character sheets. Higher-res stills should travel better into video, branding and print workflows.
Official and partner demos show Uni-1 handling localized edits, dense layouts, manga generation and Pouty Pal chibis. Creators can reuse one model across avatar, editorial and comic workflows.
Topaz says Starlight Precise 2.5 improves realism, cuts plastic-looking artifacts and upscales AI video to 4K in Astra, partner apps and API. Use it as a finishing pass when generated footage needs cleanup.
Introducing the Multi-Shot App. An easy way to go from a simple prompt to a thoughtfully crafted scene. All with dialogue, sound effects, intentional cuts, pacing and cinematic framing. Start from an image or go purely Text to Video for total creative exploration. Available now.
Prompt: A cinematic feature film about a humanoid toad wearing a wide-brimmed hat and a long cloak who visits an old hag to get medicine from her potion shop in a foggy marsh in the swamp.