ElevenLabs launched Flows, a node-based canvas inside ElevenCreative that chains image, video, voice, music, SFX, lip sync, and voice changing in one workspace. Use it to keep context across the pipeline instead of re-exporting between apps.

Flows is a new visual pipeline builder inside ElevenCreative. In the core product rundown, it is described as a node-based canvas that combines more than 35 image and video models with ElevenLabs' own audio stack, including text-to-speech, music, sound effects, lip sync, and voice changing.
The practical change is less about any single generation model and more about continuity across steps. The main demo shows the company aiming to keep context inside one workspace rather than having creators bounce from image app to video tool to voiceover tool and then into editing for final assembly. The same launch thread says API access is coming soon. It also includes a sponsorship disclosure, so the strongest confirmed facts here are the launch timing, tier availability, model count, and integrated tool list.
The clearest creator angle in the launch material is repeatable multi-asset production. Hasan Toor's examples map Flows to e-commerce drops that auto-generate matching imagery, video, and narration; agency workflows that keep campaign output stylistically consistent across clients; and filmmaking pipelines that move from concept frames to rough-cut materials in one place.
A second thread segment, shown in the workflow explainer, argues that the bottleneck in current AI production is not generation quality alone but all the export and handoff steps between tools. Flows' appeal is that voice, music, SFX, and visual generation sit on the same canvas, which could matter most for teams making lots of variants rather than one polished asset at a time.
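The variant-heavy pattern the thread describes can be sketched as a simple parameter sweep. This is an illustrative sketch only: the field names (voice, music, ratio) and values are hypothetical campaign settings, not a real Flows API.

```python
from itertools import product

# Hypothetical variant sweep: enumerate every combination of
# voice, music style, and aspect ratio for one campaign brief.
# Names and values are illustrative, not product settings.
voices = ["warm-narrator", "upbeat-host"]
music = ["ambient", "synthwave"]
ratios = ["9:16", "16:9"]

variants = [
    {"voice": v, "music": m, "ratio": r}
    for v, m, r in product(voices, music, ratios)
]

print(len(variants))  # 2 * 2 * 2 = 8 briefs from one setup
```

The point of the sketch is scale: two options per axis already yields eight deliverables, which is the kind of fan-out that makes per-variant export and handoff steps painful.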
A shared workflow converts GTA-style stills into photoreal images with Nano Banana 2, then animates them in LTX-2.3 Pro 4K using detailed material, skin, vehicle, and camera prompts. Try it for trailer-style previsualization if you want more control at lower cost.
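A two-stage prompt like the one in that workflow can be assembled from labeled cue categories. The categories (material, skin, vehicle, camera) come from the shared workflow's description; the specific wording below is an illustrative guess, not the creator's actual prompts.

```python
# Hypothetical prompt builder for a stills-to-video workflow.
# Cue categories mirror the shared workflow; wording is illustrative.

def build_prompt(subject: str, cues: dict[str, str]) -> str:
    """Join a subject description with labeled realism cues."""
    parts = [subject] + [f"{k}: {v}" for k, v in cues.items()]
    return ", ".join(parts)

image_prompt = build_prompt(
    "photoreal rerender of a GTA-style street scene",
    {
        "material": "worn asphalt, matte car paint with micro-scratches",
        "skin": "natural pores, subsurface scattering",
        "vehicle": "accurate reflections, slight tire deformation",
        "camera": "35mm lens, shallow depth of field, golden hour",
    },
)

print(image_prompt)
```

Keeping the cues in a dict makes it easy to swap one category (say, camera settings) between the image pass and the animation pass while holding the rest constant.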
Topview added Seedance 2.0 to Agent V2, pairing multi-scene generation with a storyboard timeline and Business Annual access billed as 365 days of unlimited generations. That moves long-form video workflows toward editable sequences instead of stitched clips.
Creators are moving from V8 calibration complaints to darker film-still scenes, fashion shots, and worldbuilding tests, with ECLIPTIC remakes showing stronger depth and lighting. Retest saved SREF recipes if you rely on V8 for cinematic ideation.
Shared Nano Banana 2 workflows now cover turnaround sheets, distinctive facial traits, and photoreal rerenders that keep the framing of a reference image. Use one prompt grammar for concept art, editorial portraits, and animation prep.
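One way to read "one prompt grammar" is a single slot template reused across output types. The slot structure below is an illustrative guess at such a grammar, not Nano Banana 2's actual syntax.

```python
# Illustrative shared grammar: the same slot structure reused for
# three different use cases. The template is hypothetical.
GRAMMAR = "{use_case} of {subject}, {traits}, keep framing of reference image"

jobs = [
    ("character turnaround sheet", "a sci-fi courier", "asymmetric scar, silver tooth"),
    ("editorial portrait", "a jazz bassist", "weathered hands, wire-frame glasses"),
    ("photoreal rerender", "a village market scene", "morning haze, wet cobblestones"),
]

prompts = [GRAMMAR.format(use_case=u, subject=s, traits=t) for u, s, t in jobs]
for p in prompts:
    print(p)
```

The payoff is consistency: only the slot values change between concept art, portraits, and animation prep, so framing and trait instructions stay identical across asset types.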
From the launch thread: "Real use cases that are actually wild:"
- E-commerce: new product drops auto-generate matching imagery, video, and narration
- Agencies: one pipeline, consistent quality across every client campaign
- Filmmakers: concept to rough cut without leaving one workspace
ElevenLabs already had the best voice models on earth. Now they've built the canvas that connects everything else. Flows is live today inside ElevenCreative, available on all tiers. API access is coming soon, which means this is about to get even more dangerous. Go build.