A Freepik Spaces workflow now uses Nano Banana 2 for stills, Veed Fabric for close-up lipsync, OmniHuman for directed performance, and Kling 3.0 for motion clips. Split one music video into model-specific stages instead of forcing a single tool to handle everything.

The workflow starts with still-image planning, not video generation. In Techhalla's thread, the creator says they use two Nano Banana 2 nodes to generate the character and setting, then a third node to blend them into one reference image. From there, they build a 3x3 grid of cinematic shots and add numbers via prompting so each frame is easier to extract later.
That grid step matters because it turns one concept image into a shot list. The same thread says Lists in Spaces make it easier to iterate through the numbered frames, and the shared Metal Space includes more than 25 prompts tied to the workflow.
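The grid-to-shot-list step can be sketched in code. A minimal sketch, assuming the grid render is a single evenly divided image: compute one crop box per cell, numbered left-to-right and top-to-bottom to match the 1-through-9 numbering prompted into the frames. The function name and dimensions are illustrative, not part of the shared workflow; with Pillow you would pass each box to `Image.crop`.

```python
def grid_crop_boxes(width, height, rows=3, cols=3):
    """Return (left, upper, right, lower) crop boxes for each cell of a
    rows x cols grid image, ordered left-to-right, top-to-bottom."""
    cell_w, cell_h = width // cols, height // rows
    boxes = []
    for r in range(rows):
        for c in range(cols):
            boxes.append((c * cell_w, r * cell_h,
                          (c + 1) * cell_w, (r + 1) * cell_h))
    return boxes

# Example: a hypothetical 3072x3072 grid render yields nine 1024x1024 frames.
boxes = grid_crop_boxes(3072, 3072)
```

Numbering the cells in the prompt and slicing in the same row-major order keeps frame 5 in the prompt matched to `boxes[4]` in the extraction pass.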
Veed Fabric 1.0 Fast is positioned as the quick lipsync option. In the detailed breakdown, the creator says it needs only the image and audio, with no prompt required, and recommends isolating vocals first when the source is a song. That makes it the simplest path for close-up performance shots.
OmniHuman 1.5 is the control layer. As the same workflow explains, it handles lipsync but also accepts prompts to direct what happens in the scene, which makes it better suited to shots with camera motion or more staged performance. Kling 3.0 then fills out the rest of the video by generating clips from a starting frame or bridging from a start still to an end still. Around that pipeline, Freepik's Forbes-cited push into Magnific Precision suggests the company is also treating upscale and frame-rate enhancement as a final finishing pass rather than part of generation itself.
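The three-way split described above reduces to a simple routing rule per shot. This is an illustrative sketch only: the tool choices come from the workflow, but the `Shot` fields and model identifier strings are my own labels, not Freepik's API.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    frame: str       # path to the extracted still (hypothetical)
    closeup: bool    # tight framing on the performer's face?
    lipsync: bool    # does the shot carry vocals?
    directed: bool   # needs camera motion or staged action?

def pick_model(shot: Shot) -> str:
    # Fabric: image + isolated vocals, no prompt — the quick close-up path.
    if shot.lipsync and shot.closeup and not shot.directed:
        return "veed-fabric-1.0-fast"
    # OmniHuman: lipsync plus a directing prompt for staged performance.
    if shot.lipsync:
        return "omnihuman-1.5"
    # Kling: motion clips from a start frame, or start-to-end bridging.
    return "kling-3.0"
```

Upscaling and frame-rate enhancement (Magnific Precision in the source) would then run once over the assembled cut, not per shot.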
A shared workflow converts GTA-style stills into photoreal images with Nano Banana 2, then animates them in LTX-2.3 Pro 4K using detailed material, skin, vehicle, and camera prompts. Try it for trailer-style previsualization if you want more control at lower cost.
Topview added Seedance 2.0 to Agent V2, pairing multi-scene generation with a storyboard timeline and Business Annual access billed as 365 days of unlimited generations. That moves longform video workflows toward editable sequences instead of stitched clips.
Creators are moving from V8 calibration complaints to darker film-still scenes, fashion shots, and worldbuilding tests, with ECLIPTIC remakes showing stronger depth and lighting. Retest saved SREF recipes if you rely on V8 for cinematic ideation.
Shared Nano Banana 2 workflows now cover turnaround sheets, distinctive facial traits, and photoreal rerenders that keep the framing of a reference image. Use one prompt grammar for concept art, editorial portraits, and animation prep.
This workflow is packed with over 25 prompts! I've made every single one of them available for you in my Metal Space on Freepik. Wait no more, it's right here 👇 freepik.com/pikaso/spaces/…
Finally, AI lipsync that doesn't look janky! I just crafted the ultimate workflow for music videos on Freepik Spaces. All the info and prompts you need, just right here 👇
Upscaled this to 4K (with FPS Boost) using @Magnific_AI's new video upscaler.
Coming in for a landing... made using Seedance 2.0 inside @capcutapp.