
Retake video model edits 20s clips at $0.10/s – in‑shot directing hits production
Executive Summary
Retake finally left the lab today, and your feed shows why it matters. LTX’s new model is live inside LTX Studio and on fal, Replicate, Weavy, Runware, RunDiffusion, and ElevenLabs, letting you re‑direct performances inside a finished shot instead of nuking the whole clip. Replicate’s endpoint takes uploads up to ~20 seconds and 100 MB, outputs 1080p, and charges about $0.10 per second of input, so “one more take” becomes a prompt, not a reshoot.
Creators are treating it like Photoshop for video. Techhalla’s tutorials show a door’s material fixed mid‑scene and a single wine glass recolored via a tiny prompt bar and time‑range selector, while LTX’s own reels swap line reads, facial expressions, even a rogue penguin at 0:02 without touching timing or background. Under the hood, hosts expose separate knobs for emotion, dialogue, camera motion, and props, turning a locked cut into something closer to a tweakable 3D scene.
Zooming out, this lines up neatly with fal’s Lucy Edit Fast, which is doing localized 720p video edits in about 10 seconds at $0.04 per second. The center of gravity is moving from “regenerate the shot” to “surgically patch what’s wrong,” which is exactly where professional workflows live.
Top links today
- Z-Image Turbo text-to-image on fal
- Luma Terminal Velocity Matching research overview
- Flux 2 Tiny Autoencoder open source repo
- Topaz upscale and enhancement on Comfy Cloud
- Train custom FLUX.2 LoRA models on fal
- Retake prompt-based video editor on Replicate
- Isaac 0.1 grounded vision model blog
- Higgsfield unlimited image models Black Friday offer
- Free Nano Banana pixel art converter web app
- Nano Banana Pro infinite pixel asset workflow
- Flux 2 image model on WaveSpeedAI
- Flux 2 production-grade model deep dive
- Agent-based video story tutorial with Nano Banana Pro
- Online multi-model LLM comparison playground
- Perplexity AI virtual try-on feature announcement
Feature Spotlight
Retake day‑0: in‑shot directing goes mainstream
Directable video goes wide: LTX Retake lands on fal, Replicate, Runware and Studio with creator guides—edit acting, dialogue, and framing within the same shot, no full re‑render.
LTX’s Retake is everywhere in today’s feed: creators show re‑performing lines, reframing emotion, and fixing continuity inside the same shot. Multiple hosts went live and detailed guides landed.
🎬 Retake day‑0: in‑shot directing goes mainstream
Retake launches as an in‑shot directing model across Studio, fal, Replicate and Runware
Lightricks and LTX Studio’s new Retake model is out of the lab and already running both inside LTX Studio and on multiple infra hosts, giving creatives a way to re‑direct performances inside the same rendered shot instead of re‑generating full clips. LTX positions it around three core actions—rephrasing dialogue, reshaping emotion, and reframing moments after a video is rendered. feature overview
An official partner list confirms Retake is live in LTX Studio itself and via fal, Replicate, Weavy, Runware, RunDiffusion, and ElevenLabs, so you can hit it either through a polished UI or direct APIs depending on your workflow. partner rollout fal announces Retake as "true directorial control" with promptable dialogue changes and partial shot remixes, aimed at teams iterating on narrative or branding without scrapping the underlying take. fal hosting Replicate exposes it as a hosted model with uploads up to ~20 seconds and 100 MB, 1080p output, and pricing around $0.10 per second of input, documenting how to target specific time ranges and attributes like emotion or camera motion rather than regenerating the whole clip. (replicate launch, product specs) Runware’s day‑0 drop adds another option, with API knobs to alter camera angle, script, audio, or on‑screen action independently, pitched at teams that want a controllable post‑generation pass over already‑approved shots. runware api

For you as a filmmaker, editor, or motion designer, the change is that Retake treats a finished clip more like a 3D scene: you can nudge performance, pacing, or framing while preserving the original motion and sound, and you can do it from whatever stack you already use—LTX’s own Studio, a Replicate or fal pipeline, or Runware’s API if you’re embedding this in tools.
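That time-range-plus-attribute pattern maps cleanly onto Replicate’s Python client. The sketch below is illustrative only: the model slug and input field names are assumptions (the model page on Replicate documents the real schema), and only the ~20 s / 100 MB limits come from the launch specs.

```python
def build_retake_input(video_url: str, prompt: str,
                       start_s: float, end_s: float) -> dict:
    """Payload for a targeted Retake edit.

    Field names here are hypothetical -- check Replicate's model page
    for the real input schema before using them.
    """
    if not 0 <= start_s < end_s:
        raise ValueError("need 0 <= start_s < end_s")
    if end_s > 20.0:
        raise ValueError("Retake accepts clips up to ~20 seconds")
    return {
        "video": video_url,     # uploaded file or URL, <=100 MB
        "prompt": prompt,       # e.g. "deliver the line with a grin"
        "start_time": start_s,  # only this window gets re-directed;
        "end_time": end_s,      # the rest of the shot stays locked
    }


def run_retake(video_url: str, prompt: str,
               start_s: float, end_s: float):
    import replicate  # pip install replicate; needs REPLICATE_API_TOKEN
    return replicate.run(
        "lightricks/ltx-retake",  # hypothetical slug, not confirmed
        input=build_retake_input(video_url, prompt, start_s, end_s),
    )
```

At roughly $0.10 per second of input, a full 20 s clip costs about $2.00 per pass, so validating the time window client-side before uploading is cheap insurance.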
Creators show Retake fixing dialogue, props, and continuity inside a single shot
Early users are already treating Retake like "Photoshop for video," using prompts to surgically fix shots instead of re‑cutting or re‑rendering entire sequences. One walkthrough starts from a messy house clip, then dials it up into a full fire scene, adjusts stormy lighting, and even has an actor re‑perform a slap—all from the same base footage—framing it as "hands down the best video editing model out there right now" for promptable fixes. photoshop explainer
Techhalla’s tutorial goes deeper on continuity work: they use Retake to fix a single door’s material mid‑scene and to subtly recolor a wine glass, with the interface showing a small prompt bar and a time‑range selector so only the targeted region changes while the rest of the shot stays locked. door fix demo

LTX’s own demo reel echoes this pattern, showing A/B clips where the actor’s expression and gesture change while timing and background remain identical, which is exactly what you need when a performance note comes in after picture lock. feature overview Even the jokey "rogue penguin at 0:02" clip from LTX highlights the same thing: fix that one stray artifact, keep the rest of the spot—and your budget—intact. penguin gag For working creatives, the takeaway is that Retake isn’t another "generate a whole new video" toy; it’s behaving like a surgical‑grade, in‑place editor: trim an awkward line read, calm or intensify an expression, clean up a prop, or remove a background glitch, all without asking the model to reinvent the shot. That makes it a realistic candidate for late‑stage tweaks on client work where continuity and timing are non‑negotiable but you still want AI’s flexibility.
🚗 Keyframes, poses, and start–end shots
Excludes Retake (covered as the feature). Creators chain NB Pro with Veo/Kling and use Flux 2 pose control on Higgs to storyboard precise motion, then animate with start/end frames.
Flux 2 pose mannequins on Higgs turn into Kling 2.5 start–end animations
Techhalla lays out a full Higgsfield workflow where you generate a poseable mannequin with Flux 2, reuse it to define multiple poses, transfer those poses onto your own photo, and then animate between start and end frames with Kling 2.5. (pose control guide, start end animation) It’s aimed squarely at solo filmmakers and UGC creators who want precise control over character body language while still moving fast.

The process runs in stages: first you create a neutral "pose control" mannequin (optionally giving it a few traits), then you regenerate it in different stances while keeping style consistent, effectively building a pose library you can reuse across shoots. mannequin prompt Next, you feed Higgs both your original portrait and a mannequin pose, prompting Flux 2 to "transfer" that body position while preserving your identity and outfit. pose transfer step There’s even a reverse‑engineering trick: grab a still from any reference video and have Flux 2 rebuild a mannequin in that exact pose, so you can mimic choreography or iconic frames throughout a sequence. reverse pose trick Finally, you hand two frames (start and end pose) to Kling 2.5 in Higgs and let it interpolate a smooth move—shown in a martial‑arts sequence that pans laterally as the subject shifts from tree pose into a high kick, with prompts describing camera motion, lighting, and atmosphere. start end animation
NB Pro + Veo 3.1 car spot gets detailed keyframe prompt recipes
Ror_Fly expands the Nano Banana Pro → Veo 3.1 car‑commercial workflow into a very specific three‑step keyframe recipe, sharpening what was previously a more general "generate → animate → stitch" guide for motion designers. car keyframes Creators first design a consistent WRX STI rally car in NB Pro, then feed stills to Veo 3.1 with long, motion‑design‑style prompts that describe zooms, morphs, gold contour lines and labels drawing on, and engine‑bay reveals, before finishing with Topaz upscaling and Suno music. (car workflow thread, prompt breakdown)

For people doing product or automotive spots, the interesting part is how detailed the Veo prompts are: they specify when text and technical contour lines should fade in and out, how the camera should swing to higher angles, and how to transition from flat collage to intimate realism without janky motion. The thread also suggests iterating by slightly changing keyframe perspectives in NB Pro so Veo has more parallax to work with between shots, which is a practical trick if your first pass feels too flat prompt breakdown.
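To make the pattern concrete, here is an illustrative prompt in that style, written from the thread’s description rather than copied from Ror_Fly’s actual text; every timing, angle, and overlay detail below is an invented example of the recipe.

```python
# Illustrative Veo 3.1 keyframe prompt in the style the thread describes.
# The specific timings, angles, and overlays are invented examples,
# not Ror_Fly's actual prompt.
VEO_PROMPT = """\
Start on the flat collage keyframe of the rally car, head-on.
0-1s: gold contour lines and part labels draw on across the body panels.
At 2s the labels fade out while the camera swings up to a high
three-quarter angle and the render morphs from flat illustration
into photoreal, matching perspective so the motion stays smooth.
End on an intimate engine-bay reveal, shallow depth of field, no cuts.
"""
```

The point is the structure, not the wording: explicit in/out timing for overlays, an explicit camera move, and a named transition style give Veo far less room to improvise janky motion.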
Gemini API notebook chains NB Pro stills into Veo 3.1 videos
DavidmComfort shares a Jupyter notebook that calls the Gemini API to generate Nano Banana / Nano Banana Pro images and then automatically builds Veo 3.1 videos from those stills, turning a manual NB→Veo workflow into a reproducible, scriptable pipeline. pipeline description The current demo takes a single NB‑generated reference frame (also color‑graded by Nano) and spins it into a short animated clip, with plans to bolt on Topaz upscaling, color grading, and the MiniMax and Kling APIs next. (reference image note, future api plans)

For technical creatives, the value is that the whole stack—prompting, image creation, and video generation—lives in code, so you can iterate on prompts, branch variations, and later add automatic evaluation with Gemini 3 to score or describe outputs gemini eval mention. He also notes an intention to wrap this into a web app once the pieces are stable, which would make this kind of start‑from‑still, end‑as‑clip pipeline accessible to non‑notebook users while keeping the Gemini→NB→Veo wiring under the hood web app followup.
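A minimal sketch of that still-to-clip wiring with the google-genai Python SDK might look like the following; the model ids, response shapes, and helper names are assumptions based on Google’s published API patterns, not DavidmComfort’s actual notebook.

```python
import time


def wait_for(op, get, interval_s: float = 10.0, poll=time.sleep):
    """Poll a long-running operation until it reports done.

    `poll` is injectable so tests can skip the real sleep.
    """
    while not op.done:
        poll(interval_s)
        op = get(op)
    return op


def still_to_clip(client, image_prompt: str, motion_prompt: str):
    """Generate a reference still, then animate it with Veo.

    Model ids below are assumptions -- swap in the aliases your
    account actually exposes.
    """
    from google.genai import types  # pip install google-genai
    img = client.models.generate_content(
        model="gemini-2.5-flash-image",  # assumed "Nano Banana" alias
        contents=image_prompt,
    )
    blob = next(p.inline_data for p in img.candidates[0].content.parts
                if p.inline_data)
    op = client.models.generate_videos(
        model="veo-3.1-generate-preview",  # assumed Veo 3.1 id
        prompt=motion_prompt,
        image=types.Image(image_bytes=blob.data,
                          mime_type=blob.mime_type),
    )
    op = wait_for(op, client.operations.get)  # video gen is async
    return op.response.generated_videos[0].video
```

Keeping the polling helper separate also makes the pipeline easy to extend: the same `wait_for` can wrap any other long-running call (Topaz, MiniMax, Kling) bolted on later.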
