Retake video model edits 20s clips at $0.10/s – in‑shot directing hits production (Wed, Nov 26, 2025)


Executive Summary

Retake finally left the lab today, and your feed shows why it matters. LTX’s new model is live inside LTX Studio and on fal, Replicate, Weavy, Runware, RunDiffusion, and ElevenLabs, letting you re‑direct performances inside a finished shot instead of nuking the whole clip. Replicate’s endpoint takes uploads up to ~20 seconds and 100 MB, outputs 1080p, and charges about $0.10 per second of input, so “one more take” becomes a prompt, not a reshoot.

Creators are treating it like Photoshop for video. Techhalla’s tutorials show a door’s material fixed mid‑scene and a single wine glass recolored via a tiny prompt bar and time‑range selector, while LTX’s own reels swap line reads, facial expressions, even a rogue penguin at 0:02 without touching timing or background. Under the hood, hosts expose separate knobs for emotion, dialogue, camera motion, and props, turning a locked cut into something closer to a tweakable 3D scene.

Zooming out, this lines up neatly with fal’s Lucy Edit Fast, which is doing localized 720p video edits in about 10 seconds at $0.04 per second. The center of gravity is moving from “regenerate the shot” to “surgically patch what’s wrong,” which is exactly where professional workflows live.


Feature Spotlight

Retake day‑0: in‑shot directing goes mainstream

Directable video goes wide: LTX Retake lands on fal, Replicate, Runware and Studio with creator guides—edit acting, dialogue, and framing within the same shot, no full re‑render.

LTX’s Retake is everywhere in today’s feed: creators show re‑performing lines, reframing emotion, and fixing continuity inside the same shot. Multiple hosts went live and detailed guides landed.



🎬 Retake day‑0: in‑shot directing goes mainstream


Retake launches as an in‑shot directing model across Studio, fal, Replicate and Runware

Lightricks and LTX Studio’s new Retake model is out of the lab and already running both inside LTX Studio and on multiple infra hosts, giving creatives a way to re‑direct performances inside the same rendered shot instead of re‑generating full clips. LTX positions it around three core actions—rephrasing dialogue, reshaping emotion, and reframing moments after a video is rendered. feature overview

An official partner list confirms Retake is live in LTX Studio itself and via fal, Replicate, Weavy, Runware, RunDiffusion, and ElevenLabs, so you can hit it either through a polished UI or direct APIs depending on your workflow. partner rollout

fal announces Retake as "true directorial control" with promptable dialogue changes and partial shot remixes, aimed at teams iterating narrative or branding without nuking the underlying take. fal hosting Replicate exposes it as a hosted model with uploads up to ~20 seconds, 100 MB, and 1080p output, charging around $0.10 per second of input and documenting how to target specific time ranges and attributes like emotion or camera motion rather than regenerating the whole clip. (replicate launch, product specs) Runware’s D0 drop adds another option with API knobs to alter camera angle, script, audio, or on‑screen action independently, pitched at teams that want a controllable post‑generation pass over already‑approved shots. runware api

For you as a filmmaker, editor, or motion designer, the change is that Retake treats a finished clip more like a 3D scene: you can nudge performance, pacing, or framing while preserving the original motion and sound, and you can do it from whatever stack you already use—LTX’s own Studio, a Replicate or fal pipeline, or Runware’s API if you’re embedding this in tools.
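The published Replicate limits (~20 s of input, 100 MB uploads, ~$0.10 per second of input) are easy to sanity-check before you submit a job. A minimal pre-flight sketch, assuming those launch-note numbers hold; the function name and return shape are illustrative, not part of any host's API:

```python
# Pre-flight check for a Retake job against the published Replicate limits
# (~20 s input, 100 MB upload, ~$0.10 per second of input).
# Numbers come from the launch notes; treat them as assumptions, not a spec.

MAX_SECONDS = 20
MAX_BYTES = 100 * 1024 * 1024  # 100 MB
PRICE_PER_SECOND = 0.10        # USD, charged on input duration


def preflight(duration_s: float, size_bytes: int) -> dict:
    """Validate a clip against the upload limits and estimate job cost."""
    if duration_s > MAX_SECONDS:
        raise ValueError(f"clip is {duration_s}s; limit is ~{MAX_SECONDS}s")
    if size_bytes > MAX_BYTES:
        raise ValueError("upload exceeds 100 MB limit")
    return {
        "duration_s": duration_s,
        "estimated_cost_usd": round(duration_s * PRICE_PER_SECOND, 2),
    }


if __name__ == "__main__":
    # A full 20-second clip maxes out at about $2 per pass.
    print(preflight(20, 80 * 1024 * 1024))
```

In other words, even a worst-case full-length retake costs about $2, which is the budget math behind "one more take becomes a prompt."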

Creators show Retake fixing dialogue, props, and continuity inside a single shot

Early users are already treating Retake like "Photoshop for video," using prompts to surgically fix shots instead of re‑cutting or re‑rendering entire sequences. One walkthrough starts from a messy house clip, then dials it up into a full fire scene, adjusts stormy lighting, and even has an actor re‑perform a slap—all from the same base footage—framing it as "hands down the best video editing model out there right now" for promptable fixes. photoshop explainer

Techhalla’s tutorial goes deeper on continuity work: they use Retake to fix a single door’s material mid‑scene and to subtly recolor a wine glass, with the interface showing a small prompt bar and a time‑range selector so only the targeted region changes while the rest of the shot stays locked. door fix demo

Retake UI door prompt


LTX’s own demo reel echoes this pattern, showing A/B clips where the actor’s expression and gesture change while timing and background remain identical, which is exactly what you need when a performance note comes in after picture lock. feature overview Even the jokey "rogue penguin at 0:02" clip from LTX highlights the same thing: fix that one stray artifact, keep the rest of the spot—and your budget—intact. penguin gag

For working creatives, the takeaway is that Retake isn’t another "generate a whole new video" toy; it behaves like a surgical‑grade, in‑place editor: trim an awkward line read, calm or intensify an expression, clean up a prop, or remove a background glitch, all without asking the model to reinvent the shot. That makes it a realistic candidate for late‑stage tweaks on client work where continuity and timing are non‑negotiable but you still want AI’s flexibility.
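The hosts all converge on the same request shape: one clip, one narrow instruction, one time window, and a knob saying which attribute to touch. A hypothetical payload for the penguin fix, with every field name illustrative (check your host's actual schema on Replicate, fal, or Runware before copying this):

```python
import json

# Hypothetical request payload for a targeted Retake edit. Field names are
# illustrative only; the point is the shape: one clip in, one narrow
# instruction, one time range, one attribute knob.
penguin_fix = {
    "video": "https://example.com/spot_v3.mp4",  # placeholder URL
    "prompt": "remove the penguin walking through the background",
    "start_time": 1.5,   # seconds; only this window is touched
    "end_time": 3.0,
    "attribute": "props",  # vs. "emotion", "dialogue", "camera_motion"
}

print(json.dumps(penguin_fix, indent=2))
```

Because the edit is scoped to a 1.5-second window and a single attribute, everything outside that window, including timing and background, is left exactly as rendered.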


🚗 Keyframes, poses, and start–end shots

Excludes Retake (covered as the feature). Creators chain NB Pro with Veo/Kling and use Flux 2 pose control on Higgs to storyboard precise motion, then animate with start/end frames.

Flux 2 pose mannequins on Higgs turn into Kling 2.5 start–end animations

Techhalla lays out a full Higgsfield workflow where you generate a poseable mannequin with Flux 2, reuse it to define multiple poses, transfer those poses onto your own photo, and then animate between start and end frames with Kling 2.5. pose control guide, start end animation It’s aimed squarely at solo filmmakers and UGC creators who want precise control over character body language while still moving fast.

Martial arts start pose

The process runs in stages: first you create a neutral "pose control" mannequin (optionally giving it a few traits), then you regenerate it in different stances while keeping style consistent, effectively building a pose library you can reuse across shoots. mannequin prompt Next, you feed Higgs both your original portrait and a mannequin pose, prompting Flux 2 to "transfer" that body position while preserving your identity and outfit. pose transfer step There’s even a reverse‑engineering trick: grab a still from any reference video and have Flux 2 rebuild a mannequin in that exact pose, so you can mimic choreography or iconic frames throughout a sequence. reverse pose trick Finally, you hand two frames (start and end pose) to Kling 2.5 in Higgs and let it interpolate a smooth move—shown in a martial‑arts sequence that pans laterally as the subject shifts from tree pose into a high kick, with prompts describing camera motion, lighting, and atmosphere. start end animation
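The stages above can be sketched as plain data: a reusable pose library, a transfer step, and a start/end pair handed to Kling. All prompt wording here is paraphrased from the guide and the names are illustrative, since Higgs exposes this through its UI rather than code:

```python
# Sketch of the four-stage Higgs workflow as plain data: a reusable pose
# library, a pose-transfer prompt, and a start/end frame pair for Kling 2.5.
# Prompt text is paraphrased; nothing here is an official API.

BASE = "full-body gray mannequin, neutral studio background, soft light"

pose_library = {
    "tree_pose": f"{BASE}, standing in yoga tree pose, arms raised",
    "high_kick": f"{BASE}, mid high kick, right leg extended",
}


def transfer_prompt(pose: str) -> str:
    # Stage 3: apply a library pose to your own photo, keeping identity.
    return (f"transfer the mannequin's pose ({pose_library[pose]}) onto the "
            "attached portrait, preserving face, outfit, and style")


# Stage 4: give Kling 2.5 a start frame and an end frame to interpolate.
kling_job = {
    "start_frame": transfer_prompt("tree_pose"),
    "end_frame": transfer_prompt("high_kick"),
    "motion": "camera pans laterally; moody dusk light, light fog",
}

print(kling_job["start_frame"])
```

The payoff of keeping the library in one place is reuse: the same mannequin prompts work across shoots, so new choreography is just another entry in the dict.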

NB Pro + Veo 3.1 car spot gets detailed keyframe prompt recipes

Ror_Fly expands the Nano Banana Pro → Veo 3.1 car‑commercial workflow into a very specific three‑step keyframe recipe, sharpening what was previously a more general "generate → animate → stitch" guide for motion designers. car keyframes Creators first design a consistent WRX STI rally car in NB Pro, then feed stills to Veo 3.1 with long, motion‑design style prompts that describe zooms, morphs, gold contour lines and labels drawing on, and engine‑bay reveals, before finishing with Topaz upscaling and Suno music. car workflow thread, prompt breakdown

WRX STI keyframe stills

For people doing product or automotive spots, the interesting part is how detailed the Veo prompts are: they specify when text and technical contour lines should fade in and out, how the camera should swing to higher angles, and how to transition from flat collage to intimate realism without janky motion. The thread also suggests iterating by slightly changing keyframe perspectives in NB Pro so Veo has more parallax to work with between shots, which is a practical trick if your first pass feels too flat prompt breakdown.
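Those long Veo prompts are really a list of timed cues stitched into one string. A minimal sketch of that composition, assuming Veo takes the final text as a single prompt; the cue wording is paraphrased from the thread, not quoted:

```python
# Composing a Veo-style motion-design prompt from timed cues, in the spirit
# of the WRX STI recipe (contour lines drawing on, labels fading, camera
# swings). The cue text is paraphrased; Veo receives one final string.

def build_prompt(shot: str, cues: list[tuple[str, str]]) -> str:
    """Join a base shot description with a sequence of timestamped actions."""
    timed = ". ".join(f"At {t}, {action}" for t, action in cues)
    return f"{shot}. {timed}."


prompt = build_prompt(
    "Static hero shot of a blue WRX STI rally car on a gravel stage",
    [
        ("0s", "gold contour lines draw on over the bodywork"),
        ("2s", "technical labels fade in beside the engine bay"),
        ("4s", "the camera swings up to a higher three-quarter angle"),
        ("6s", "labels and contour lines fade out as realism takes over"),
    ],
)
print(prompt)
```

Keeping the cues as data also makes the iteration trick cheap: regenerate a keyframe in NB Pro from a slightly different perspective, keep the cue list unchanged, and rerun.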

Gemini API notebook chains NB Pro stills into Veo 3.1 videos

DavidmComfort shares a Jupyter notebook that calls the Gemini API to generate Nano Banana / Nano Banana Pro images and then automatically builds Veo 3.1 videos from those stills, turning a manual NB→Veo workflow into a reproducible, scriptable pipeline. pipeline description The current demo takes a single NB‑generated reference frame (also color‑graded by Nano) and spins it into a short animated clip, with plans to bolt on Topaz upscaling, color grading, plus MiniMax and Kling APIs next. reference image note, future api plans

NB Pro reference frame

For technical creatives, the value is that the whole stack—prompting, image creation, and video generation—lives in code, so you can iterate on prompts, branch variations, and later add automatic evaluation with Gemini 3 to score or describe outputs. gemini eval mention He also notes an intention to wrap this into a web app once the pieces are stable, which would make this kind of start‑from‑still, end‑as‑clip pipeline accessible to non‑notebook users while keeping the Gemini→NB→Veo wiring under the hood. web app followup
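The chaining logic itself is small. A runnable skeleton of the still→clip orchestration with the network calls stubbed out, so it executes offline; in the real notebook the two stubs would be replaced by Gemini image-generation and Veo video-generation calls through his API setup:

```python
# Minimal skeleton of the still→clip chain, with the Gemini API calls
# stubbed so the orchestration runs offline. In the actual notebook the
# stubs would hit the Gemini image endpoint and the Veo 3.1 video endpoint.

def generate_still(prompt: str) -> bytes:
    # Stub for the NB Pro image call; returns fake image bytes.
    return f"IMAGE<{prompt}>".encode()


def generate_video(reference: bytes, motion_prompt: str) -> bytes:
    # Stub for the Veo 3.1 call that animates the reference frame.
    return reference + f"|VIDEO<{motion_prompt}>".encode()


def pipeline(image_prompt: str, motion_prompt: str) -> bytes:
    """One prompt in, one clip out: the reproducible NB→Veo chain."""
    still = generate_still(image_prompt)
    return generate_video(still, motion_prompt)


clip = pipeline("color-graded portrait, golden hour", "slow dolly-in")
print(len(clip))
```

Because each stage is a plain function, the planned additions (Topaz upscaling, MiniMax, Kling, a Gemini 3 scoring pass) are just more links appended to the same chain.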



On this page

Executive Summary
Feature Spotlight: Retake day‑0: in‑shot directing goes mainstream
🎬 Retake day‑0: in‑shot directing goes mainstream
Retake launches as an in‑shot directing model across Studio, fal, Replicate and Runware
Creators show Retake fixing dialogue, props, and continuity inside a single shot
🚗 Keyframes, poses, and start–end shots
Flux 2 pose mannequins on Higgs turn into Kling 2.5 start–end animations
NB Pro + Veo 3.1 car spot gets detailed keyframe prompt recipes
Gemini API notebook chains NB Pro stills into Veo 3.1 videos
🧩 FLUX.2 builder extras: Tiny AE + LoRA gallery
fal open-sources FLUX.2 Tiny AutoEncoder for 20× faster streaming previews
fal launches FLUX.2 LoRA Gallery with add‑background and virtual try‑on recipes
WaveSpeedAI exposes FLUX.2 [dev] with REST API and $0.012 per image pricing
Picsart Flows adds FLUX.2 with a limited free credit window for creators
🍌 NB Pro creative recipes and tests
Hyper-detailed NB Pro prompt recreates a Temptation Island 4-panel broadcast
NB Pro doodle-edit workflow turns scribbles into finished portraits
“Droste effect” prompt gives NB Pro instant recursive imagery
Side-by-side grids compare NB Pro and FLUX.2 on real-world photo prompts
Dice test shows NB Pro still struggles with exact numeric constraints
🎨 Reusable looks: srefs and prompt packs
“Plant sculptures” prompt pack turns any subject into floral statuary
Midjourney sref 1645061490 nails modern realistic comic noir
MJ sref 8523380552 expands from rainy nights to portraits and surreal sets
Midjourney V7 grid recipe with sref 607976961 for soft storybook fantasy
🛠️ Speed stacks: faster T2I, localized edits, templates
Lucy Edit Fast on fal brings 10s localized video edits at $0.04/s
fal launches Z-Image Turbo day‑0 with ~1s open‑source text‑to‑image
Topaz upscale and interpolation models land in ComfyUI workflows
Hedra Templates launch with 20+ presets and 2,500‑credit giveaway
🕹️ Infinite pixel art with NB Pro
NB Pro workflow turns one character into 25 enemies and full game scenes
Free converter snaps NB Pro images into true pixel art
Retro Diffusion now makes full 8‑direction NB Pro sprite sets
🗣️ Voice agents hit real ops
Deliveroo deploys ElevenLabs Agents with strong real-world re‑engagement
📊 Model watch: efficiency and eval spats
Early Gemini 3 tests: elite debugger, shaky at one‑shot apps
Luma’s Terminal Velocity Matching targets diffusion quality with 25× fewer steps
Isaac 0.1: 2B grounded VLM with strong OCR lands on Replicate
Kimi K2 Thinking scores 67 on AA Index with only 4B params
Memori 3.0: 80% token savings for LLM apps via SQL caching
DeepWriter tops Humanity’s Last Exam for agentic AI systems
Grok‑4 leads new election‑alignment leaderboard on 8 countries
OpenRouter side‑by‑side prompts highlight GPT‑3.5 → GPT‑5 gains
🏢 AI at work: jobs, scribes, and time saved
HP to cut 4–6k jobs by 2028 to fund AI push
Philly hospitals report AI scribes cut doctor note time by up to 30%
🛍️ Last‑minute Black Friday for creatives
Higgsfield’s 65%‑off unlimited image year enters final hours
Pictory’s BFCM deal: 50% off annual plans plus bonus AI credits
📺 Live sessions and head‑to‑heads
Live Nano Banana Pro vs FLUX.2 showdown on AI Slop Review
Freepik hosts live KeanuVisuals session on epic AI transitions
Pictory and AppDirect webinar turns slides into AI video lessons
Midjourney Office Hours returns with David leading this week’s Q&A