Tue, Nov 25, 2025

FLUX.2 launches with 32B params, 4MP edits – live on 15 platforms

Executive Summary

Black Forest Labs’ FLUX.2 landed today and, unlike most model drops, the ecosystem shipped with it. The 32B‑param visual model brings 4MP editing, up to 10 reference images, HEX‑accurate color, and JSON prompting, plus an open‑weights [dev] checkpoint. That combo—production‑grade typography and strong world logic you can actually self‑host—is why we’re suddenly seeing it everywhere, not in six months.

On day zero, Replicate, Runware, fal, Vercel’s AI Gateway, Cloudflare Workers AI, OpenRouter, Poe, and ComfyUI all wired it in. Replicate is quoting ~2.5–40s latencies depending on [dev]/[pro]/[flex], while ComfyUI’s FP8 build cuts VRAM to ~15GB so 4MP flows fit on a single RTX card. fal shipped LoRA trainers for product and identity tuning; Runware went with per‑image pricing it claims saves up to 4× versus competitors, plus a Sonic engine it says makes [dev] up to 7× faster and cheaper than [flex].

On the creator side, Freepik, Leonardo, LTX Studio, ElevenLabs, OpenArt, Higgsfield, and Krea are already treating FLUX.2 as a core backbone alongside Nano Banana Pro. Early tests back up the split we’ve been tracking all week: NB Pro for gritty realism and likeness, FLUX.2 when you care about layout, legible text, and color‑tight campaigns. If you make visuals for a living, this is the first image model rivalry that really moves your routing choices—and we help creators ship faster by paying attention to that, not leaderboard drama.


Feature Spotlight

FLUX.2 day‑0 ecosystem rollout

FLUX.2 lands everywhere at once: open weights + pro stacks across Replicate, ComfyUI, fal trainers, Freepik, ElevenLabs, LTX, Vercel/Cloudflare, Runware. It’s the new default image model for production pipelines.

Cross‑platform, creator‑ready release of Black Forest Labs’ FLUX.2 with 4MP editing, multi‑reference control, HEX/JSON prompting, and open weights. Massive partner adoption today across hosts, tools, and runtimes.


Table of Contents

🖼️ FLUX.2 day‑0 ecosystem rollout

Cross‑platform, creator‑ready release of Black Forest Labs’ FLUX.2 with 4MP editing, multi‑reference control, HEX/JSON prompting, and open weights. Massive partner adoption today across hosts, tools, and runtimes.

ComfyUI ships day‑0 FLUX.2 with FP8 dev build and 4MP multi‑ref workflows

ComfyUI added FLUX.2 [dev] on day zero, with turnkey nodes for 4MP photorealism, 10‑image multi‑reference control, and enhanced text rendering in both Comfy Cloud and local installs. comfy release They quickly followed with an FP8 build co‑developed with NVIDIA that cuts VRAM needs in half to around 15GB, making full‑res FLUX.2 workflows feasible on a single consumer RTX card instead of datacenter GPUs. fp8 vram note The Comfy team is leaning into FLUX.2 as a new “frontier” backbone, showing node graphs for multi‑reference consistency, 4MP editing, and professional‑grade color and typography control. multi ref demo Their blog walks through best‑practice graphs and mentions GGUF‑quantized FLUX.2‑dev on Hugging Face for people who want even lighter local setups. comfy flux guide For artists already living in Comfy, this turns FLUX.2 into a first‑class citizen rather than just another external API.

fal hosts FLUX.2 and launches LoRA trainers plus free credit promos

fal brought FLUX.2 to its platform on day zero, exposing [pro], [flex], and [dev] variants with support for HEX color codes, JSON prompts, and up to 10 reference images in a single call. fal launch brief On top of raw inference, they shipped dedicated FLUX.2 [dev] trainers for both text‑to‑image and image‑edit LoRAs, so you can teach it products, identities, or domain‑specific transformations without retraining the base model. lora trainer post fal is clearly trying to seed experimentation: they’re offering a $5 coupon code "FLUX2PRODEV" for the first 1,000 users and 100 free FLUX.2 runs in their Sandbox interface. coupon announcement Their trainer docs spell out dataset expectations (15–50 pairs at ≥1024×1024) and cost formulas, making it approachable for small studios who want a house style or brand LoRA but don’t want to touch raw training scripts. trainer docs
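To make the HEX/JSON/multi‑reference combination concrete, here is a minimal sketch of assembling a request payload. The field names (`colors`, `image_urls`, `image_size`) are illustrative guesses, not fal’s documented schema; check the model page for the real parameter names before calling anything.

```python
import json
import re

def build_flux2_payload(prompt, hex_colors=(), reference_images=(), size="2048x2048"):
    """Assemble a hypothetical FLUX.2 request body with HEX color hints.

    Field names are illustrative, not fal's documented schema.
    """
    # Validate palette entries as strict 6-digit HEX codes.
    for color in hex_colors:
        if not re.fullmatch(r"#[0-9A-Fa-f]{6}", color):
            raise ValueError(f"not a valid HEX color: {color}")
    # FLUX.2 tops out at 10 reference images per call.
    if len(reference_images) > 10:
        raise ValueError("FLUX.2 accepts at most 10 reference images")
    return {
        "prompt": prompt,
        "colors": list(hex_colors),            # HEX-accurate palette control
        "image_urls": list(reference_images),  # up to 10 refs per call
        "image_size": size,
    }

payload = build_flux2_payload(
    "Product hero shot on seamless background",
    hex_colors=["#FF6B35", "#004E89"],
    reference_images=["https://example.com/ref1.png"],
)
print(json.dumps(payload, indent=2))
```

The point is that brand palettes and reference sets become plain structured data, which is what makes a house‑style LoRA plus a locked payload template reusable across a whole campaign.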

Freepik enables Unlimited FLUX.2 Pro and a Flex tier for precision work

Freepik is now an official FLUX.2 launch partner, giving Premium+ and Pro users unlimited access to FLUX.2 [Pro], while FLUX.2 [Flex] runs on credits for higher‑fidelity, text‑heavy jobs. freepik launch The promo slots FLUX.2 alongside Nano Banana Pro as a top‑tier model in their suite, with creators already stress‑testing multi‑reference outfits, cinematic portraits, and magazine‑style typography inside Spaces. creator fashion test

Threads from Freepik and power users show that [Pro] is tuned for speed and general photorealism, whereas [Flex] excels at legible text, fine details, and tightly controlled palettes using HEX codes. pro flex breakdown There are some early rough edges—like failed generations and quirks when referencing inline images—but the team is actively collecting feedback and iterating in real time. bug report

LTX Studio becomes a FLUX.2 launch partner with a deep 7‑part guide

LTX Studio is an official FLUX.2 launch partner and published a 7‑tweet guide on how they’re wiring it into story‑driven workflows, from base image quality to color accuracy, editing, and infographics. ltx launch thread They expose both Flex (higher quality) and Pro (faster) up to 2K resolution, emphasizing consistent hands, faces, and textures for campaigns and product visuals. image quality note

The thread is packed with production‑grade prompts—multi‑animal studio scenes, hex‑locked fashion collages, Burj Khalifa infographics with clean labels—and shows how FLUX.2 handles multi‑reference inputs for brand‑consistent shots. infographic prompt LTX couples the launch with a 40% off yearly promo, clearly betting that Flux‑backed stills will feed directly into its AI pre‑viz and video tooling rather than staying siloed. guide and discount

ElevenLabs bakes FLUX.2 into its Image & Video pipeline

ElevenLabs added FLUX.2 as an image backbone inside its Creative Platform, so you can now generate and edit 4MP stills with multi‑reference control and then immediately animate or score them using the existing audio stack. elevenlabs flux launch That means a single tool can now take you from prompt → character sheet → voiced, lip‑synced video without hopping between half a dozen apps.

Their Image & Video page positions FLUX.2 alongside Veo, Sora, and Wan, with Studio 3.0 handling captions, voiceover, and timeline‑level edits on top of FLUX.2‑generated imagery. image video page For storytellers, this turns ElevenLabs from "audio add‑on" into a place where the visuals themselves are frontier‑grade instead of an afterthought.

OpenArt offers two weeks of Unlimited FLUX.2 Pro for Wonder users

OpenArt has brought FLUX.2 to its platform with both Pro (speed) and Flex (precision) variants, and is giving Wonder users two weeks of unlimited FLUX.2 Pro as part of a Black Friday promo. openart launch They’re pairing that with 60% off annual Wonder plans, clearly trying to hook artists while FLUX.2 is still new.

Creators testing inside OpenArt highlight Flux 2 as "insanely smart" at style awareness and character consistency, especially when prompts follow a clear Subject + Action + Style + Context structure. use case thread Early sentiment from power users is that Flux 2 on OpenArt feels like an always‑on, unlimited playground rather than a meter‑ticking API, which is great for building out big libraries of looks and prompts quickly. creator reaction
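That Subject + Action + Style + Context skeleton is easy to capture in a few lines. The comma‑joined format below is just one convention creators use, not an OpenArt requirement:

```python
def sasc_prompt(subject, action, style, context):
    """Compose a prompt in Subject + Action + Style + Context order,
    the structure OpenArt creators report working well with Flux 2.
    Comma-joining is a common convention, not an official spec."""
    return ", ".join([subject, action, style, context])

print(sasc_prompt(
    "a silver-haired detective",
    "examining a glowing map",
    "1970s film noir, grainy film-stock look",
    "rain-streaked office window at night",
))
```

Keeping the four slots separate makes it trivial to hold style and context fixed while iterating on subject and action, which is where the character‑consistency wins show up.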

Runware launches day‑0 FLUX.2 with Sonic Inference and per‑image pricing

Runware went live as a day‑zero FLUX.2 partner, offering [dev], [pro], and [flex] variants behind their Sonic Inference Engine®, which they claim makes [dev] up to 7× faster and cheaper than [flex] on their stack. runware flux drop Unlike hosts that bill per megapixel, Runware prices per image, and says you can save up to 4× versus competitors by not paying separately for inputs and outputs. pricing explanation

They position [dev] as the best open‑weight checkpoint for both text‑to‑image and editing with multi‑image inputs, [pro] as a fast, prompt‑faithful option with up to eight references, and [flex] as a typography and detail beast handling up to 10 references. dev variant note For teams building user‑facing tools, the per‑image simplicity plus Sonic’s latency gains make FLUX.2 feel more like a SaaS primitive than a science project.
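The billing difference is easy to sanity‑check with hypothetical rates. Neither number below is Runware’s actual pricing; the point is only that flat per‑image billing is predictable while per‑megapixel billing scales with output resolution:

```python
def per_megapixel_cost(width, height, rate_per_mp, count=1):
    """Total cost under per-megapixel billing (hypothetical rate)."""
    megapixels = (width * height) / 1_000_000
    return megapixels * rate_per_mp * count

def per_image_cost(rate_per_image, count=1):
    """Total cost under flat per-image billing (hypothetical rate)."""
    return rate_per_image * count

# Hypothetical rates for a 100-image batch at 4MP (2048x2048).
mp_bill = per_megapixel_cost(2048, 2048, rate_per_mp=0.01, count=100)
flat_bill = per_image_cost(rate_per_image=0.02, count=100)
print(f"per-MP: ${mp_bill:.2f}  per-image: ${flat_bill:.2f}")
```

For a user‑facing tool, the flat model also means your unit economics don’t shift when users crank resolution, which is exactly the "SaaS primitive" framing above.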

Vercel, Cloudflare, OpenRouter and Poe expose FLUX.2 as a routed endpoint

On the infra side, FLUX.2 is already showing up in the places developers actually route traffic: Vercel’s AI Gateway now lists FLUX.2 Pro as a first‑class text‑to‑image target for production apps, vercel gateway note Cloudflare Workers AI is hosting FLUX.2 [dev] for serverless use, workers ai post and OpenRouter has added the model to its catalog so you can hit it via a unified API. openrouter listing Poe also surfaced FLUX.2 inside its chat interface, making it accessible to non‑dev creators who live in messaging‑style UIs. poe integration This matters less for image hobbyists and more for teams that want to A/B FLUX.2 against existing models behind a single routing layer, or ship it in browsers and edge functions without babysitting their own GPU fleet.
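A minimal sketch of what "A/B FLUX.2 against existing models behind a single routing layer" means in practice. The model identifiers are placeholders, and real gateways like Vercel’s or OpenRouter handle assignment server‑side; the core idea is just deterministic bucketing so each user sees a stable model:

```python
import hashlib

# Placeholder model ids -- substitute your gateway's real identifiers.
MODELS = ["black-forest-labs/flux.2-pro", "incumbent/image-model-v1"]

def route_model(user_id, split=0.5):
    """Deterministically bucket a user into an A/B arm so the same
    user always hits the same image model behind the gateway."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] / 255  # roughly uniform in [0, 1]
    return MODELS[0] if bucket < split else MODELS[1]

# Same user, same arm, every request.
assert route_model("user-42") == route_model("user-42")
```

Because the assignment is a pure function of the user id, you can re‑route traffic by changing one `split` value instead of redeploying, and compare output quality per arm offline.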

FLUX.2 demos highlight hyperrealism, typography, and style transfers across hosts

Across Replicate, Runware, Leonardo, and LTX, the most compelling FLUX.2 demos today cluster around three use cases creatives care about: 4MP hyperreal portraits and products, in‑scene text that actually reads, and style transfer that respects composition. Replicate and Runware lean into sharp product shots and candy‑land scenes; Leonardo and LTX show magazine covers, infographics, and Burj Khalifa diagrams with clean labels and neat grids. hyperrealism demo

Curtains hex example

For filmmakers and storytellers, this all matters because it means one model can handle both key art (poster‑grade characters, logos) and practical things like deck slides and app mocks—without juggling SDXL for photos, a separate font pipeline, and a janky text‑overlay hack. infographic prompt The consistency of style across sequences, especially when you feed in multiple references, is what makes FLUX.2 feel ecosystem‑ready rather than just a fun toy.

Krea integrates Flux 2 with 10‑image inputs and native editing

Krea announced support for Flux 2, highlighting two key capabilities for visual builders: up to 10 image inputs at once and native image editing powered by the new model. krea flux announcement That pairs neatly with Krea’s real‑time, brush‑driven UI, so you can anchor poses, styles, or products from multiple references and then iterate in place instead of bouncing assets between tools.

They’re also hosting an "Infra Talks" event with Chroma and Daydream Live, where Flux 2 is part of a broader conversation about AI infra and search, infra event mention which is a good sign that Krea plans to treat Flux 2 as a long‑term backbone rather than a weekend experiment.


🎬 Shot‑to‑shot workflows: Kling, Veo, ImagineArt

Practical pipelines for creators: start–end frames, node graphs, and keyframes to turn stills into smooth sequences. Excludes FLUX.2 model news (see feature).

ImagineArt nodes turn NB Pro stills into a looping Kling 2.1 reveal

Techhalla shared a full ImagineArt Workflows graph that chains Nano Banana Pro images with Kling 2.1 Pro video to build a seamless looping "alien in my head" reveal shot. You start from a single portrait, use NB Pro to generate a 16:9 BEFORE/AFTER split, then automatically crop each panel to 1:1 and feed them as start and end frames into two Kling 2.1 nodes, wiring outputs so clip A opens the skull and clip B closes it, forming a perfect loop when placed back-to-back. Workflow overview

The screenshots show exactly how to wire the Upload → Prompt → Image → Prompt → Image → Prompt → Video chain in ImagineArt, including the long instruction prompt that makes NB Pro infer a plausible personality, generate a realistic workshop background, and render the alien cockpit as a physical prop rather than pure VFX. (Node graph setup, Long prompt example)

Before and after frame isolation

For motion, the Kling prompts are kept short and focus on "hyperrealistic, smooth mechanical motion" and "restoring the human appearance" while ImagineArt passes the first image as a clip’s starting frame and the second as its ending frame, which is why the animation sticks tightly to the stills instead of drifting off‑model. Kling prompt wiring For anyone building short loops or reels, this pattern is a very reusable template: any wild NB Pro transformation you can express side‑by‑side can become a reversible two‑clip loop with almost no manual editing, and the node graph makes it trivial to swap in new faces, props, or environments while keeping the structure intact. Workflow call to action
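The crop step in that graph is plain geometry. A sketch of computing the two 1:1 crop boxes from a 16:9 BEFORE/AFTER split, assuming the split is a clean vertical halving (box coordinates follow the usual left, top, right, bottom convention):

```python
def split_crop_boxes(width, height):
    """Given a 16:9 BEFORE/AFTER split image, return two square crop
    boxes (left, top, right, bottom), one per panel, mirroring the
    auto-crop step in the ImagineArt graph."""
    panel_w = width // 2
    side = min(panel_w, height)        # largest square inside each panel
    top = (height - side) // 2         # vertically center the square
    left_pad = (panel_w - side) // 2   # horizontally center in its panel
    before = (left_pad, top, left_pad + side, top + side)
    after = (panel_w + left_pad, top, panel_w + left_pad + side, top + side)
    return before, after

before, after = split_crop_boxes(1920, 1080)
print(before, after)  # two 960x960 boxes from a 1920x1080 split
```

Feed `before` as the start frame and `after` as the end frame of clip A, then swap them for clip B, and the back‑to‑back pair loops cleanly.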

Leonardo’s ‘Primal Glow’ ad shows NB Pro‑driven storyboard‑to‑spot workflow

Leonardo AI published a behind‑the‑scenes breakdown of its "Primal Glow" launch ad for Nano Banana Pro, showing how you can go from idea to a finished, globally localised spot in a few hours using only AI tools. They concepted and storyboarded the whole piece in about 1.5 hours, locking the visual language by hard‑coding a specific film stock term—Ektachrome—into every prompt so the color and lighting stayed coherent across wildly different shots. Primal glow workflow

Once the visual grammar was stable, they leaned on NB Pro’s world knowledge for localisation instead of hand‑translating layouts: by prompting scenes like a Tokyo subway or a Berlin bus stop, the model automatically rendered ad copy in Japanese or German typography that matched the environment, turning a single core campaign into multiple region‑specific variants with no extra design passes. Primal glow workflow The team also shared two practical rules of thumb: treat consistency as a prompt problem, not a post problem (film stock, lens, and lighting baked into every call), and keep a small, curated set of prompts rather than a huge spreadsheet so your sequence feels like one director shot it. They’re DM‑ing the full guide to people who comment “glow,” but the Twitter clips already show how close this gets to a modern beauty or tech commercial without a physical shoot. Guide dm mention If you’re doing brand work, this workflow is a pattern: storyboard and tone‑lock in Leonardo with NB Pro, then only after that worry about timing, sound design, and final cut in your usual editor.

NB Pro plus Hailuo 2.x start–end frames land cinematic tension

Creators are leaning on Hailuo 2.0/2.3/2.5 Turbo’s start–end frame support to turn Nano Banana Pro stills into tense, cinematic sequences that actually feel like scenes rather than morph tests. In one example, a gritty action‑thriller shot—NB Pro stills of a climber in an industrial structure—is used as opening and closing frames for Hailuo 2.0, which then fills the middle with shaky‑cam movement, cable whips, and white‑flash cuts that stay on‑model. Action thriller test

Other experiments push the same pattern into stylised spaces: Half‑Life 2 behind‑the‑scenes shots, Titanic set recreations, and anime‑style RPG attacks, all driven from NB Pro stills and expanded into 10–20 second moves with dramatic lighting, dust, and camera drift that match the original frame. (Half life test, Titanic bts test, Anime battle test) Hailuo is also promoting Nano Banana 2 as “LIVE and UNLIMITED” inside its Agent surface, which means these start–end workflows are effectively free to iterate for now; you can keep regenerating until the motion fits your cut without worrying about per‑clip model fees. Unlimited hailuo agent For filmmakers and game artists, the key takeaway is that Hailuo 2.x is no longer just a text‑to‑video toy: if you arrive with strong NB Pro keyframes, you can rough out full coverage for a beat—wide to close, idle to impact—while preserving character and lighting continuity closely enough to drop into animatics or even short‑form spots.

Flux 2 plus Veo 3.1 in ImagineArt power drawing timelapses and transitions

In a separate ImagineArt tutorial, Techhalla shows how to pair Flux 2 stills with Veo 3.1 Fast using start–end frames and even JSON prompts to build drawing‑style timelapses and more complex transitions. The workflow: first, generate your character or scene frames in Flux 2 inside ImagineArt, including clever "reverse engineering" prompts that ask the model to partially erase and then sketch back in the subject, which gives you clean progression beats. Flux transitions overview

Those frames then become Veo 3.1’s anchors. For simple draw‑in effects you give Veo an initial blankish or rough version as the start frame and the finished illustration as the end frame; Veo 3.1 Fast handles the in‑between strokes and camera drift, producing an 8–10 second clip that looks like a sped‑up painting session. Drawing timelapse demo

For more advanced motion—like pushing through a game UI into a live‑action reveal—the tutorial uses full JSON prompting to describe what should happen between frames (zoom out from CRT, tilt to arcade, rotate around player, etc.), while still binding Veo to explicit start and end images in the node graph. (Json transition example, Workflow wrapup) The point is: you don’t have to abandon still‑image control to get interesting motion. ImagineArt’s node system makes it easy to insert any strong image model up front, then hand off to Veo 3.1 for shot‑to‑shot evolution, which is especially useful if you want a consistent character or layout but different story beats.
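To show what structured JSON prompting can look like, here is a hedged sketch. This schema is invented for illustration and should not be read as Veo’s actual prompt format; take the real field names from the tutorial itself:

```python
import json

def transition_prompt(start_desc, end_desc, camera_moves, duration=8):
    """Build an illustrative JSON prompt describing what should happen
    between a bound start frame and end frame. The schema is made up
    for this sketch, not Veo 3.1's documented format."""
    return json.dumps({
        "start_frame": start_desc,
        "end_frame": end_desc,
        "camera": camera_moves,  # ordered moves between the two frames
        "duration_seconds": duration,
    }, indent=2)

prompt = transition_prompt(
    "pixelated game UI on a CRT screen",
    "live-action arcade interior",
    ["zoom out from CRT", "tilt down to arcade cabinet", "rotate around player"],
)
print(prompt)
```

The win over a free‑text prompt is that each camera move is an addressable list item, so you can reorder or swap beats without rewriting the whole description.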

Runware outlines NB Pro → Veo 3.1 keyframe pipeline for car spots

Runware and community creator Ror_Fly walked through a practical pipeline that starts with Nano Banana Pro car renders and ends with Veo 3.1‑driven keyframed motion, basically giving you a mini spec commercial without touching a 3D package. The recipe is: generate a set of stills of the car in different poses and locations using NB Pro, pick a handful as style‑consistent hero frames, then feed those into Veo 3.1 as sequential keyframes so it interpolates the motion between them. Car keyframe breakdown

Runware’s own "Reborn through Veo 3.1" teaser leans on the same idea at a higher level: Veo 3.1 sits on top of your favourite image models (including NB Pro and their own FLUX.2 offering) and turns carefully chosen stills into dynamic camera paths, logo reveals, and environmental moves that feel like 3D even though they’re entirely image‑based. Veo promo clip

The important detail for designers is that you’re not leaving everything to Veo: your NB Pro keyframes control composition, reflections, and upgrades (body kits, lighting, decals), while Veo focuses on path, parallax, and motion blur. That means you can iterate on look in the image stage and only render video when the car’s design is locked—far cheaper than rerunning long video generations from text alone. Full process thread For small studios, this is a very achievable “fake 3D” pipeline for auto, product, or sneaker work: NB Pro for art direction, Veo 3.1 for movement, and your usual NLE to cut the beats.
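The interpolation math behind "sequential keyframes" is simple. A sketch assuming evenly spaced hero frames across the clip, so the video model fills equal‑length segments between consecutive stills:

```python
def keyframe_times(n_frames, clip_seconds):
    """Spread n hero frames evenly across a clip so the video model
    interpolates equal-length segments between consecutive keyframes."""
    if n_frames < 2:
        raise ValueError("need at least a start and an end frame")
    step = clip_seconds / (n_frames - 1)
    return [round(i * step, 3) for i in range(n_frames)]

print(keyframe_times(4, 9.0))  # -> [0.0, 3.0, 6.0, 9.0]
```

Uneven spacing is also an option, of course; holding a hero frame longer near the end is a cheap way to emphasize the final design beat without generating more video.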

Creator A/B‑tests Nano Banana stills across Veo 3.1, Kling 2.5, MiniMax 2.3

David M. Comfort posted a quick screen‑recorded experiment where he takes the same Nano Banana Pro imagery and feeds it into three video models—Veo 3.1, Kling 2.5, and MiniMax 2.3—to compare how each handles motion and adherence to the original frame. The clip shows a desktop packed with terminals and UIs as he runs batches across the different services, highlighting that, while text prompts matter, the video model choice has a huge impact on camera feel, interpolation smoothness, and how hard the model drifts off‑style. Multi model experiment

There’s no formal benchmark yet, but it’s the kind of real‑world A/B test filmmakers care about: same NB Pro art, same rough brief, three very different motion signatures. Veo leans into cinematic camera moves, Kling often feels more physical and snappy, and MiniMax offers yet another flavour—all of which you can now audition in a single evening using one set of keyframes.



On this page

Executive Summary
Feature Spotlight: FLUX.2 day‑0 ecosystem rollout
🖼️ FLUX.2 day‑0 ecosystem rollout
ComfyUI ships day‑0 FLUX.2 with FP8 dev build and 4MP multi‑ref workflows
fal hosts FLUX.2 and launches LoRA trainers plus free credit promos
Freepik enables Unlimited FLUX.2 Pro and a Flex tier for precision work
LTX Studio becomes a FLUX.2 launch partner with a deep 7‑part guide
ElevenLabs bakes FLUX.2 into its Image & Video pipeline
OpenArt offers two weeks of Unlimited FLUX.2 Pro for Wonder users
Runware launches day‑0 FLUX.2 with Sonic Inference and per‑image pricing
Vercel, Cloudflare, OpenRouter and Poe expose FLUX.2 as a routed endpoint
FLUX.2 demos highlight hyperrealism, typography, and style transfers across hosts
Krea integrates Flux 2 with 10‑image inputs and native editing
🎬 Shot‑to‑shot workflows: Kling, Veo, ImagineArt
ImagineArt nodes turn NB Pro stills into a looping Kling 2.1 reveal
Leonardo’s ‘Primal Glow’ ad shows NB Pro‑driven storyboard‑to‑spot workflow
NB Pro plus Hailuo 2.x start–end frames land cinematic tension
Flux 2 plus Veo 3.1 in ImagineArt power drawing timelapses and transitions
Runware outlines NB Pro → Veo 3.1 keyframe pipeline for car spots
Creator A/B‑tests Nano Banana stills across Veo 3.1, Kling 2.5, MiniMax 2.3
🧪 NB Pro creator recipes and comparisons
Creators pit Nano Banana Pro against Flux 2 in 11 image tests
NB Pro multi‑scene trick: 4 panels plus instant 3D style swap
One‑ref Nano Banana Pro moodboards turn a single shot into full campaigns
Extreme upscaling test shows 50×50 inputs rescued to clean portraits
Higgsfield reels off 20 Nano Banana Pro use cases in 10 minutes
Miniature studio-in-a-light‑bulb prompt showcases NB Pro scene control
Nano Banana Pro highlighted as a go‑to "pixel clean‑up" tool
NB Pro tested as a style reproducer from sketches to photographic scenes
🎨 Reusable looks: MJ srefs + prompt packs
Midjourney V7 grid recipe with sref 246448893 and high chaos
Atmospheric haze prompt pack for instant cinematic backlight
Modern realistic comic look via MJ sref 2811860944
MJ Style Creator sref 8523380552 nails rainy, cinematic night photography
New MJ sref 7570143073 locks in foggy, monolithic sci‑fi mood
“AI hater” Star Wars doodle style with MJ sref 1214430553
🧊 3D and game‑art pipelines
Tencent Hunyuan 3D Engine goes global for instant game‑ready assets
Nano Banana Pro becomes a pixel‑art and sprite factory for indie games
🎵 Licensed AI music takes shape
Suno and Warner Music shift AI songs to licensed models by 2026
Warner Music and Stability AI partner on licensed pro audio tools
🗣️ Voices, templates, and sync realities
ElevenLabs ships Templates to shortcut avatar, photo-animation, and music workflows
Grok Imagine’s Fun mode leans into goofy, dialog-aware cartoons
Pictory pushes text-to-video with built-in narration for L&D and marketing
Creator argues good lip-sync needs audio and acting generated together
Image-to-lip-sync tools improve, but creators want real video-to-sync models
📈 Benchmarks and memory: what’s actually working
Gemini 3 Pro preview wins Kilo Code’s coding and UI benchmark
BATS makes tool‑using agents budget‑aware instead of blindly calling APIs
EverMemOS brings long‑term, structured memory to open‑source agents
GPT‑5 Pro tops new 400‑story creative writing benchmark
GPT‑5.1 matches GPT‑5 on Epoch index but burns more tokens
PRInTS reward model trains agents that actually read what tools return
🏷️ Black Friday: unlimiteds, credits, and plan cuts
ImagineArt Black Friday: 65% off and a year of unlimited image models
Lovart offers 365 days of unlimited Nano Banana Pro at up to 60% off
LTX Studio Black Friday: 40% off all yearly AI video plans
Pictory BFCM: 50% off annual, 6 months free, and 2400 AI credits
PixVerse Black Friday expands with up to 40% off and new Ultra plan
Magnific AI Black Friday: 50% off all plans including annual
Pollo AI’s Christmas Special offers 30+ festive video templates with free runs
🧰 Comfy Cloud: faster GPUs, custom LoRAs, unified credits
Comfy Cloud upgrades to Blackwell RTX 6000 Pros with 2× A100 speed
Comfy Cloud adds custom LoRA uploads and 1‑hour Pro workflows
Comfy Cloud rolls out unified Comfy Credits and per‑second billing
🗞️ Creator mood: model race and live tests
Nano Banana Pro vs FLUX.2 becomes the main community shootout
Gemini tops US App Store as creators double down on switching from ChatGPT
ImagineArt 1.5 debuts at #3 on global image leaderboard, ahead of Imagen and GPT‑5
Creators meme Claude Opus 4.5’s ‘AGI test fail’ despite strong benchmarks
Creators say Midjourney could stop updating for a year and still keep them