Adobe has opened the Firefly Custom Models beta to everyone, letting creators train on their own images for consistent styles and recurring characters. Brands and filmmakers can keep visual assets on-model across image generation.

Firefly Custom Models is now in open beta, letting users upload their own images and train a model around a specific visual language or character. In the announcement, Adobe positions it as a way to preserve consistency rather than start from a generic house style, and the attached [vid:0|launch demo] shows fast cuts of abstract, photographic, and graphic outputs under the “Custom Models” beta branding.
That positioning matters for creative teams that need repeatable looks. A supporting repost shared by Adobe Firefly describes the workflow as uploading assets so Firefly can learn a "unique style," while the training example explicitly calls out photo styles, illustrations, and characters as the main categories.
The strongest use case in the evidence is style transfer from a creator’s own archive into new concept work. In Kashtanova’s example, the trained model is based on their photography and used to generate images for a film pipeline, suggesting Custom Models is less about one-off prompts and more about extending an existing body of work.
Community posts also point to narrower visual signatures. The shared image shows a cougar portrait rendered with intense magenta-and-amber rim lighting against a flat purple background, the kind of repeatable color treatment and subject styling that brands, editorial artists, and pitch-deck makers usually have to brute-force by prompting or compositing. Even smaller replies like this thread starter center on the training step itself, which suggests the workflow, not just the final image, is what creators are testing first.
Promotional posts around Higgsfield Original Series say Arena Zero licensed a 22-year-old bartender's face in a seven-figure deal. Treat the figure as unverified, but watch this space as AI-native series test likeness licensing as a casting model.
Release: Topview added Seedance 2.0 to Agent V2, pairing multi-scene generation with a storyboard timeline and Business Annual access billed as 365 days of unlimited generations. That moves long-form video workflows toward editable sequences instead of stitched clips.
Workflow: Creators are moving from V8 calibration complaints to darker film-still scenes, fashion shots, and worldbuilding tests, with ECLIPTIC remakes showing stronger depth and lighting. Retest saved SREF recipes if you rely on V8 for cinematic ideation.
Workflow: A shared workflow converts GTA-style stills into photoreal images with Nano Banana 2, then animates them in LTX-2.3 Pro 4K using detailed material, skin, vehicle, and camera prompts. Try it for trailer-style previsualization if you want more control at lower cost.
Workflow: Shared Nano Banana 2 workflows now cover turnaround sheets, distinctive facial traits, and photoreal rerenders that keep the framing of a reference image. Use one prompt grammar for concept art, editorial portraits, and animation prep.
We just released Firefly Custom Models (beta) to everyone 🎉 You can train a model on your own images to keep a consistent style or character! I trained a model on my photography and generated images that would be impossible to get any other way for my upcoming film.
Train your own custom model (beta) in @AdobeFirefly. Photo style, illustrations, & characters. I fell in love w/the Penguin Girl that’s so popular on the Internet. I wanted my very own version. I’ve previously spent hours creating and prompting using Nano Banana 2 in Boards.