Luma is rolling out Uni-1 as a reference-driven image model built around intelligence, directability and cultural taste, with examples spanning sketch conversion and multi-image blends. Use it when references matter more than giant text prompts.

Uni-1 is now live as an image model on Luma's site, with the product page describing it as a multimodal system for image generation, editing, and reference-grounded control. Luma's public rollout centers on three claims: the model can infer intent, follow direction from examples instead of dense prompt syntax, and handle culturally specific visual styles with fewer misses.
The most concrete change for creators is how much weight references appear to carry. In Luma's examples, two portraits become a single cinematic saloon scene; a crystal bust plus a luxury interior become a full-body fashion image in the same material language; and two faces plus a sword-fight frame become a staged medieval duel. Another example turns annotated concept art into polished character renders, which is a stronger promise than simple style transfer.
The rollout examples imply two practical use cases. First, Uni-1 looks built for art direction by assembly: bring separate references for subject, setting, pose, and finish, then let the model reconcile them. Second, it looks useful for sketch-to-image and look-dev work, where the input is less a finished prompt than a packet of visual constraints.
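If Uni-1 is exposed over HTTP like Luma's other generation APIs, an assembly-style request might be structured along these lines. This is a sketch only: the endpoint URL, field names, reference roles, and weight semantics below are assumptions for illustration, not documented API.

```python
import os
import requests

# Hypothetical sketch of an "art direction by assembly" request.
# Endpoint, payload fields, and weight semantics are assumed --
# check Luma's actual Uni-1 API docs before relying on any of this.
API_URL = "https://api.lumalabs.ai/v1/images"  # assumed endpoint
API_KEY = os.environ["LUMA_API_KEY"]

payload = {
    "model": "uni-1",
    "prompt": "full-body fashion portrait, cinematic lighting",
    # Separate references for subject, setting, pose, and finish,
    # each weighted, for the model to reconcile into one image.
    "references": [
        {"url": "https://example.com/subject.jpg", "role": "subject", "weight": 0.9},
        {"url": "https://example.com/interior.jpg", "role": "setting", "weight": 0.6},
        {"url": "https://example.com/pose.jpg", "role": "pose", "weight": 0.7},
        {"url": "https://example.com/crystal.jpg", "role": "finish", "weight": 0.5},
    ],
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```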
That lines up with early usage from DreamLabLA's network, where keyframes made with Uni-1 were turned into the finished short Liminal Memories. The same thread says the model was also used to generate consistent PBR, normal, and displacement maps, then paired with triplanar and image-projection workflows to speed asset creation without dropping quality.
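For context on the triplanar half of that pipeline: triplanar projection samples a texture along each world axis and blends the three samples by the surface normal, which hides UV seams on generated maps. Below is a minimal numpy sketch of the blend, a generic illustration of the technique rather than DreamLabLA's actual setup.

```python
import numpy as np

def triplanar_blend(sample_x, sample_y, sample_z, normal, sharpness=4.0):
    """Blend three axis-aligned texture samples by the surface normal.

    sample_x/y/z: (..., 3) RGB samples projected along the X, Y, Z axes.
    normal:       (..., 3) surface normal (need not be unit length).
    sharpness:    higher values tighten the blend toward the dominant axis.
    """
    w = np.abs(normal) ** sharpness
    w = w / w.sum(axis=-1, keepdims=True)  # normalize weights to sum to 1
    return (w[..., 0:1] * sample_x
            + w[..., 1:2] * sample_y
            + w[..., 2:3] * sample_z)

# Toy check: a normal pointing mostly along +Z favors the Z projection.
n = np.array([0.1, 0.1, 0.99])
red, green, blue = np.eye(3)
print(triplanar_blend(red, green, blue, n))  # dominated by blue
```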
Luma launched Agents for creative work, with creator tests focused on keeping characters, lighting and environments coherent across multi-scene sequences. Use it to cut file juggling and lock image generation to Uni-1 when you need tighter control.
Update: OpenAI has removed the Sora app as creators and Hacker News users debate whether the novelty ever turned into durable usage. Save projects now and plan to test ChatGPT-integrated or rival video tools next.
Update: CapCut is expanding Dreamina Seedance 2.0 while Topview has restored access within 24 hours, and creators are stress-testing it for vertical repurposing, long prompts and stylized start frames. Try it for fast video conversions, but budget cleanup passes for continuity and transitions.
Prompt: Creators are turning Nano Banana 2 templates into reusable prompt systems for merch shots, sports ads, editorial portraits and modular scene builds. Keep the scaffold fixed and swap only brand, lens, action or environment variables to iterate fast (see the sketch after these briefs).
Workflow: Riverside's Co-Creator reads transcripts automatically and turns chat-style requests into cuts, captions, thumbnails and social copy from one workspace. Use it when you need fast repurposing without timeline scrubbing, then polish the output by hand.
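To make the Nano Banana 2 item above concrete, here is a minimal Python sketch of a fixed scaffold with swappable variables. The scaffold wording and variable names are illustrative assumptions, not an official template.

```python
# Reusable prompt scaffold: the structure stays fixed and only the
# labeled variables change per iteration. Scaffold text and variable
# names here are illustrative, not an official Nano Banana 2 template.
SCAFFOLD = (
    "Product shot of {brand} sneaker, {lens} lens, "
    "{action}, set in {environment}, studio-grade lighting, "
    "clean composition, high detail"
)

variants = [
    {"brand": "Acme", "lens": "85mm", "action": "mid-air kick flip",
     "environment": "neon-lit alley"},
    {"brand": "Acme", "lens": "35mm", "action": "static hero pose",
     "environment": "white cyclorama"},
]

for v in variants:
    print(SCAFFOLD.format(**v))
```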
Uni-1 is intelligent. Understanding is the first step, and it starts with intent. This is what intelligent generation looks like. Try today → lumalabs.ai/uni-1
Uni-1 is directable. It is a creative collaborator. References, intent, and direction replace complex prompts because Uni-1 actually understands the work. Try today → lumalabs.ai/uni-1
Uni-1 is cultured. Intelligence without taste is just output. Uni-1 brings aesthetic understanding and visual judgment trained directly into the model. Try today → lumalabs.ai/uni-1
Liminal Memories: Things I Think I Remember, Ep. 1 - Summer, 1996, by Keith Paciello. Keyframes made with Uni-1 by @LumaLabsAI.