Creators kept testing Grok Imagine with multi-reference anime prompts and extended clips, but users also reported a persistent double-exposure artifact across generations. Use it for exploration, then rerun critical shots elsewhere until the bug clears.

The strongest creator use case in this batch is controlled visual synthesis. In Artedeingenio's test, multiple reference images are used to lock Grok Imagine into a classic 1980s OVA anime treatment, with the output staying close to the same palette, character design, and cel-animation feel across several generations. That makes the feature more useful for look development than one-off concept art.
A second experiment pushes the tool toward IP-adjacent mashup prototyping. In another demo, cartoon-character references are mixed into new scenes, then extended into motion, producing a short sequence that morphs from a Buzz Lightyear-like figure into an original creature. The creative takeaway is less about exact character fidelity and more about using reference stacks to invent hybrid designs that can survive into animation.
The current blocker is image integrity. According to bennash's report, Grok Imagine is still producing a double-exposure or ghosting artifact across repeated generations, and the complaint says the problem has persisted for more than a week. The attached example shows the issue appearing even on a basic prompt for a cat wearing a tiny crown, which suggests the bug is broader than edge-case multi-reference setups.
That makes the new reference controls harder to trust for shots that need clean final frames. Even the repost highlighting the update, while positive about the model's jump in quality, frames the change as recent enough that creators are still discovering where the gains hold and where outputs fall apart.
Creators showed Grok Imagine generating a still on a phone, auto-animating it, and extending the clip past the first 10 seconds. Try it for fast social video prototypes when you want image-to-video without leaving mobile.
Release: Topview added Seedance 2.0 to Agent V2, pairing multi-scene generation with a storyboard timeline and Business Annual access billed as 365 days of unlimited generations. That moves long-form video workflows toward editable sequences instead of stitched clips.
Workflow: Creators are moving from V8 calibration complaints to darker film-still scenes, fashion shots, and worldbuilding tests, with ECLIPTIC remakes showing stronger depth and lighting. Retest saved SREF recipes if you rely on V8 for cinematic ideation.
Workflow: A shared workflow converts GTA-style stills into photoreal images with Nano Banana 2, then animates them in LTX-2.3 Pro 4K using detailed material, skin, vehicle, and camera prompts. Try it for trailer-style previsualization if you want more control at lower cost.
Workflow: Shared Nano Banana 2 workflows now cover turnaround sheets, distinctive facial traits, and photoreal rerenders that keep the framing of a reference image. Use one prompt grammar for concept art, editorial portraits, and animation prep.
Using multiple reference images in Grok Imagine is wonderful. It opens up so many creative opportunities. With this classic '80s OVA anime style, it works great.
Mixing different cartoon characters to create completely new scenes with Grok Imagine is really fun. And if you extend the video, you can achieve animations worthy of a studio like Disney or Pixar.
sigh, @grok imagine is still buggy with this unwanted double exposure that's in half the generations right now. It's been like this for a week plus.