A shared workflow converts GTA-style stills into photoreal images with Nano Banana 2, then animates them in LTX-2.3 Pro 4K using detailed material, skin, vehicle, and camera prompts. Try it for trailer-style previsualization if you want more control at lower cost.

The workflow starts with a reference image and a long Nano Banana 2 prompt designed to preserve layout, camera angle, subject positions, and the original pink-teal Vice City palette while replacing flat game rendering with photoreal surfaces. In the base prompt, the creator specifies ARRI Alexa 35 capture, a Zeiss Supreme Prime 35mm T1.5 look, subtle 35mm grain, no HDR glow, and no over-sharpening.
That makes the method more like controlled re-lighting and material translation than a loose style transfer. The same thread says those re-rendered stills were then moved into LTX Studio for animation, with the creator pointing to LTX Studio as the tool used for the second stage.
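The two-stage base prompt described above can be sketched as a small template: layout-preserving constraints plus a fixed camera-look block. The constant names, helper function, and exact wording below are illustrative assumptions, not the creator's actual prompt text:

```python
# Illustrative sketch of the base re-render prompt structure; the field
# names and phrasing are assumptions, not the shared prompt verbatim.

BASE_CONSTRAINTS = [
    "preserve layout, camera angle, and subject positions from the reference image",
    "keep the original pink-teal Vice City palette",
    "replace flat game rendering with photoreal surfaces",
]

CAMERA_LOOK = [
    "captured on ARRI Alexa 35",
    "Zeiss Supreme Prime 35mm T1.5 look",
    "subtle 35mm film grain",
    "no HDR glow, no over-sharpening",
]

def build_base_prompt(subject: str) -> str:
    """Join the subject, constraint block, and camera block into one prompt."""
    parts = [subject] + BASE_CONSTRAINTS + CAMERA_LOOK
    return ", ".join(parts)

print(build_base_prompt("photoreal re-render of a GTA-style beach street scene"))
```

Keeping the constraint and camera blocks as fixed lists is what makes the result repeatable across stills: only the subject line changes per image.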
The prompt stack gets very granular after the first pass. For faces, the skin add-on asks for subsurface scattering, micro-pore detail, hair follicles, realistic sclera tone, and neon spill that tints skin without flattening it; if the first generation still looks waxy, the follow-up turn explicitly asks for uneven tone and more visible pore texture.
A second block targets materials by category. In the material pass, clothing gets weave, seam detail, weight, and fabric-specific behavior; vehicles get clear-coat reflections, glass refraction, tire texture, and visible panel gaps; even metal props are described in PBR terms so chrome, polymer, and blued steel react differently to the same pink-teal lighting.
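The category-targeted material pass above amounts to appending per-category add-on blocks to the base prompt. A minimal sketch, assuming a dict keyed by category (the block names and phrasing are paraphrases of the thread, not the exact shared text):

```python
# Hypothetical sketch of the per-category material pass; keys and
# wording are assumptions based on the description above.

MATERIAL_BLOCKS = {
    "skin": ("subsurface scattering, micro-pore detail, hair follicles, "
             "realistic sclera tone, neon spill tinting skin without flattening it"),
    "clothing": "visible weave, seam detail, fabric weight, fabric-specific drape",
    "vehicles": ("clear-coat reflections, glass refraction, tire texture, "
                 "visible panel gaps"),
    "props": ("PBR metals: chrome, polymer, and blued steel each reacting "
              "differently to the pink-teal lighting"),
}

def material_pass(base_prompt: str, categories: list[str]) -> str:
    """Append the selected material add-on blocks to the base prompt."""
    addons = [MATERIAL_BLOCKS[c] for c in categories]
    return base_prompt + ". " + ". ".join(addons)

print(material_pass("photoreal Vice City street scene", ["skin", "vehicles"]))
```

The same dict also covers the waxy-skin follow-up: re-issuing only the "skin" block (with stronger pore and uneven-tone language) is a second turn, not a full re-prompt.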
The LTX outputs are short but specific. One prompt creates a third-person beach chase with two muscle cars, headlights, wet sand, HUD overlays, and a warm sunset grade. Another shifts to a dialogue shot: a mustached character in a red tie-dye shirt delivers a line with only slight jaw and eye movement, framed as a 24fps letterboxed cutscene closeup.
The rest of the examples show how much camera language is being packed into the prompts. There is a dirt-bike departure with a low tracking follow cam, a boulevard walk-up in humid overcast light, a Countach night drive built around wet-road neon reflections, and a race-start clip centered on checkpoint signage and HUD timing. Together they read less like finished scenes than fast trailer previs: composition-locked keyframes first, then motion blocks with explicit camera, atmosphere, and game-UI instructions.
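The motion blocks above share a recurring shape: a shot description plus explicit camera, atmosphere, and game-UI instructions. That structure can be sketched as a small container; the class and field names are illustrative assumptions, not part of the shared prompts:

```python
# Sketch of the motion-prompt structure inferred from the examples above.
# The dataclass, its fields, and the sample values are assumptions.

from dataclasses import dataclass

@dataclass
class MotionPrompt:
    shot: str        # what happens in frame
    camera: str      # explicit camera language
    atmosphere: str  # light, weather, grade
    ui: str          # game-UI / HUD instructions

    def render(self) -> str:
        """Flatten the four blocks into one prompt string."""
        return ". ".join([self.shot, self.camera, self.atmosphere, self.ui])

chase = MotionPrompt(
    shot="third-person beach chase, two muscle cars with headlights on wet sand",
    camera="low tracking follow cam, 24fps letterboxed framing",
    atmosphere="warm sunset grade, humid haze",
    ui="HUD overlays with minimap and timing",
)
print(chase.render())
```

Separating the blocks makes it easy to swap one axis at a time, e.g. reuse the same shot and UI while changing only the camera line.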
Topview added Seedance 2.0 to Agent V2, pairing multi-scene generation with a storyboard timeline and Business Annual access billed as 365 days of unlimited generations. That moves longform video workflows toward editable sequences instead of stitched clips.
Creators are moving from V8 calibration complaints to darker film-still scenes, fashion shots, and worldbuilding tests, with ECLIPTIC remakes showing stronger depth and lighting. Retest saved SREF recipes if you rely on V8 for cinematic ideation.
Shared Nano Banana 2 workflows now cover turnaround sheets, distinctive facial traits, and photoreal rerenders that keep the framing of a reference image. Use one prompt grammar for concept art, editorial portraits, and animation prep.
Luma launched Uni-1 and says it can reason through prompts while generating images. Creators report stronger composition on first pass for sketch-to-photo, multiview characters, and reference-led scenes, which should cut correction loops.