Variety reports that As Deep as the Grave used generative AI to recreate Val Kilmer's performance, built from material supplied by his family and released with their backing. For filmmakers, it is an early consent-based case study in digital resurrection, where rights and audience expectations matter as much as the technology.

The core filmmaking move here is specific: As Deep as the Grave did not use archival footage as-is. Per Variety’s report, director Coerte Voorhees and his team generated Kilmer’s likeness and voice for the character Father Fintan after production delays, drawing on materials provided by Mercedes and Jack Kilmer. The article says scenes that had been at risk of being cut for budget reasons were brought back through that process.
That makes this less a novelty cameo than a post-production performance build. The first-look post shows a finished character in costume and scene context, which matters for filmmakers because the creative choice extends beyond face replacement into performance continuity, blocking, and audience acceptance.
For directors and VFX teams, the most usable takeaway is the consent structure. In a reply in the discussion, one creator argues that family approval can make this kind of work feel restorative rather than extractive, and Variety’s article makes clear that family cooperation was central to the production.
The harder creative question is tone. The film reportedly kept Kilmer’s illness-shaped voice in the character instead of generating a frictionless approximation, which suggests a more documentary-minded use of generative tools than a pure “de-aging” fantasy. That choice will likely shape how audiences judge similar AI performances going forward.
KittenTTS 0.8 ships new 15M, 40M, and 80M text-to-speech models, including an int8 nano model of around 25MB that runs entirely on CPU, no GPU required. It is a fit for narration, character voices, and lightweight assistants that need offline or edge-friendly speech.
Update: OpenAI has removed the Sora app, as creators and Hacker News users debate whether its novelty ever turned into durable usage. Save your projects now and plan to test ChatGPT-integrated or rival video tools next.
Update: CapCut is expanding Dreamina Seedance 2.0 access, while Topview restored its access within 24 hours, and creators are stress-testing the model for vertical repurposing, long prompts, and stylized start frames. Try it for fast video conversions, but budget cleanup passes for continuity and transitions.
Prompt: Creators are turning Nano Banana 2 templates into reusable prompt systems for merch shots, sports ads, editorial portraits, and modular scene builds. Keep the scaffold fixed and swap only the brand, lens, action, or environment variables to iterate fast.
Workflow: Riverside's Co-Creator reads transcripts automatically and turns chat-style requests into cuts, captions, thumbnails, and social copy from one workspace. Use it when you need fast repurposing without timeline scrubbing, then polish the output by hand.
Source: variety.com/2026/film/news…
From the discussion: "Val Kilmer died in 2025 and never filmed one damn scene of this film. AI resurrected him anyway. Thoughts??"