Medeo Video Skill released an open-source OpenClaw setup that lets users generate video by chat, add assets, and run jobs asynchronously after a quick API-key install. Try it if you want text-in, video-out workflows without switching across dashboards.

The release is not a standalone video app. It is an OpenClaw skill that lets an AI assistant handle Medeo video generation through chat: the workflow starts with a text request, such as asking for a coffee-brewing video, and ends with a delivered link rather than a dashboard session. The repo post frames that as “text in, video link out” on a five-to-ten-minute turnaround.
That matters for creative teams because the handoff is the product: prompt, render, and delivery sit inside the assistant instead of being split across upload screens, export dialogs, and revision loops.
The setup starts by sending OpenClaw a natural-language install command that includes a reference to the GitHub repo. From there, the assistant walks through Medeo API-key setup and stores the configuration locally, according to the repo summary attached to the install post.
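The post does not show what the stored configuration actually looks like. As a purely illustrative sketch, a skill like this might persist the API key in a small local JSON file along these lines; the path, file name, and field name here are all invented for illustration, not taken from the repo:

```python
import json
from pathlib import Path

# Hypothetical config location and schema -- not documented in the repo post.
CONFIG_PATH = Path.home() / ".medeo-skill" / "config.json"

def save_config(api_key: str, path: Path = CONFIG_PATH) -> None:
    """Persist the Medeo API key locally so later chat requests can reuse it."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"medeo_api_key": api_key}))

def load_config(path: Path = CONFIG_PATH) -> dict:
    """Read the stored configuration back; returns {} if setup never ran."""
    return json.loads(path.read_text()) if path.exists() else {}
```

The point of local storage is simply that the key survives between chat sessions, so a user configures once and then only sends prompts.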
Once connected, the skill supports several production paths. The feature list says users can generate a ready-to-post video from one sentence, upload their own images or footage as source assets, and schedule recurring outputs while renders run asynchronously in the background.
The public GitHub repo makes this more than a demo thread. Its attached summary describes template support, job tracking, history management, and controls for parameters such as orientation or length.
The same summary also notes a practical constraint: generation uses Medeo credits, so this is open-source orchestration around a paid rendering backend rather than a fully local video stack.
A shared workflow converts GTA-style stills into photoreal images with Nano Banana 2, then animates them in LTX-2.3 Pro 4K using detailed material, skin, vehicle, and camera prompts. Try it for trailer-style previsualization if you want more control at lower cost.
Release: Topview added Seedance 2.0 to Agent V2, pairing multi-scene generation with a storyboard timeline and Business Annual access billed as 365 days of unlimited generations. That moves longform video workflows toward editable sequences instead of stitched clips.
Workflow: Creators are moving from V8 calibration complaints to darker film-still scenes, fashion shots, and worldbuilding tests, with ECLIPTIC remakes showing stronger depth and lighting. Retest saved SREF recipes if you rely on V8 for cinematic ideation.
Workflow: Shared Nano Banana 2 workflows now cover turnaround sheets, distinctive facial traits, and photoreal rerenders that keep the framing of a reference image. Use one prompt grammar for concept art, editorial portraits, and animation prep.
From the install post: “The install is insane. You literally just send this to your OpenClaw assistant: ‘Please install the medeo-video skill. GitHub: github.com/one2x-ai/medeo…’”

From the release post: “Medeo Video Skill for OpenClaw. 100% Opensource. MIT License. Repo 👇 github.com/one2x-ai/medeo…”