Google extended its OpenAI compatibility layer so existing OpenAI SDK code can call Veo 3.1 video generation and Gemini image models with only API key, base URL, and model changes. It lowers migration cost for teams that want multimodal fallbacks without rewriting client code.

Google's update is narrow but useful for engineers already standardized on OpenAI client libraries. Phil Schmid's launch thread says Veo 3.1 and Gemini image models are now available through the OpenAI compatibility layer, with video requests routed through /v1/videos and image generation through images.generate for Nano Banana and related Gemini image models. The linked compatibility docs frame this as "drop-in compatible" with the OpenAI Python and JavaScript SDKs.
That matters because this is not a new SDK story; it's a surface-area expansion inside an existing compatibility layer. Instead of adopting a Gemini-native client first, teams can keep the OpenAI package and swap the provider endpoint and model names, which the docs post says requires changing the API key, base URL, and selected model.
Google's code example shows the pattern directly: initialize OpenAI(...) with a Gemini API key, point base_url at Google's OpenAI-style endpoint, then call client.videos.create with model="veo-3.1-generate-preview". The sample polls client.videos.retrieve(response.id) until the job reaches completed, then returns a video URL.
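That create-then-poll flow can be sketched with the OpenAI Python SDK. The base URL below is Google's documented OpenAI-compatible endpoint; the exact polling fields and status values are assumptions to verify against the current docs, and the prompt is just a placeholder.

```python
import os
import time


def wait_for_video(client, video_id, interval=10):
    """Poll the compatibility layer until the video job finishes."""
    while True:
        job = client.videos.retrieve(video_id)
        if job.status in ("completed", "failed"):
            return job
        time.sleep(interval)


if __name__ == "__main__":
    from openai import OpenAI  # pip install openai

    # A Gemini API key, not an OpenAI key; only the key, base_url,
    # and model name differ from stock OpenAI SDK usage.
    client = OpenAI(
        api_key=os.environ["GEMINI_API_KEY"],
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    )
    job = client.videos.create(
        model="veo-3.1-generate-preview",
        prompt="A timelapse of clouds rolling over a mountain ridge",
    )
    print(wait_for_video(client, job.id).status)
```

Because the client shape matches OpenAI's, the polling helper is provider-agnostic: the same loop works whether the job runs against Google's endpoint or OpenAI's.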
The same migration pitch appears across the rollout. Logan Kilpatrick's supporting post repeats that Nano Banana and Veo work after updating three lines, while the docs summary adds that the compatibility layer also carries Gemini reasoning controls for supported models. For teams testing multimodal fallbacks or provider portability, the key implementation detail is that video and image generation now fit the same OpenAI-style client shape they may already have in production.
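Image generation fits the same client shape via the standard images.generate call. A minimal sketch follows; the model id for Nano Banana ("gemini-2.5-flash-image") and the b64_json response format are assumptions to check against the compatibility docs.

```python
import base64
import os


def save_b64_image(b64_data, path):
    """Decode a base64 image payload and write it to disk."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_data))


if __name__ == "__main__":
    from openai import OpenAI  # pip install openai

    client = OpenAI(
        api_key=os.environ["GEMINI_API_KEY"],
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    )
    result = client.images.generate(
        model="gemini-2.5-flash-image",  # assumed Nano Banana model id
        prompt="A banana wearing sunglasses, studio lighting",
        response_format="b64_json",
    )
    save_b64_image(result.data[0].b64_json, "banana.png")
```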
MiniMax introduced a flat-rate Token Plan that covers text, speech, music, video, and image APIs under one subscription. It gives teams one predictable bill across modalities and can be used in third-party harnesses, not just MiniMax apps.
Release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
Release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
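As an illustrative sketch (not Cursor's code), the pruning idea behind such an index is: record each file's character n-grams up front, then run the regex only over files whose index contains every n-gram of the query literal. Plain Python sets stand in here for the Bloom filters a production engine would use to bound memory.

```python
import re


def trigrams(text):
    """Set of all 3-character substrings of text."""
    return {text[i:i + 3] for i in range(len(text) - 2)}


def build_index(files):
    # files: {path: contents}; a real engine would store these sets
    # in Bloom filters (or an inverted index) to bound memory.
    return {path: trigrams(body) for path, body in files.items()}


def grep(files, index, literal):
    needed = trigrams(literal)
    for path, grams in index.items():
        if needed <= grams:  # cheap candidate check before the real scan
            for m in re.finditer(re.escape(literal), files[path]):
                yield path, m.start()


files = {"a.py": "def handler(event): pass", "b.py": "x = 1"}
idx = build_index(files)
print(list(grep(files, idx, "handler")))  # → [('a.py', 4)]
```

The win is that the subset test is a constant-time-ish membership check per file, so files that cannot possibly match are never opened, which is exactly the work a ripgrep-style full scan spends most of its time on in large repos.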
Breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
Breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
Nano Banana and Veo are now supported in our OpenAI compat layer by updating 3 lines of code! 🍌
Veo 3.1 and Gemini image models are now available through the OpenAI compatibility layer, no SDK swap needed. 🎬 Generate videos via `/v1/videos` using Veo 3.1 🍌 Generate images with Nano Banana via `images.generate` 🔌 Drop-in compatible with OpenAI Python and JS SDKs