Google says its new realtime voice model improves noisy-environment understanding, long conversations and function calling, and it's rolling into Gemini Live, Search Live and AI Studio. Voice creators can test it for lower-latency spoken interactions.

Google is pitching this release as a practical upgrade for spoken interfaces, not just a benchmark refresh. The launch materials in the main announcement show Gemini 3.1 Flash Live hitting 90.8% on ComplexFuncBench audio and 95.9% on Big Bench Audio speech reasoning, with the included charts positioning it ahead of earlier Gemini native-audio versions.
For creators building voice-led experiences, the more useful change is behavioral. Google's product thread says the model keeps track of longer conversations, understands details in noisy settings, and makes function calling more reliable, which maps directly to voice agents that need to listen, remember context, and trigger tools without awkward retries. The availability update says it is already rolling into Gemini Live and Search Live, while developers can start testing it in AI Studio and dig into the fuller product write-up via Google's overview.
Glass says its Mac editor can tap existing Claude, ChatGPT and Gemini subscriptions inside one coding workspace, avoiding separate API keys and usage meters. Compare the flat-subscription workflow against Cursor-style billing before you move a product build.
Update: Seedance 2.0 is now showing up across CapCut Video Studio, Dreamina and Pippit with multi-scene timelines and shot templates. Creators can use it to move from single clips to editable long-form production.
Release: Runway's new web app turns a prompt or starter image into a cut scene with dialogue, sound effects and shot pacing. Creators can now block whole sequences instead of stitching isolated clips.
Release: Posts report Nano Banana 2 now offers 4K image output, and creators are using it for poster systems, hidden-object layouts and character sheets. Higher-res stills should travel better into video, branding and print workflows.
Update: Official and partner demos show Uni-1 handling localized edits, dense layouts, manga generation and Pouty Pal chibis. Creators can reuse one model across avatar, editorial and comic workflows.
Introducing Gemini 3.1 Flash Live, our new realtime model to build voice and vision agents! We have spent more than a year improving the model, infra, and experience. The result? A step-function improvement in quality, reliability, and latency.
Say hello to Gemini 3.1 Flash Live. 🗣️ Our latest audio model delivers more natural conversations with improved function calling – making it more useful and informed. Here’s what’s new 🧵