AI Primer

Explore what's new in AI

Where people deep in AI come to stay current.


Breaking

Google introduces Gemini Intelligence on Android with browser use, AppFunctions, and Rambler

Google unveiled Gemini Intelligence at the Android Show with cross-app task automation, Gemini in Chrome, Rambler voice cleanup, custom widgets, and AppFunctions. The rollout moves Gemini into core Android workflows on Pixel and Galaxy devices this summer.

Gemini·12th May·5 min read
Breaking

agent-tui 0.2.0 adds markdown rendering, tool approvals, and local Gemma 4 support

agent-tui shipped v0.2.0 with markdown rendering, tool approvals, configurable reasoning views, and an AI SDK-only dependency chain. The demo also showed Gemma 4 31B running locally, so the terminal UI now covers hosted and on-device models.

Agent Framework·12th May·2 min read
Breaking

Sentence Transformers 5.5.0 adds train-sentence-transformers skill with one-shot 0.8856 NDCG@10

Sentence Transformers 5.5.0 adds an agent skill for fine-tuning embeddings, rerankers, and sparse encoders from Claude Code, Codex, Cursor, and Gemini CLI. The author reports a one-shot German embedding run rising from 0.6720 to 0.8856 NDCG@10 on a local PC.

Reranking·12th May·4 min read
🤖 Agentic Engineering (27)
🧩 Agent Development (4)
🧠 Models & APIs (2)
🎙️ Voice Agents (3)
Inference & Infrastructure (2)
🔒 Security & Reliability (5)
💰 Cost & Operations (1)
🔬 Research & Benchmarks (2)
📊 Business & Policy (2)

Top stories this week

Breaking

Thinking Machines introduces interaction models with 200 ms full-duplex audio, video, and tool use

Thinking Machines previewed interaction models that process audio, video, and text in 200 ms micro-turns, letting the system listen, speak, and react simultaneously. The demos matter because the interaction loop is trained into the model rather than stitched together from separate speech and tool layers.

Realtime AI·11th May·6 min read
AI Primer

Your daily guide to AI tools, workflows, and creative inspiration.

© 2026 AI Primer. All rights reserved.