OpenClaw added Ollama as an official provider through openclaw onboard --auth-choice ollama, alongside documented OpenAI-compatible self-hosted backends such as vLLM. Use it to run OpenClaw workflows against local or custom models instead of a single hosted stack.

Ollama is now an official OpenClaw provider, which matters because model access moves into the same onboarding path as the rest of the product instead of requiring a custom bridge. In the launch post, Ollama says “all models from Ollama will work seamlessly with OpenClaw,” and the accompanying screenshot shows the exact entry point: openclaw onboard --auth-choice ollama. The setup flow shown in the Ollama launch includes provider selection inside OpenClaw’s existing gateway wizard.
The setup flow also exposes deployment assumptions that matter for engineers. The [img:6|Onboarding screenshot] shows OpenClaw’s gateway warning that the stack is “personal-by-default,” with shared or multi-user use requiring lock-down, and defaulting to a loopback bind, token auth, and Tailscale exposure off. It also shows an Ollama base URL on localhost and an Ollama mode selector set to “Cloud + Local,” which suggests the provider abstraction is meant to span both local weights and Ollama-hosted endpoints from the same chat surface.
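To make the loopback default concrete, here is a minimal sketch of checking the local Ollama endpoint an onboarded OpenClaw instance would talk to. The base URL matches Ollama's standard default; /api/tags is Ollama's model-listing endpoint. This is an illustrative pre-flight check, not part of OpenClaw itself.

```python
import json
import urllib.error
import urllib.request

# Default Ollama base URL: loopback only, matching the "personal-by-default"
# posture described in the onboarding flow. Adjust if you bind elsewhere.
OLLAMA_BASE_URL = "http://localhost:11434"

def list_local_models(base_url: str = OLLAMA_BASE_URL) -> list[str]:
    """Return locally pulled model names from Ollama's /api/tags endpoint,
    or an empty list if nothing is listening on the base URL."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError, json.JSONDecodeError):
        return []  # Ollama not running, not reachable, or returned junk

if __name__ == "__main__":
    print(list_local_models())
```

An empty list here before running the onboarding wizard usually means Ollama is not serving on the loopback address the wizard expects.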
Ollama is not the only route, though: it is the new official provider, but OpenClaw is also being positioned around OpenAI-compatible backends. In the vLLM walkthrough, the vLLM team says running OpenClaw with your own model is “surprisingly easy and fast”: deploy the model with vLLM, expose an OpenAI-compatible API, and point OpenClaw at that endpoint.
That post adds the implementation detail engineers care about most: “tool calling works out of the box,” which means agent workflows do not need a provider-specific rewrite when the serving layer is swapped. The quick demo uses Kimi K2.5 as the example model and shows a vLLM server coming up before the OpenClaw UI successfully invokes tools. Together with the Ollama launch, that makes the new provider less of a one-off integration and more part of a coherent story around local and self-hosted inference targets.
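The “no provider-specific rewrite” claim comes down to the request shape: vLLM's server speaks the OpenAI chat-completions protocol, including the tools schema, so a client points at a different base URL and nothing else changes. A minimal sketch, assuming a vLLM server started with vllm serve on its default localhost:8000; the tool name read_file and the exact model id are hypothetical, chosen for illustration.

```python
import json
import urllib.request

# Assumed endpoint: `vllm serve <model>` exposes an OpenAI-compatible API
# at localhost:8000 by default.
VLLM_URL = "http://localhost:8000/v1/chat/completions"

def build_tool_call_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat request advertising one tool.

    The same payload works against any OpenAI-compatible backend, which is
    why swapping the serving layer does not require rewriting the agent."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "read_file",  # hypothetical tool for illustration
                "description": "Read a file from the workspace",
                "parameters": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            },
        }],
    }

def post(url: str, payload: dict) -> dict:
    """POST the request to a running server (not called in this sketch)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = build_tool_call_request("Kimi-K2.5", "Open README.md")
    print(payload["tools"][0]["function"]["name"])
```

If the server honors the tools field, the response contains tool_calls the agent loop can execute, exactly as with a hosted provider.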
The operational use case is already visible in maintainer posts. In the cron-job example, Steinberger says an OpenClaw mention-blocker “runs every 5 min” and filters “spam/reply guy/promo stuff,” with the attached digest showing dozens of automated moderation decisions [img:2|Digest screenshot].
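The shape of such a scheduled moderation pass is simple to sketch. This is illustrative only: the keyword patterns below stand in for whatever model-driven judgment the actual mention-blocker applies, and none of the names come from OpenClaw.

```python
import re

# Hypothetical stand-in rules for the "spam/reply guy/promo stuff" filter;
# the real agent presumably classifies with a model, not regexes.
SPAM_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bpromo\b", r"\bgiveaway\b", r"check out my\b")
]

def should_block(mention_text: str) -> bool:
    """Return True if a mention matches any spam/promo pattern."""
    return any(p.search(mention_text) for p in SPAM_PATTERNS)

def run_batch(mentions: list[str]) -> dict[str, list[str]]:
    """One scheduled pass (e.g. every 5 minutes): split mentions into
    keep/block buckets, which a digest can then summarize."""
    out: dict[str, list[str]] = {"keep": [], "block": []}
    for m in mentions:
        out["block" if should_block(m) else "keep"].append(m)
    return out
```

Run on a cron-style five-minute schedule, each batch yields the kind of per-mention decisions visible in the digest screenshot.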
OpenClaw’s provider story is getting simpler faster than its plugin story. Steinberger said he wants plugins to become “more powerful” while making core “leaner,” and in the plugin roadmap he specifically called out support for “claude code/codex plugin bundles” as work in progress. A follow-up reply from him — “I’m about to land this!” — suggests at least some of that work is moving quickly.
But the current limits are explicit in the DenchClaw discussion. The linked GitHub issue says some pieces could become plugins — custom tools, prompt-build hooks, and model-routing hooks — while major parts cannot. The architectural blockers listed there include serving a full Next.js app, terminal emulation over WebSockets with node-pty, a sandboxed app runtime, and custom chat orchestration with its own agent pool and SSE transport. That distinction matters: OpenClaw is getting easier to point at local models, but turning large opinionated forks into drop-in extensions still requires new API surfaces rather than just more providers.
Claude can now drive macOS apps, browser tabs, the keyboard, and the mouse from Claude Cowork and Claude Code, with permission prompts when it needs direct screen access. That makes legacy desktop workflows automatable, and Anthropic is pairing the push with more background-task support for longer agent loops.
release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
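The Instant Grep item above is worth unpacking: the classic trick is an inverted index from n-grams to files, so a regex search first intersects posting sets to get a small candidate list, then runs the real (expensive) scan only on those files. A minimal trigram sketch, not Cursor's actual implementation; a per-file Bloom filter could replace the posting sets to trade accuracy for memory.

```python
import re
from collections import defaultdict

def trigrams(text: str) -> set[str]:
    """All 3-character substrings of the input."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

class TrigramIndex:
    """Inverted index: trigram -> set of file paths containing it."""

    def __init__(self) -> None:
        self.postings: dict[str, set[str]] = defaultdict(set)

    def add(self, path: str, content: str) -> None:
        for g in trigrams(content):
            self.postings[g].add(path)

    def candidates(self, literal: str) -> set[str]:
        """Files containing every trigram of a literal query substring.
        A superset of the true matches, but usually a tiny one."""
        grams = trigrams(literal)
        if not grams:  # query too short to prune anything
            return set().union(*self.postings.values()) if self.postings else set()
        sets = [self.postings.get(g, set()) for g in grams]
        return set.intersection(*sets)

def grep(index: TrigramIndex, files: dict[str, str],
         pattern: str, literal: str) -> list[str]:
    """Confirm candidates with a real regex scan (the expensive step)."""
    rx = re.compile(pattern)
    return sorted(p for p in index.candidates(literal) if rx.search(files[p]))
```

The speedup comes from the candidates step: posting-set intersections are milliseconds even on large repos, so the regex engine only ever touches a handful of files.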
Ollama is now an official provider for OpenClaw. openclaw onboard --auth-choice ollama All models from Ollama will work seamlessly with OpenClaw. 🦞 Use it for the tasks you want, all from your chat app. Thank you @steipete for helping and reviewing. 🦞
Hey @steipete, as you said DenchClaw would make a great plugin, I answered elaborately here on why it wouldn’t be possible under the current structure:
Thinking how we can evolve openclaw plugins to be more powerful while also making core leaner. Also wanna add support for claude code/codex plugin bundles. Good stuff coming soon!