Ollama added scheduled /loop prompts for Claude Code, enabling recurring research, reminders, bug triage, and PR checks. Use it to automate background routines in local or self-hosted agent setups without adding a separate scheduler first.

Ollama's launch post says Claude Code can now "run prompts on a schedule" with /loop, turning recurring prompts into built-in automation for coding and research workflows. The example command in the launch thread is a simple recurring prompt: "Give me the latest AI news every morning". That makes /loop look less like a one-off agent command and more like a lightweight scheduler embedded directly in the coding tool.
The thread keeps the initial scope concrete: Ollama's examples cover pull-request checks, recurring research, and bug reporting and triage. A separate post adds reminders, which suggests the feature can handle both repo automation and general recurring prompts.
Ollama's Claude Code docs position the integration around Claude Code running against open models through Ollama's Anthropic-compatible API. The docs name models including qwen3.5, glm-5:cloud, and kimi-k2.5:cloud, and say Claude Code can be launched either with quick commands or via manual environment-variable configuration.
The same documentation gives one operational constraint that matters for engineers: it recommends "at least 64,000 tokens" of context for better results. So while the scheduling feature is new, the practical rollout depends on the model and context budget behind the Claude Code session, not just the /loop command itself.
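As a rough illustration of the manual environment-variable route the docs mention, the setup might look like the sketch below. The endpoint URL, the OLLAMA_CONTEXT_LENGTH variable, and the specific ANTHROPIC_* variable values are assumptions here, not confirmed by the launch thread; check docs.ollama.com for the exact configuration.

```shell
# Sketch only: variable names and values are assumptions, verify against docs.ollama.com.

# Serve a local model with a context window at or above the recommended 64,000 tokens
# (assumes your Ollama version honors OLLAMA_CONTEXT_LENGTH).
OLLAMA_CONTEXT_LENGTH=65536 ollama serve &

# Point Claude Code at Ollama's Anthropic-compatible endpoint.
export ANTHROPIC_BASE_URL="http://localhost:11434"  # assumed local Ollama endpoint
export ANTHROPIC_AUTH_TOKEN="ollama"                # placeholder; no real key for a local server
export ANTHROPIC_MODEL="qwen3.5"                    # one of the models named in the docs

# Start a Claude Code session, then use /loop inside it for recurring prompts.
claude
```

The point of the context-length line is the constraint from the docs: a /loop schedule is only as useful as the context budget of the session running it.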
Anthropic is testing a new /init flow that interviews users and configures CLAUDE.md, hooks, and skills in new or existing repos. Try it in a sandbox repo, then watch for skills behavior differences between chat and web surfaces.
release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
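The idea behind this kind of index can be sketched without reference to Cursor's actual implementation: build an inverted index from character n-grams to files, intersect posting lists for the n-grams of a literal in the query to get a small candidate set, and only then run the full regex scan on the candidates. The toy Python below uses trigrams and a plain literal pattern; the class and function names are illustrative, and real systems add Bloom filters and regex-to-trigram query planning on top.

```python
import re
from collections import defaultdict


def trigrams(text):
    """All 3-character substrings of text."""
    return {text[i:i + 3] for i in range(len(text) - 2)}


class TrigramIndex:
    """Toy inverted index from trigrams to file names (illustrative, not Cursor's code)."""

    def __init__(self):
        self.postings = defaultdict(set)  # trigram -> set of file names
        self.files = {}                   # file name -> contents

    def add(self, name, text):
        self.files[name] = text
        for g in trigrams(text):
            self.postings[g].add(name)

    def search(self, literal):
        # Narrow candidates by intersecting posting lists for the literal's
        # trigrams, then confirm each candidate with a real regex scan.
        grams = trigrams(literal)
        if not grams:
            candidates = set(self.files)  # pattern too short to filter
        else:
            candidates = set.intersection(
                *(self.postings.get(g, set()) for g in grams)
            )
        rx = re.compile(re.escape(literal))
        return sorted(n for n in candidates if rx.search(self.files[n]))


idx = TrigramIndex()
idx.add("a.py", "def instant_grep(query): ...")
idx.add("b.py", "print('hello world')")
print(idx.search("instant_grep"))  # → ['a.py']
```

The speedup comes from the intersection step: most files contain none of the query's trigrams, so the expensive regex only ever touches a handful of candidates instead of the whole repository.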
breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
From the launch post: "Ollama can now run prompts on a schedule in Claude Code. Stay on top of work by setting automated tasks or reminders." The commands shown are "ollama launch claude" followed by "/loop Give me the latest AI news every morning", with more examples in the thread.
More information on using Claude Code with Ollama: docs.ollama.com/integrations/c…