OpenAI GPT‑5.1 ships adaptive reasoning – 8 preset styles and 2 models cut prompt boilerplate

Executive Summary

OpenAI rolled GPT‑5.1 into ChatGPT, pairing a faster Instant model with a Thinking model that adapts how much it reasons per task. The update adds 8 preset styles plus warmth/emoji tuning, trims prompt boilerplate, and quietly moves GPT‑4.1 into Legacy.

API access lands this week via gpt‑5.1‑chat‑latest and gpt‑5.1, so you can start routing real work. Early traces show Thinking spending fewer tokens on 10th–30th percentile tasks and more at the 70th–90th, rather than a fixed chain‑of‑thought budget. Users report tighter instruction following (e.g., honoring style bans), less sycophancy than GPT‑4o, and stronger prose; community sleuthing ties OpenRouter’s “polaris‑alpha” to 5.1, which has been topping a Creative Writing v3 board.

Codex is picking up 5.1 as well, with a gpt‑5.1‑codex slug already merged—useful for agentic coding stacks that want one planning brain across IDE and CLI. If you’ve got questions on migration and personalization quirks, OpenAI’s AMA is set for 2 PM PT today; bring real prompts and watch token spend before you flip the switch.

Feature Spotlight

Feature: GPT‑5.1 rolls out with adaptive reasoning and new personas

OpenAI ships GPT‑5.1 (Instant/Thinking) with adaptive reasoning and built‑in personas; API this week, GPT‑4.1 sunsets. A noticeable quality/UX bump for product teams and agents without re‑prompting.

GPT‑5.1 (Instant/Thinking) rolls out in ChatGPT with adaptive thinking, better instruction following, and 8 preset styles plus warmth/emoji tuning. API access lands this week; GPT‑4.1 enters legacy. Creative writing quality is up.



Feature: GPT‑5.1 rolls out with adaptive reasoning and new personas


OpenAI rolls out GPT‑5.1 (Instant/Thinking) in ChatGPT with adaptive reasoning

GPT‑5.1 is rolling out to ChatGPT, adding a warmer Instant model and a Thinking model that adapts how much it "thinks" per task, follows instructions more closely, and aims for more natural conversation (Rollout note), backed by the official write‑up in the <u>OpenAI post</u>. OpenAI highlights better reasoning and chat quality, with availability beginning this week for paid tiers and expanding afterward (Release summary).

API this week: gpt‑5.1‑chat‑latest (Instant) and gpt‑5.1 (Thinking) arrive

OpenAI says API access lands later this week with model IDs gpt‑5.1‑chat‑latest (Instant) and gpt‑5.1 (Thinking), both with adaptive reasoning (API timing), as outlined in the <u>OpenAI blog post</u>. Codex users were told 5.1 support is coming once the models are live in the API (Codex note).
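The split into two model IDs invites simple request routing. A minimal sketch, assuming a keyword heuristic on the prompt; the model ID strings come from the announcement, but `route_model` and `REASONING_HINTS` are illustrative names, not part of any OpenAI SDK:

```python
# Hypothetical routing helper: pick a GPT-5.1 model ID by task type.
# The heuristic below is a placeholder; a production router would use
# latency budgets, task metadata, or a classifier instead of keywords.

INSTANT = "gpt-5.1-chat-latest"   # low-latency chat (Instant)
THINKING = "gpt-5.1"              # adaptive reasoning (Thinking)

# Assumed signal words for tasks likely to benefit from deeper reasoning.
REASONING_HINTS = ("prove", "debug", "plan", "analyze", "refactor")

def route_model(prompt: str) -> str:
    """Send likely-hard tasks to Thinking, everything else to Instant."""
    lowered = prompt.lower()
    if any(hint in lowered for hint in REASONING_HINTS):
        return THINKING
    return INSTANT

if __name__ == "__main__":
    print(route_model("Summarize this paragraph"))   # gpt-5.1-chat-latest
    print(route_model("Debug this race condition"))  # gpt-5.1
```

The returned ID would then be passed as the `model` parameter of whatever client call you already make; everything else in the request stays unchanged.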

ChatGPT adds 8 preset styles and experiments with warmth/emoji tuning

OpenAI surfaced eight base styles (Default, Professional, Candid, Quirky, Friendly, Efficient, Nerdy, Cynical) in Personalization, and is A/B‑testing sliders for warmth and emoji frequency (Styles list, Warmth experiment). This reduces prompt boilerplate and gives teams a consistent voice without custom system prompts.

GPT‑5.1 Thinking varies token spend: less on easy, more on hard tasks

OpenAI’s chart shows GPT‑5.1 Thinking using fewer tokens on 10th–30th percentile tasks and more at the 70th–90th percentile, indicating adaptive "thinking" time rather than a fixed CoT depth (Thinking chart), a plot also echoed by others (Chart repost).
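You can observe the same shape in your own traffic by logging completion‑token counts per request and summarizing them at those percentiles. A sketch using only the standard library; `token_percentiles` is an illustrative helper and the sample data is synthetic, standing in for `usage.completion_tokens` values you would log from real responses:

```python
# Summarize observed completion-token spend at the percentiles OpenAI's
# chart highlights (10th/30th/70th/90th). A widening gap between the low
# and high percentiles over time suggests adaptive reasoning at work.
import statistics

def token_percentiles(token_counts: list[int]) -> dict[int, float]:
    """Return the 10th/30th/70th/90th percentiles of token spend."""
    qs = statistics.quantiles(token_counts, n=100)  # 99 cut points
    return {p: qs[p - 1] for p in (10, 30, 70, 90)}

if __name__ == "__main__":
    observed = [120, 150, 180, 200, 240, 900, 1500, 2200, 4000, 6500]
    print(token_percentiles(observed))
```

Run against a day of logged requests before and after switching models, this gives a cheap before/after comparison of where the token budget actually goes.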

GPT‑4.1 moves under Legacy in ChatGPT as 5.1 becomes default path

Screens in ChatGPT show GPT‑4.1 being moved into a "Legacy" section, signaling gradual retirement as GPT‑5.1 rolls out (Legacy note). Another view shows the Legacy list in the model picker for continuity while migrations proceed (Legacy list).

Instruction following tightens: users report better style compliance in 5.1

Early tests show GPT‑5.1 obeying stricter style constraints, such as excluding em dashes when told to, suggesting crisper adherence to custom instructions (Style compliance). OpenAI also acknowledged these personalization behaviors in replies about the presets and tuning (OpenAI reply).

OpenAI Codex adds "gpt‑5.1‑codex" model definition; Codex to pick up 5.1

A merged PR in the Codex repo adds a "gpt‑5.1‑codex" test slug, signaling a 5.1‑based coding path coming to Codex (Repo commit), with the commit visible in the <u>GitHub PR</u>. A Codex team note also said you’ll be able to use 5.1 in Codex once the models hit the API (Codex note).

Polaris‑alpha on OpenRouter identified as GPT‑5.1; tops Creative Writing v3

Community posts assert OpenRouter’s "polaris‑alpha" is GPT‑5.1, with screenshots showing it at the top of the Creative Writing v3 Elo leaderboard (Mapping claim, Benchmark image). This aligns with broader sentiment that 5.1’s prose is more engaging, even before formal evals land.

Users observe less sycophancy vs GPT‑4o; tougher on bad ideas

Field tests show GPT‑5.1 pushing back more on poor suggestions that prior chat models would rubber‑stamp, reducing "agreeableness" failure modes in brainstorming (Sycophancy example); others echo that 5.1’s appeal is more about steerability than raw IQ (Customization focus).

OpenAI schedules Reddit AMA about GPT‑5.1 and customization (2 PM PT)

OpenAI is hosting a Reddit AMA on GPT‑5.1 and the new personalization controls, set for 2 PM PT (AMA timing). It is a useful venue to clarify migration timelines, safety routing, and tuning behavior across presets.


AI datacenters: $50B Anthropic build and Microsoft’s AI Superfactory

Infra/capex beat. Anthropic moves to owned data centers (TX/NY) with 2026 online dates; Microsoft details Fairwater 2 scale and 100k+ GB300s for inference.

Anthropic commits $50B to US AI data centers in Texas and New York

Anthropic says it will build its own AI infrastructure in Texas and New York, investing $50 billion and starting to bring sites online in 2026; the company cites thousands of jobs and higher power efficiency for frontier training and inference (announcement thread), with full details in the first‑party post (Anthropic blog). This is Anthropic’s first move to own data‑center capacity rather than relying only on partners, which matters for cost control, availability, and safety‑research velocity.

Microsoft showcases Fairwater 2 “AI Superfactory,” says 100k+ GB300s online this quarter

Microsoft gave an on‑camera tour of Fairwater 2, a new hyperscale AI datacenter design, and said more than 100,000 GB300 GPUs will come online this quarter to serve inference across the fleet (feature brief). The segment also covers business‑model implications, in‑house chips, and power/cooling topology, useful for teams planning capacity, routing inference, and negotiating reserved instances.

IEA: 2025 DC investment (~$580B) set to exceed new oil projects (~$540B)

Global data‑center spend this year is projected at ~$580B, topping ~$540B for new oil supply, with the IEA expecting data‑center electricity use to roughly double to ~945 TWh by 2030; supply chains for cables, transformers, turbines, and minerals remain tight (TechCrunch brief). This adds scale to earlier capex signals ($300B outlook) and frames why power strategy, heat reuse, and siting now belong on product roadmaps.

Morgan Stanley flags 44 GW US power shortfall for AI DCs by 2028

A Morgan Stanley note warns the US could be short ~44 GW of power capacity for AI data‑center demand by 2028, underscoring grid constraints and siting risk for large clusters (power shortfall note); the underlying report is summarized by Yahoo Finance. Some argue model efficiency will improve and blunt demand curves, but planners shouldn’t assume linear gains (efficiency comment).


