
Leonardo Blueprints reach roughly 53 templates, 75% off tokens, and Instant Animate debuts.
Executive Summary
Leonardo Blueprints moved past the demo playground and into real working pipelines today. A 75% token discount and roughly 53 ready-to-run templates, plus the new Instant Animate Blueprint that sets a still image in motion with no prompt, turn one-click previews into actual output volume.
The difference is scope and control. Preset flows pick the models, settings, and identity locks for you, so you can run 10+ consistent looks end to end without supervision. Creators are sharing uncurated results that hold subjects steady through relighting, outfit swaps, and multi-angle coverage, and build-your-own template creation and sharing, billed as “coming soon,” could turn Blueprints into a marketplace of mini-apps. If you have spent all day stuck tuning the same prompt, this is a cheaper on-ramp to standardized pipelines.
If you are planning motion alongside this, fal’s WAN 2.2 Animate is four times faster with predictable pricing of $0.08 per second at 720p, and its 8B InfinityStar delivers first frames in under 30 seconds at $0.07 per request. Pair Leonardo’s discount window with those flat-rate renders and you will cover more shots this week. Caffeine still required.
Feature Spotlight
Blueprint workflows hit critical mass
Leonardo Blueprints surge across creator feeds: 50+ ready-made workflows, 75% off tokens, “instant animate” works without prompts, and build‑your‑own pipelines teased—turning pro looks into one‑click starting points.
Blueprint workflows hit critical mass
Leonardo’s Blueprints dominated creator feeds today: multiple threads demo 10+ looks, claims of 50+ templates, 75% off tokens, and first “instant animate” results. New: widespread adoption posts and hints at build‑your‑own sharing.
Leonardo Blueprints surge: 75% off, ~53 templates live, build-your-own teased
Leonardo pushed Blueprints deeper into creator workflows with a limited-time 75% token discount and a wave of creator demos showing 10+ looks end to end Promo 75% off and Leonardo site. Following up on initial launch, creators shared a multi-style thread and noted there are roughly 53 Blueprints live today, with build‑your‑own and sharing “coming soon” Creator thread, 53 Blueprints note, and Thread details.
The catalog spans portraits, relighting, outfit suggestions, and multiview perspective changes, while holding identity across styles in tests Brutalist example and Multiview demo. If you’ve been waiting for a one‑click way to try consistent looks on the same subject, this is a good moment to test while costs are down.
Instant Animate Blueprint moves a still without prompts
Creators showed Leonardo’s “Instant Animate” Blueprint turning a still image into motion while preserving scene intent—no prompt needed Feature demo. It fits the emerging “mini‑apps” framing around Blueprints, where prebuilt flows handle model choice and settings for you Mini‑apps note. For motion tests and quick storyboard beats, this cuts setup time and lowers the floor for non‑prompt‑engineers.
Faster, cheaper video on fal
fal pushed tangible speed/price gains for video creators. New today: WAN 2.2 Animate boasts 4× faster inference, cleaner visuals, and fixed per‑second pricing; InfinityStar arrives with <30s inference for static+dynamic sequences. Excludes Blueprints feature.
fal’s WAN 2.2 Animate is 4× faster with $0.08/s 720p pricing
fal upgraded WAN 2.2 Animate with 4× faster inference, cleaner frames, and a clear per‑second price: $0.08/s for 720p (computed at 16 fps), with cheaper tiers at $0.06/s (580p) and $0.04/s (480p) Upgrade post, and Pricing page. Following up on Remade acquisition that aimed to speed infra, this is a concrete step creators can feel in render times.
Two endpoints are live—Replace and Move—both flagged for commercial use and ready to try now Model replace page, and Model move page. The per‑second model makes costs predictable for shot planning and batch runs Model pages.
This matters because iteration speed is workflow speed. Faster previews mean more takes, better motion choices, and tighter budgets when turning i2v drafts into client‑ready cuts.
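To make the per-second model concrete for shot planning, here is a minimal sketch that estimates spend across the three published tiers; the rates are the ones quoted above, while the shot lengths and take counts are hypothetical example inputs.

```python
# Rough cost estimator for fal's WAN 2.2 Animate per-second pricing.
# Rates are the published tiers quoted above; shot lengths and take counts
# below are hypothetical example inputs, not anything from the announcement.

RATES_PER_SECOND = {"720p": 0.08, "580p": 0.06, "480p": 0.04}

def animate_cost(seconds: float, resolution: str = "720p", takes: int = 1) -> float:
    """Cost in USD for one shot rendered `takes` times at the given tier."""
    return RATES_PER_SECOND[resolution] * seconds * takes

# Example: a 3-shot previz pass, 4 takes each at 720p, plus a cheap 480p scratch pass.
shots = [6.0, 8.0, 4.5]  # seconds per shot (made-up)
previz = sum(animate_cost(s, "720p", takes=4) for s in shots)
scratch = sum(animate_cost(s, "480p", takes=1) for s in shots)
print(f"720p previz: ${previz:.2f}, 480p scratch pass: ${scratch:.2f}")
# 720p previz: $5.92, 480p scratch pass: $0.74
```

Swapping the tier or take count in one place makes it easy to budget a batch run before committing to the higher-resolution renders.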
fal launches InfinityStar: 8B text‑to‑video with <30s inference
fal unveiled InfinityStar, an 8B unified spacetime autoregressive model that generates both static visuals and dynamic sequences, with inference under 30 seconds for first frames Model drop. A hosted playground is live and priced at $0.07 per request for text‑to‑video, marked for commercial use Playground page.
The pitch is fast concept‑to‑motion for storyboards, mood films, and previz. It complements higher‑fidelity pipelines by getting narrative beats on screen quickly, then handing off to heavier models as needed Model page.
Identity control, try‑on, and full‑body recasting
Creators got multiple ways to lock identity across shots: Higgsfield Recast (full‑body swap + voice + backgrounds), EVTAR virtual try‑on, Qwen multi‑angle edits, and Vidu Q1 reference‑to‑image. Fresh examples and app links today. Excludes Blueprints feature.
Qwen‑Image‑Edit 2509 adds multi‑angle generation, pose transforms, built‑in ControlNet
Alibaba’s Qwen‑Image‑Edit 2509 update brings multi‑angle generation from a single photo, subject rotation to any view, multi‑image blending, enhanced pose transformation, scene swaps, and a built‑in ControlNet for precise control—useful for coverage across shots from one portrait feature explainer, blog post. Following up on camera control (LoRA‑based rotation), this is a broader November build aimed at identity‑safe angle changes and scene composition.
Filmmakers can now sketch full shotlists from one still: profile, three‑quarter, overhead, and low angle shots that keep lighting plausibility and facial identity intact.
EVTAR open-sources end‑to‑end virtual try‑on with unpaired references
Qihoo 360 released EVTAR, an end‑to‑end virtual try‑on model that transfers garments onto a person’s photo while using additional unpaired reference images to preserve material and design details paper page, with weights and usage on Hugging Face model card. It supports both inpainting and direct garment transfer, and reports state‑of‑the‑art quality on public benchmarks.
For fashion creators, this means more convincing outfit previews without manual segmentation or dense pose maps. The repo ships LoRA weights at 512×384 and 1024×768, plus a straightforward Conda setup Hugging Face model.
Runware ships Vidu Q1 Reference‑to‑Image: up to 7 refs at $0.055/image
Runware rolled out Vidu Q1 Reference‑to‑Image for consistent props, scenes, characters and style, letting you upload up to seven reference images; pricing starts at $0.055 per image via the API feature brief. For art directors and storyboarders, this is a fast route to on‑model alternates without retraining.
Apob AI pairs virtual try‑on with AI influencer videos for fashion
Apob AI is pitching a combo workflow: virtual try‑on plus AI influencer generation to turn fashion ideas into lifelike promo videos and shoppable content launch promo, with a product hub you can search and test today product page, Apob website. For small brands, this can stand up consistent fit visuals and an on‑brand face without a studio day.
Node canvases become the default UX
Momentum for node‑based creation continued: Krea Nodes teased, Freepik Spaces shows real shared pipelines, and Runway’s node system tutorials trend. New today: practical Spaces mockup tutorials and creators urging adoption. Excludes Blueprints feature.
Freepik Spaces pushes node-based, real-time creative workflows into teams
Freepik is actively positioning Spaces as a shared, node‑based canvas for AI creation with guides and a "Ready to try it" push. Teams get an infinite canvas, reusable workflows, and live collaboration; free users can spin up as many as three Spaces to start feature brief, with details on nodes, templates, and permissions in the product overview Spaces product page. A creator callout underscores momentum: Spaces is a “huge upgrade” for collaborative creative pipelines creator endorsement.
Why it matters: node canvases are becoming the default UX for mixed‑model projects. Spaces lowers the activation energy for designers and PMs who need shared pipelines that others can run, tweak, and version without leaving the canvas.
Creators say “everything is moving to nodes,” citing Runway, Freepik, Krea
A widely shared take argues that major AI tools are converging on node systems—calling out Kling, Runway, Freepik, and Krea—and urges creatives to get comfortable with node workflows now opinion thread. The post links a full YouTube walkthrough on Runway’s node interface for people switching from linear prompts to modular graphs YouTube walkthrough. A separate endorsement frames Spaces as a “huge upgrade” for collaborative pipelines, reinforcing the shift creator endorsement.
This is a continuation of a trend we covered with Runway’s workflow editor Runway Workflows: today’s update is about adoption, not features. The signal is clear—teams are standardizing on canvases where prompts, assets, and model calls sit as nodes you can rewire quickly.
Hands-on Spaces tutorial shows a 3-node mockup pipeline from prompt to result
A practical thread walks through building a Spaces workflow: Upload artwork → Image Generator (NanoBanana) → a single prompt that composes a living‑room frame mockup, then run step-by-step. The author splits steps across posts for clarity—upload node upload node, connect an Image Generator node generator node, and add the scene prompt before execution prompt step.
So what? This shows why node canvases stick: creatives can package repeatable “client-ready” mockups as run‑again graphs that teammates can clone, review, and ship without rebuilding prompts every time.
ComfyUI adds trajectory control resource for WAN ATI inside node graphs
ComfyUI highlighted "Trajectory Control in ComfyUI – WAN ATI," giving video creators more precise motion path editing within a node graph workflow resource. It’s another example of high‑leverage control landing first in node tools—keyframes and paths become reusable subgraphs instead of one‑off prompts.
For filmmakers and motion designers, this means motion logic sits alongside style and character nodes, making revisions faster and safer when clients ask for new camera moves or pacing tweaks.
Post‑gen camera edits and trajectory control
Cinematography control tightened: Veo 3.1 gained in‑app camera adjustment (position/orbit) and ComfyUI showcased WAN ATI trajectory control for path‑precise moves. Mostly tool UX updates with links. Excludes Blueprints feature.
Flow adds post‑gen Camera Adjustment for Veo 3.1 videos
Flow by Google quietly enabled an experimental Camera Adjustment tool for Veo 3.1 clips: tap the pencil, then tweak camera position and orbit on an already generated video feature announcement. HBCoop’s walkthrough shows it sitting behind the Edit button, so you can nudge framing without a full re‑generation, following up on in‑flow camera moves tests that previewed this behavior usage tip.
CamCloneMaster brings reference‑based camera control to I2V/V2V
KwaiVGI’s CamCloneMaster proposes cloning camera moves from a reference video to your generated shot, unifying image‑to‑video and video‑to‑video control and shipping a new Camera Clone Dataset alongside the paper paper brief ArXiv paper. Creators can already explore the released model and assets on Hugging Face, which is handy if you want repeatable dolly/pan arcs across variations model card Hugging Face model, with an additional project overview here paper page paper page.
ComfyUI shares WAN ATI trajectory control for path‑precise motion
ComfyUI highlighted a WAN ATI setup that lets you define motion paths inside a node graph, giving editors precise trajectory control after generation inside the workflow itself tool post. This is useful for tightening camera travel and subject motion beats without rebuilding the full prompt chain.
Assistants that actually help creatives
Hands‑on reports show assistants leveling up: Perplexity Comet’s 23% reliability bump validated on a LinkedIn task run; NotebookLM’s video overview shines with nano‑banana; Google AI Studio adds prompt-in-URL for instant setup. Excludes Blueprints feature.
Comet handles a full LinkedIn follow→find→comment workflow in one shot
Perplexity’s Comet Assistant successfully executed a multi‑step LinkedIn task—following the company CEO, locating a specific post, and posting a comment—without intervention, validating its recent reliability claims usage test. This comes after Comet upgrade noted a 23% internal performance bump and permissioned browsing; the field test reports more human‑like navigation and clear step reasoning.
For creatives, this means a practical agent you can trust with repetitive social tasks (outreach, posting, sourcing). It’s not perfect yet, but this is a real workflow, not a toy demo.
NotebookLM auto‑composes a 7‑minute video overview from a single prompt
A creator calls NotebookLM Google’s best creative product after using the Video Overview feature—with nano‑banana integration—to generate a polished 7‑minute video (images, text, and voiceover) from one well‑defined prompt feature praise. It works when you steer it precisely; prompt quality still matters.
If you storyboard from research or briefs, this collapses your first cut from hours to minutes. Treat it like a script assistant: lock structure in the prompt, then iterate visuals.
Google AI Studio adds ?prompt= to pre‑seed new chats via URL
Google AI Studio now accepts a prompt in the URL query string, opening a fresh chat pre‑seeded with your instructions (e.g., ?prompt=Gemini 3.0) feature note. It’s a small change that speeds shareable setups for teams and lets docs, templates, or UI buttons deep‑link into ready‑to‑run assistants.
Use this to standardize creative assistants across a team—ship a link that boots the right model and starting brief, then iterate in chat.
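If you generate those links from docs or an internal tool, a minimal sketch for URL-encoding the prompt follows; only the ?prompt= parameter is confirmed by the post, and the base URL below is an assumption you should verify against your own workspace.

```python
from urllib.parse import quote

# Build a shareable Google AI Studio link that pre-seeds a new chat.
# Only the ?prompt= query parameter comes from the post; the base URL is an
# assumption, so check the path your own workspace actually uses.
BASE_URL = "https://aistudio.google.com/"

def studio_link(prompt: str) -> str:
    return f"{BASE_URL}?prompt={quote(prompt)}"

brief = ("You are our storyboard assistant. Keep shots to 16:9 and "
         "cite the style bible section for every look you propose.")
print(studio_link(brief))
```

A link like this can sit behind a button in a team wiki so everyone opens the assistant with the same starting brief.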
Research to watch: memory, agents, and camera logic
A dense day of papers impacting creative AI: Google’s Nested Learning (Hope) for long‑context memory, V‑Thinker for interactive visual reasoning, GUI‑360 for computer‑using agents, SAIL‑RL dual rewards, DreamGym synthetic RL, Nemotron V2 VL, and CamCloneMaster camera cloning.
Google’s Nested Learning debuts “Hope” model for human‑like continual memory
Google Research proposes Nested Learning, introducing continuum memory systems that update at different rates to cut catastrophic forgetting; the proof‑of‑concept model “Hope” reports lower perplexity and higher reasoning accuracy than standard transformers paper summary.
For creators, stronger long‑context retention means brief references across a script, storyboard, or style bible are less likely to be “forgotten” mid‑generation, improving narrative and visual consistency without prompt crutches.
CamCloneMaster clones camera motion from references for I2V and V2V
CamCloneMaster proposes reference‑based camera control that learns to replicate camera moves without explicit parameters, unified for image‑to‑video and video‑to‑video tasks; the team also ships a large synthetic Camera Clone Dataset and shows better control/quality in user studies paper page, with model resources available on Hugging Face Model card.
This matters for filmmakers: you can copy a dolly‑crane pan from a reference clip instead of hand‑typing camera curves.
DreamGym scales agent learning via synthetic rollouts; >30% better on hard tasks
“Scaling Agent Learning via Experience Synthesis” (DreamGym) trains with synthesized environment dynamics and a replay buffer seeded from offline data, then adapts by creating new tasks to challenge the policy; results show >30% gains on non‑RL‑ready tasks like WebArena while matching PPO/GRPO with only synthetic interactions paper thread, with technical details in the paper ArXiv paper.
For production assistants that browse, post, or operate UIs, this hints at faster iteration without costly real‑world runs.
GUI‑360 dataset lands with 1.2M+ executed steps for computer‑using agents
A Microsoft‑led team releases GUI‑360: a large dataset/benchmark of >1.2M executed action steps across Windows office apps, bundling screenshots, accessibility metadata, goals, and both success/failure trajectories to train/evaluate computer‑using agents paper page, with the paper and benchmark details linked here ArXiv paper.
For creative teams, this accelerates reliable desktop agents that can operate editors, file trees, and render queues on your behalf with fewer “misclicks.”
SAIL‑RL: Dual‑reward tuning teaches when and how MLLMs should think
SAIL‑RL introduces a “thinking” reward (grounding, coherence, self‑consistency) plus a “judging” reward that decides if deep reasoning is even needed, improving SAIL‑VL2 (4B/8B) and reducing hallucinations versus outcome‑only training paper page, with the methodology in the paper ArXiv paper.
The point is: fewer over‑explained but wrong answers, and more crisp, accurate calls for edits, color keys, or timing notes.
V‑Thinker teaches image‑interactive reasoning with RL and unveils VTBench
V‑Thinker frames “image‑interactive thinking,” training LMMs to focus on regions, sketch, and reason through a two‑stage RL curriculum; the authors release VTBench and report gains over prior LMM baselines on interactive vision tasks paper page, and share full details in the arXiv write‑up ArXiv paper.
This points to better step‑by‑step visual planning for scene blocking, shot lists, and layout crits inside one model loop rather than juggling separate tools.
NVIDIA Nemotron Nano V2 VL details hybrid Mamba‑Transformer for docs/video
Nemotron Nano V2 VL targets long‑document and video understanding with token‑reduction for higher throughput, and releases checkpoints in BF16/FP8/FP4 alongside datasets and recipes for reproducibility paper page, with the write‑up on performance and training assets here ArXiv paper.
This architecture is relevant to script‑plus‑storyboard comprehension, letting small models parse longer briefs with fewer tokens.
Thinking with Video: video generation as a reasoning medium
The “Thinking with Video” paper argues that temporal generation helps LLMs reason better than static text or images alone; the authors introduce VideoThinkBench and report strong results (e.g., Sora‑2 performing well on vision tasks and hitting up to 92% on MATH, 75.53% on MMMU in their setup) paper page.
If reliable, this nudges tools toward sketch‑to‑animatic‑to‑answer loops where motion clarifies cause and effect in a scene.
Veo 3.1 adds post‑gen camera adjustment in Flow’s editor
Flow by Google surfaces a new “Camera Adjustment” in the editor—tap the pencil to orbit/position the camera on a generated clip feature brief, with a second demo showing the Edit flow entry point how to use, following up on Camera control tests that showed creators stress‑testing Veo‑3.1’s camera tools.
This improves fix‑it passes on shots when the motion reads but the angle doesn’t—no prompt rewrite needed.
Feeds, labels, and where your videos live
Distribution rules shifted: Meta’s Vibes AI video feed expands to Europe and adds invisible watermarks; TikTok’s study shows high AI optimism with low full adoption; Sora’s Android debut topped ~470k day‑one installs. Excludes Blueprints feature.
Meta rolls out Vibes AI video feed across Europe
Meta expanded Vibes—an AI‑generated short‑video feed—across Europe inside the Meta AI app, enabling prompt‑to‑clip creation, music/image layers, remixing, and sharing to Stories/Reels Vibes Europe rollout. Meta adds that AI content generation inside Meta AI has grown more than 10× since launch, giving creators a fresh distribution rail plugged into a massive social graph Vibes Europe rollout.
Meta adds invisible watermarking for AI videos on Facebook and Instagram
Meta introduced an invisible, edit‑resilient watermark for AI‑made or AI‑edited videos that encodes who posted it, whether AI was used, and which model/platform contributed—without changing video quality Watermark overview. For brands and creators, this improves provenance and credit while preserving creative latitude.
Sora for Android hits ~470k day‑one installs, 4× the iOS launch
OpenAI’s Sora Android debut reached about 470,000 installs on day one, over 4× the iOS launch, after removing invites and expanding to seven regions including Japan, South Korea, Taiwan, and Thailand Launch stats. Strong early traction signals a rapidly growing pool of mobile AI video creators and distribution potential.
TikTok × NewtonX: 90% expect AI to drive growth, but only 19% fully integrated
TikTok and NewtonX report that 90% of advertisers expect AI automation to drive future growth, 93% of executives see productivity gains, yet only 19% of companies have fully integrated AI due to privacy, skills, and rapid change barriers Survey highlights TikTok report. Creatives should expect warmer budgets but still‑fragmented workflows and a premium on proven, privacy‑safe pipelines.
Business moves that affect creatives
Regional access and enterprise momentum: Anthropic opens Paris/Munich amid 9× EMEA revenue growth; in India, Gemini Pro’s Jio bundle undercuts ChatGPT Go with 18‑month perks and 2TB storage. Also: Kimi K2’s $4.6M training cost shared.
Google India and Jio pitch 18‑month Gemini Pro bundle vs ChatGPT Go’s 12‑month free
In India, Google and Jio are advertising an 18‑month Gemini Pro bundle they value at ₹35,100, topping OpenAI’s 12‑month free ChatGPT Go offer pegged at ₹4,788. The table highlights perks like video generation, AI in Gmail and other Google apps, Search grounding for real‑time info, 2 TB storage, and unlimited chats/image uploads Offer comparison.
For independent creators, small studios, and students, the math is simple: more storage, more integrations, and a longer free runway. If you’re producing daily videos, heavy image batches, or collaborating inside Workspace, this can compress your TCO for the next year.
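To put numbers on that, a back-of-the-envelope sketch using only the figures quoted above; the INR-to-USD rate is an illustrative assumption, not part of either offer.

```python
# Back-of-the-envelope value comparison using the figures quoted above.
# The INR -> USD rate is an illustrative assumption, not part of the offers.
gemini_value_inr, gemini_months = 35_100, 18
chatgpt_value_inr, chatgpt_months = 4_788, 12
inr_per_usd = 84.0  # rough rate for context only

for name, value, months in [("Gemini Pro bundle", gemini_value_inr, gemini_months),
                            ("ChatGPT Go offer", chatgpt_value_inr, chatgpt_months)]:
    per_month = value / months
    print(f"{name}: ₹{per_month:,.0f}/month (~${per_month / inr_per_usd:.1f}/month) over {months} months")
# Gemini Pro bundle: ₹1,950/month (~$23.2/month) over 18 months
# ChatGPT Go offer: ₹399/month (~$4.7/month) over 12 months
```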
What to test now:
- Route drafts and reviews through Gmail/Drive to see if grounding reduces hallucinated facts.
- Stress video generation for time‑to‑first‑frame and subject consistency on typical briefs.
- Compare export and storage workflows against your current stack before committing.
Anthropic opens Paris and Munich as EMEA posts 9× revenue, 10× more large accounts
Anthropic is expanding in Europe with new offices in Paris and Munich after saying EMEA is now its fastest‑growing region. The company cites 9× run‑rate revenue growth and a 10× increase in large business accounts over the past year, with enterprises like L’Oréal, BMW, SAP, and Sanofi already on board Expansion brief.
For creative teams, this signals more local support for Claude rollouts, procurement, and security reviews. It also suggests more regional enablement for media, retail, and pharma content pipelines. That matters if your studio needs data‑residency assurances or enterprise features to green‑light AI inside production.
Deployment impact: A deeper European footprint usually shortens legal and IT due diligence cycles. It can also unlock joint programs with incumbents (systems integrators, cloud providers) who already serve large media and design accounts. Expect tailored onboarding for French and German enterprises next.
Who should care: Agency network leads, in‑house creative ops, and EU‑based film/game studios evaluating assistant tooling or content safety guardrails.
Report pegs Kimi K2 Thinking’s training cost at ~$4.6M amid China model race
Moonshot’s latest Kimi update reportedly cost about $4.6 million to train, according to a graphic circulating alongside coverage of China’s accelerating model releases Cost claim. The headline is the number. It frames what competitive capability can be built on a mid‑single‑digit million budget.
This lands after K2 Thinking posted strong agent/tool and browsing scores—see K2 metrics for prior evals—which makes the cost context more meaningful for studios and startups budgeting their own model fine‑tunes or vertical assistants.
Why it matters: If a top‑tier agentic model iteration can be trained for ~$4.6M, more regional players and vertical tools can credibly target “good enough” reasoning at a fraction of frontier budgets. For creatives, that likely means faster competition on price‑per‑token and more specialized assistants for video, design, and music workflows over the next quarters.
Prompt packs and srefs creators shared today
A strong day for reusable looks: fashion runway prompt recipe, MJ v7 sref collage, neo‑noir graphic style sref, cfryant’s cinematic flat‑lay low‑angle shot, and bri_guy’s Freakbag style tokens. Mostly image‑gen recipes for quick wins. Excludes Blueprints feature.
Fashion runway prompt format with swatches and sketch notes
Azed shares a reusable fashion runway prompt that bakes in subject inspiration, two color swatches, dramatic lighting, exaggerated pose, and margin sketch notes—great for editorial sheets and moodboards Prompt recipe.
It’s a compact template you can drop into any image model to get consistent runway plates with callouts and palette blocks. Works well for fast concept passes before video or animation.
Freakbag Round 2: five reusable srefs for surreal monster looks
Bri_guy_ai returns with Round 2 of Freakbag—five style tokens shared as srefs with samples, plus a link back to last week’s set for more variety Set intro. New drops include 1060271589 133792642 Freakbag 5, along with earlier posts bundling Freakbag 1–4 and a recap thread Freakbag 1 Freakbag 2 Freakbag 3 Freakbag 4 Collection recap.
These tokens are quick wins for stylized creatures, fashion‑horror plates, or album art—great for batch explorations before locking a look.
Midjourney V7 sref + params pack for crisp editorial looks
Following up on MJ V7 sref, today’s share adds a fresh sref (751524126) with a tight V7 param combo: --chaos 18 --ar 3:4 --exp 15 --sw 500 --stylize 500—delivering clean, punchy portraits and product shots Param set.
If you need a dependable baseline look for character sheets or hero objects, this setup is a solid starting point you can layer onto with light or texture tweaks.
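If you reuse the pack across several subjects, a small helper that appends the shared sref and flags to any base prompt keeps batches consistent; the flag values below are exactly the ones posted, and the subject text is a placeholder.

```python
# Convenience helper: append the shared Midjourney V7 sref + params to any prompt.
# The sref and flag values are exactly the ones posted; the subject text is a
# placeholder. Add --v 7 if your account default is not already V7.
V7_PACK = "--sref 751524126 --chaos 18 --ar 3:4 --exp 15 --sw 500 --stylize 500"

def with_pack(subject: str, pack: str = V7_PACK) -> str:
    return f"{subject.strip()} {pack}"

print(with_pack("studio portrait of a ceramicist at her wheel, soft rim light"))
```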
Extreme low‑angle + ceiling flat‑lay prompt to build tension
Cfryant shares a cinematic still recipe: extreme low angle on a worried subject, ceiling as a flat‑lay of objects (knives, clocks, goblets, etc.) with a dialed param set — --ar 16:9 --p pmijtlr --c 12 --no blur,dof --s 0 --q 4 --v 7.0 Prompt details.
It’s a reliable framing trick for thriller beats and title cards. Swap the object set to shift genre without changing composition.
Neo‑noir graphic style sref for dark comic realism
Artedeingenio drops an MJ style reference (--sref 3660316281) that nails 80s/90s mature‑comics grit: heavy ink, cross‑hatch shadows, and expressionist lighting—the brief calls it Neo‑Noir Graphic Style / Dark Comic Realism Style reference.
Use this when you want illustrated frames that read like crime or horror covers, then iterate poses and props to build a cohesive visual short.
Thread‑texture sref for woven, tactile aesthetics
A separate sref share (--sref 2186594613) focuses on richly woven thread textures that read like layered embroidery across portraits, landscapes, and objects Texture sref.
Handy when you want a crafted, tactile feel without leaving the digital pipeline—use it to push moodboards, packaging comps, or title sequences toward a textile motif.
Release watch and agentic scorecards
Rumors and evals: GPT‑5.1 Pro “imminent” claims and a Nov 24 date tease; Kimi K2 Thinking tops a τ²‑Bench agentic task at 93%; community polls who leads among top labs. Excludes Blueprints feature.
GPT‑5.1 Pro looks imminent as code refs surface; Nov 24 teased
New strings referencing gpt‑5.1‑pro access and a “reasoning” variant showed up in ChatGPT workspace code, and one watcher floated Nov 24 as the likely drop code reference, date tease.
Some are also spotting a “Polaris Alpha” alias on OpenRouter, which would fit pre‑deployment staging OpenRouter rumor. If you own evals, lock a baseline now so you can measure before/after within hours of release.
Kimi K2 Thinking tops τ²‑Bench agentic tool use at 93%
Moonshot’s K2 Thinking posted 93% on τ²‑Bench (Telecom, agentic tool use), edging out GPT‑5 Codex variants in the shared chart benchmarks claim.
Following up on K2 scores where K2 led HLE and BrowseComp, this strengthens its claim on tool‑heavy workflows. The team pegs K2’s training spend around $4.6M, underscoring efficiency relative to frontier labs training cost slide. If your pipeline leans on browsing and actions, schedule a head‑to‑head this week.
Creators debate current model leader; note Anthropic hasn’t open‑sourced
A community pulse asked who’s ahead right now across OpenAI, Google, xAI, Moonshot and more, with a follow‑up noting that everyone but Anthropic has open‑sourced at least one model line community question, open source note.
Why this matters for builders: open weights change how you prototype on‑prem and wire LoRAs into nodes. Keep a shortlist of open contenders to slot into your weekend tests.
Contests and community spotlights
Opportunities and wins: Hailuo’s Horror Film Contest (Nov 7–30) with 20k credits prize and clear steps, OpenArt MVA 9‑day reminder, and fal × BRIA FIBO Halloween winner announcement. Excludes Blueprints feature.
Hailuo Horror Film Contest opens Nov 7–30 with 20,000 credits prize
Hailuo launched a horror short film contest running Nov 7–30 with a top award of 20,000 credits (equivalent to a free Max plan) for creators using its SOTA video model. Entries must be posted on TikTok, X, Instagram, or YouTube with #HailuoHorror and @Hailuo_AI, then submitted via the official portal. See dates and prize in Dates and prize, entry steps in How to enter, and the submission page in Contest portal.
fal × BRIA FIBO Halloween winner named; $500 in credits awarded
fal announced u/Brandin_elm as the FIBO Halloween Competition winner, earning $500 in credits, and shared sample visuals from the entry for the r/fal community. The post serves as a community spotlight and a nudge to try fal’s hosted media models. See the announcement in Winner post.
OpenArt MVA countdown: 9 days left to submit AI music videos
OpenArt reminded creators there are 9 days left to submit to its Music Video Awards, which spotlight AI‑assisted videos themed around emotions and carry judging plus artist shout‑outs. This follows Times Square promo where entries were showcased publicly; full program details and rules are on the site in Program page. See the reminder in Deadline reminder and a featured entry highlight in Creator spotlight.