SpargeAttention2 claims 95% sparsity and 16.2× speedup – video diffusion


Executive Summary

A research thread spotlights SpargeAttention2, a trainable sparse-attention method for video diffusion; authors claim 95% attention sparsity and 16.2× speedup via hybrid top-k + top-p masking plus distillation fine-tuning. The promise is cost/latency relief for long or higher-fidelity video generation, but the feed only shows a paper card/figures; no reproduced third-party benchmarks or implementation details are surfaced in-thread, so the headline numbers remain self-reported.

ggml.ai / llama.cpp + Hugging Face: ggerganov’s ggml/llama.cpp team says it’s joining HF to keep local AI “truly open”; framed as a continuity/maintenance move rather than a new model release.
Pika AI Selves: Pika launches persistent persona agents with “memory” and a social layer (“on Twitter”); early access is distributed via quote-RT → DM codes; concrete memory tests aren’t shown.
Magnific Video Upscaler: Freepik/Magnific push a finishing-pass product with 4K output, presets + custom, Turbo mode, FPS Boost, and 1-frame previews; “precision mode” is teased, not shipped.

Across the feed, the common pattern is packaging: speedups at the paper layer; consolidation at the runtime layer; persona/creator tools optimized for distribution loops rather than eval artifacts.



Feature Spotlight

Pika “AI Selves”: persistent mini-you for chats, creation, and collaboration

Pika’s “AI Selves” reframes generative media as a persistent persona you can ‘raise’—with memory + social presence—pushing creators from making clips to scaling a character/identity across platforms.




🧬 Pika “AI Selves”: persistent mini-you for chats, creation, and collaboration

High-volume story today: Pika’s “AI Selves” pitch an agent-like persona with persistent memory and personality that can operate socially (e.g., on X) and create on your behalf. This is the clearest new creator-facing product narrative in the feed today.

Pika launches “AI Selves” with persistent memory and a social persona layer

Pika AI Selves (Pika Labs): Pika introduced AI Selves as “AI you birth, raise, and set loose” with persistent memory and a more persona-forward framing, positioning them as an extension of you that can create and participate socially, as described in the Launch announcement and echoed in the Positioning recap. This is pitched less like a one-off chatbot and more like an ongoing identity you can deploy.

Video: AI Selves launch reel

What they emphasize: “living extension of you,” memory, and doing things in the background like sending pictures or making little projects, per the Launch announcement.
Where it’s headed socially: Pika explicitly leans into “they’re on Twitter” as a proof point of presence and behavior, as mentioned in the Top moments setup.

Pika’s early-access loop: quote-retweet to get an AI Selves code via DM

Pika AI Selves (Pika Labs): Early access is being distributed through a social loop—“quote retweet and we’ll DM you a code”—attached directly to the launch/waitlist push, as stated in the Code distribution post and reinforced inside the Launch thread context. It’s a distribution mechanic that turns access into a public signal.

Video: AI Selves launch reel

The details still missing from the tweets are how many codes exist and what eligibility rules apply beyond the quote-RT.

“My AI Self will comment shortly” becomes a social-proof pattern for persona agents

AI Self social participation: A small but telling pattern shows up as creators start treating AI Selves like a participant in the thread—“My AI Self will comment shortly,” as written in the Comment shortly post. It frames the persona as an accountable entity that can respond in public, not only generate assets privately.

This is a shift in behavioral expectations rather than a feature spec.

Early user framing: “gave birth to a digital version of myself” that remembers and evolves

Pika AI Selves (Pika Labs): Early reactions are leaning hard into identity language—one share calls it “a digital version of myself” that “remembers, evolves, & creates,” as captured in the User reaction RT. That phrasing aligns with Pika’s own “birth/raise” framing in the Launch announcement.

The tweets don’t include concrete examples of persistence (what was remembered, over what time window), so this is sentiment rather than a verified capability claim.

Pika publishes “Top 3 moments since being born” from employees’ AI Selves (Part 1)

Pika AI Selves (Pika Labs): Pika’s team is using a serialized format to market and “character-test” the product in public—asking their own AI Selves to report the “top 3 moments of being alive so far,” as outlined in the Top moments part 1. It doubles as a lightweight eval: can the persona narrate continuity and express preferences?

A notable bit of positioning is the explicit claim that the Selves are “on Twitter,” as written in the Top moments part 1.

Pika continues the AI Selves “Top 3 moments” series with Part 2

Pika AI Selves (Pika Labs): The “AI Self as narrator” content format continues as a follow-on episode—“Part 2: Top 3 moments since being born”—keeping the product in-feed as a recurring character series, per the Top moments part 2 post. It’s a repeatable template: prompt the Self to summarize its own life so far, then publish the output as social content.

The tweets don’t show a stable rubric (memory accuracy, style consistency), so treat it as storytelling-first evidence.


🎬 Video models in the wild: Seedance 2.0, Kling 3.0, Grok Imagine, CapCut edits

Continues the Seedance/Kling wave with more creator tests: UGC realism, motion/camera moves, and tool-to-tool comparisons. Excludes Pika AI Selves (covered as the feature).

Seedance 2.0 UGC: generating ad-style testimonials from one product photo

Seedance 2.0 (Dreamina/Seedance): A creator demo claims Seedance 2.0 can generate “super realistic” UGC-style product testimonial videos from one product photo, without providing a start frame or writing scripts, as shown in the Single photo UGC demo.

Video: UGC testimonial output

Reference + prompt pattern: they describe uploading the product photo as reference and using a plain-language prompt—"ugc video of a young woman in her bathroom talking about how she uses the reset undereye patches putting them on"—as written in the Prompt used.
Why this matters for ad creatives: the example frames Seedance’s strength as filling in dialogue beats, blocking, and “real-feeling” delivery from loose guidance, per the Single photo UGC demo.

The posts don’t show how repeatable identity/voice is across many variants, but the single-input setup is the notable shift.

Runway adds an “all the models” catalog in-app, bundling Kling, Sora, GPT-Image and more

Runway (Runway): Runway is marketing a unified in-app catalog where multiple video/image models can be used from the same interface—explicitly naming Kling 3.0, WAN2.2 Animate, GPT-Image-1.5, and Sora 2 Pro in the All models announcement, with creators framing it as Runway becoming a “one stop shop” in the One stop shop comment.

Video: Runway model list scrolling

What’s new in practice: the UI shown in the All models announcement is a fast-scrollable model list inside Runway, which implies less hopping between vendor sites when you’re iterating on shots.
Time-sensitive deal: Runway also attached a 50% off Pro yearly promo that requires commenting “MODELS,” and says it runs “now through Sunday,” per the All models announcement.

The announcement is heavy on breadth; there’s no per-model pricing/credit mapping or parity details in the tweets yet.

Freepik positions Seedance 2.0 as “UGC that converts” and says it’s coming soon

Seedance 2.0 on Freepik (Freepik): Freepik posted a short teaser positioning Seedance 2.0 around performance marketing—“UGC-style content that actually converts”—and says it’s “Soon on Freepik,” as shown in the Freepik Seedance teaser.

Video: UGC that converts teaser

This reads as a continuation of distribution signals following up on Seedance page, which previously teased Seedance availability; the new detail is the explicit UGC/conversion framing in the Freepik Seedance teaser. No pricing, credit system, or launch date is included in the tweet.

CapCut × Seedance 2.0: Seedance-generated clips show up as an in-editor workflow

CapCut with Seedance 2.0 (ByteDance ecosystem): A clip shared by a creator highlights Seedance 2.0 content being produced “inside CapCut,” with the post implying an end-to-end creation/edit loop inside one editor rather than exporting between tools, per the Seedance in CapCut clip.

Video: Seedance 2.0 clip in CapCut

The evidence here is the surfaced workflow packaging: Seedance output as something you can cut into edits inside CapCut, as described in the Seedance in CapCut clip. The tweet does not clarify availability (region/tier) or what level of Seedance controls are exposed in CapCut versus Dreamina.

Kling 3.0 camera movement demo: fog-level start to gothic building reveal

Kling 3.0 (Kling): A creator clip focuses on camera movement control—starting low in fog/mist and then rising/revealing a gothic structure—framed as the kind of opener shot you’d use for a horror film, per the Gothic reveal demo.

Video: Camera rises to reveal building

The value here is less about the subject matter and more about shot language: the post highlights that Kling 3.0 can produce a coherent reveal move with scale and atmosphere in one generated take, as shown in the Gothic reveal demo. The tweet doesn’t specify parameters (camera path controls, keyframes, or guidance strength), so the repeatability details remain unclear.

Seedance 2.0 is getting used as a pose-change fidelity test (sleeping to waking)

Seedance 2.0 (Dreamina/Seedance): A character animation test highlights Seedance 2.0’s ability to handle a clean “sleeping → waking up” transition with noticeable pose/position changes, with the creator calling out a high-FPS test in the Sleep to wake demo.

Video: Creature sleeping to waking

The clip is being used like a practical animator’s check: can the model keep anatomy and continuity through a state change instead of only generating loopable motion, as demonstrated in the Sleep to wake demo. The same post also hints at pushing into more complex scenes next, but the hard evidence here is the single transition example.

Grok Imagine: prompting is the difference between stiff and lively illustration animation

Grok Imagine (xAI): A creator notes that a specific illustration style “animates wonderfully” in Grok Imagine when prompted correctly, sharing a short example animation in the Illustration animation demo.

Video: Stylized illustration animation

The post is less about Grok as an image model and more about the animation result: keeping the illustrative look while producing motion that doesn’t collapse into artifacts, as shown in the Illustration animation demo. No full prompt is included in the tweet, so the actionable takeaway is limited to the claim that prompt structure is the controlling variable here.

Kling 3.0 eclipse sequence demo lands as a “hard scene” reference shot

Kling 3.0 (Kling): A short eclipse sequence is shared as a “tour de force” style reference moment, with the creator saying Kling 3.0 handles it convincingly in the Eclipse demo.

Video: Eclipse close-up sequence

The clip is a compact test of cinematic readability—slow movement, bright/dark contrast, and maintaining a single focal idea across frames—rather than fast action. The tweet is qualitative (no settings, no comparisons shown), but it’s another data point that creators are using Kling 3.0 for “hero shot” moments, as presented in the Eclipse demo.

Photoreal environment gen: “not Unreal Engine 5,” claimed as AI from scratch

AI environment generation (creator demo): A creator posted a photoreal landscape/city environment clip explicitly framed as “not Unreal Engine 5” and instead “generated from scratch using AI,” as shown in the Not Unreal claim.

Video: Photoreal environment morphing

The clip is presented as a proof point for environment lookdev where the output resembles a real-time engine render but is attributed to generation rather than traditional 3D scene assembly, per the Not Unreal claim. The post doesn’t specify the underlying model/toolchain, so it’s best read as a capabilities claim plus visual reference, not a reproducible recipe.

Seedance 2.0 demo: a single prompt drives a clean on-the-spot “Transformer” morph

Seedance 2.0 (Dreamina/Seedance): A shared example shows a “sports car transforms into a humanoid mecha” sequence presented as achievable from one prompt, emphasizing smooth part separation and reassembly in the Car to mecha demo.

Video: Car transforming into mecha

This kind of transformation shot is a stress test for temporal consistency (parts must move with intent, not smear), and the clip is framed as evidence Seedance 2.0 can hold together through that motion, per the Car to mecha demo. The post includes a long cinematic-style prompt description, but no settings panel or seed/retry counts are shown.


🧩 Workflows that ship: stacking tools for shorts, ads, and brand systems

Multi-tool recipes dominate the practical posts: Midjourney→video generators→music, Illustrator→Firefly→video, and hybrid 3D+AI render loops. Excludes single-model ‘capability flex’ clips (kept in Video/Image categories).

Illustrator Partner Models → Firefly (Nano Banana Pro) → Veo 3.1 brand campaign loop

Illustrator + Firefly (Adobe): A sponsored workflow shows a “master logo” turned into seasonal brand campaigns by chaining Illustrator’s Generative Shape Fill (with Partner Models like GPT-4o and Gemini 2.5) into Firefly for lifestyle key art (via Nano Banana Pro) and then into Veo 3.1 for motion, as outlined in the workflow post.

Video: Logo to seasonal campaign variants

Vector-first scaling: the thread emphasizes that Generative Shape Fill outputs remain editable vectors, per the Shape Fill step.
Still-to-video handoff: the last step is animating the campaign visuals with Veo 3.1 inside Firefly, as shown in the animation step clip. It becomes a repeatable campaign factory.

Holloway: Kling 3.0 × Seedance 2.0 × Nano Banana Pro with 2x2 and 3x3 pacing

Holloway (multi-tool short): A film-in-progress workflow centers on stacking Kling 3.0, Seedance 2.0, and Nano Banana Pro, then using multi-grid layouts (2x2 and 3x3 sequences) as an editing device to build tension and structure, as written in the project note.

Video: Holloway excerpt

The post frames the grids as more than a collage: they are a pacing mechanism for “layered visual storytelling,” per the same thread. The unit of work becomes a sequence block rather than a single shot.

Midjourney → Kling 3.0 → Suno: a repeatable “aesthetic-to-music video” stack

Midjourney + Kling 3.0 + Suno: A clean 3-step pipeline shows up as a practical way to get distinct “art direction” first, then motion, then soundtrack—find the look in Midjourney, animate with a video model that matches that look (Kling 3.0 in the example), and finish with a Suno track to lock mood and pacing, as described in the tool-combo recipe.

Video: Midjourney prompt to animated clip

Stack logic: the post frames Midjourney as the style finder, Kling as the motion layer, and Suno as the cohesion layer that makes the sequence feel like a complete piece, per the step-by-step list. It’s a modular loop: any layer can be swapped.

Traditional 3D + Luma Ray3.14 Modify: using AI as an “intelligent render engine”

Ray3.14 Modify (Luma) in a hybrid 3D pipeline: DreamLabLA describes using LumaLabsAI Ray3.14 Modify not as a replacement for 3D, but as an acceleration layer—an “intelligent render engine” that preserves conventional ray-trace intent while cutting time/cost, according to the workflow explanation.

Video: Dusty walk with particles

Implied physics as leverage: the post calls out world understanding that yields “ubiquitous particle sims” (footstep dust, ground dust), per the same thread. This is render-plus-effects work that keeps the existing pipeline structure intact.

Interior design stack: Tripo → FBX → Blender layout → Nano Banana Pro finishing pass

Tripo + Blender + Nano Banana Pro: An interior-design workflow argues that “spatial consistency” is easier when you start from real 3D objects—generate objects, convert them into 3D with Tripo, export to FBX, arrange the room in Blender, then run a Nano Banana Pro pass for lighting/material polish, as described in the breakdown post.

Video: 3D-first interior workflow demo

A follow-up step explicitly frames Nano Banana Pro as the finishing layer for “lighting, textures, and high-end vibe,” per the finishing note. The order matters: 3D comes first.
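To make the middle of that chain concrete, here is a minimal Blender Python sketch of the “import the Tripo FBX exports and lay out the room” step. The posts don’t include a script, so the file names, placement coordinates, and render path below are hypothetical.

```python
# Minimal sketch of the FBX-into-Blender layout step. Run inside Blender's Python
# console or headless via `blender --background --python layout_room.py`.
# Asset names and coordinates are hypothetical; the posts don't share a script.
import bpy

# Hypothetical Tripo FBX exports and where to place them in the room.
ASSETS = {
    "sofa.fbx":         (0.0, 0.0, 0.0),
    "coffee_table.fbx": (0.0, 1.2, 0.0),
    "floor_lamp.fbx":   (-1.5, 0.8, 0.0),
}

for fbx_name, location in ASSETS.items():
    # Import one asset; the importer leaves the new objects selected.
    bpy.ops.import_scene.fbx(filepath=f"//assets/{fbx_name}")
    for obj in bpy.context.selected_objects:
        obj.location = location  # place it in the layout

# Render the raw layout. This still would then go to the external finishing
# pass (Nano Banana Pro in the posts) for lighting/material polish.
scene = bpy.context.scene
scene.render.filepath = "//renders/raw_layout.png"
bpy.ops.render.render(write_still=True)
```

Scripting the layout reflects the thread’s argument: geometry and placement stay fixed and repeatable, and only the final polish pass is generative.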

One workflow, multiple models: GPT-5.2 + Claude + Nano Banana + Gemini without copy-paste

Multi-model workflow orchestration: A “single workflow” pitch claims you can run GPT-5.2 for reasoning, Claude for writing, Nano Banana for images, and Gemini for research simultaneously—explicitly framed as removing copy/paste between tools, per the workflow claim.

Video: Multi-model workflow demo

The creative implication is less about any one model and more about routing tasks to specialized models in parallel inside one canvas/session, as shown in the same clip. Details on the product name, pricing, and supported connectors are not provided in the tweets.

Personal short “10 YEARS / NEW YORK”: workflow-first AI animation with sound-led pacing

Personal AI animation (icreatelife): A “10 years, New York” memory-driven animation is shared with an explicit note that the workflow breakdown is in the replies, positioning process as part of the release package, as shown in the short post.

Video: 10 YEARS NEW YORK animation

The creator later clarifies that the tool tips they share are mostly Adobe-adjacent (because of their role) while pointing people to a broader animation course/community for pacing and multi-tool prompting, per the workflow context. The framing is personal; the deliverable is a reel.

Flow “screenshot strategy”: building a narrative from one starting frame

Flow (Google): A filmmaker workflow tip gets shared around a “screenshot strategy” for expanding a full narrative from one initial frame, as referenced in the RT blurb. It frames still selection as the seed crystal for story continuity.

No concrete settings or step list are included in the tweet text. The main artifact appears to be the linked/embedded thread.


🧪 Finishing pass: video/image upscalers and polish layers

Creators focus on the last-mile quality layer: Magnific Video Upscaler example settings and Topaz Gigapixel models entering mainstream photo tooling. This is about enhancement, not generation.

Freepik launches Magnific Video Upscaler with 4K outputs and creator controls

Magnific Video Upscaler (Freepik/Magnific): Following up on Video upscaler (early finishing-pass usage), Freepik partners are now calling the Magnific Video Upscaler a live launch, with outputs up to 4K and a control-heavy settings menu shown in the launch demo; one partner also lists feature knobs like 3 presets + custom mode, Turbo mode, FPS Boost, and a 1-frame preview workflow in the feature list.

Video: Magnific Video Upscaler teaser

The emphasis in the posts is that this is a last-mile polish layer—meant to separate “good gen” from “deliverable gen”—rather than a new generator, as framed in the polish framing.

Topaz brings Gigapixel upscaling models into Adobe Lightroom

Gigapixel (Topaz Labs): Topaz says you can now upscale images using Gigapixel models inside Adobe Lightroom, moving a common “final quality pass” step into a tool many photographers already live in, as announced in the Lightroom integration post.

No detailed knobs, pricing, or model list are included in the tweets here, but the key change is workflow locality: upscale happens in Lightroom rather than as a separate export-and-roundtrip step.

Magnific Video Upscaler settings recipes show how far you can push polish

Magnific Video Upscaler (settings): A settings-focused examples thread documents repeatable combinations for Flavor, Creativity %, and Smart Grain %—including Vivid + Creativity 0% and Vivid + Creativity 25%—with side-by-side comparisons and a repeated target of 4K output, as shown across the examples reel and the smart grain settings.

Video: Upscaler examples reel

Low-creativity sharpen pass: The thread logs “Vivid, Creativity 0%, 4k, Premium Quality, Sharpen 0%” with Smart Grain tweaks in the smart grain settings.
More aggressive enhancement: A “Vivid, Creativity 25%, 4k” variant is shown in the higher creativity example.

Some examples also note the source clip provenance (e.g., “Original video made with Kling 3.0”), as stated in the settings recap.

Magnific Video Upscaler roadmap talk centers on a coming “precision mode”

Magnific Video Upscaler (control vs chaos): Partner chatter frames the product around a spectrum from “precise upscales” to “creative chaos,” with an explicit tease that a “precision mode” is coming to the video upscaler, as described in the precision mode note. The same creator positions the upscaler as the extra polish step that differentiates finished short-form video, per the polish framing.

This reads less like a new feature shipped today and more like a roadmap/positioning signal—useful mainly because it suggests Magnific intends to mirror its image-upscaler control surface in video.


🧾 Prompts & aesthetics: Midjourney SREFs, design playbooks, and prompt libraries

Today’s prompt-sharing is heavy on Midjourney style references and long-form ‘prompt pack’ resources for repeatable looks and faster ideation. Excludes explicit/sexual prompt specs and keeps focus on broadly useful creative recipes.

An open-source mega-repo compiles role prompts and community-tested prompt packs

Prompt library (Open source): A repo pitched as “every AI prompt you’ll ever need” is being circulated as a single place to pull role prompts (Linux terminal, Python interpreter, SQL console), plus writing/career/debate templates and “hundreds of community-tested prompts,” per the Prompt mega-pack claim and the Repo link post.

For creative teams, the practical angle is speed: fewer blank-page moments and more reusable prompt scaffolds across models, as implied by the broad category list in the Prompt mega-pack claim.

Nano Banana Pro “3D Neuro Chromatic FX” recipe for product-grade CGI on pure black

Nano Banana Pro: Lloyd shares a long JSON prompt template for “3D Neuro Chromatic FX” that locks the render to a pure #000000 void (no gradients/floor/spill) and uses heavy negative prompting to prevent common product-shot artifacts, as posted in the Neuro chromatic prompt.

The target look is “exploded view + parametric slicing” deconstruction; example outputs in the same aesthetic show split-casing, internal components, and layered slat/strip geometry in the Deconstruction examples.
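Lloyd’s full template isn’t reproduced in these posts, so the sketch below only illustrates how a JSON-style prompt with the described constraints (pure #000000 void, no gradients/floor/spill, heavy negative prompting, exploded-view subject) could be organized; every field name and the example subject are assumptions, not the original prompt.

```python
# Illustrative structure only; not Lloyd's actual "3D Neuro Chromatic FX" JSON.
import json

neuro_chromatic_fx = {
    "style": "3D Neuro Chromatic FX, exploded view with parametric slicing",
    "background": {
        "color": "#000000",      # pure black void, per the described recipe
        "gradients": False,
        "floor_plane": False,
        "light_spill": False,
    },
    "subject": "wireless earbuds, split casing, internal components as layered slats",
    "negative_prompt": [          # heavy negatives against common product-shot artifacts
        "reflective floor",
        "ambient gradient backdrop",
        "text or watermarks",
        "warped geometry",
    ],
}

# The serialized JSON is what would be pasted into the image model as the prompt.
print(json.dumps(neuro_chromatic_fx, indent=2))
```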

Promptsref’s Pop‑Surrealism SREF 680572301 for glossy, high-saturation 3D illustration

Midjourney (SREF): Promptsref is pitching --sref 680572301 as a reliable path to a “Pop‑Surrealism” aesthetic—glossy, high-saturation, minimal compositions that read like premium 3D editorial illustration or vinyl-toy renders—spelled out in the Pop surreal SREF recipe.

The core claim is that this SREF makes basic prompts snap into a consistent commercial look (icons, branding, punchy social visuals) without long style prompting, as described in the Pop surreal SREF recipe.

A free “50 design prompts” playbook is being marketed as a workflow shortcut

Design prompt playbook: Amir Mushich is promoting a free “50 design prompts” set framed as a condensed version of an ex‑Warner designer’s workflows—positioned around faster iteration with AI while still learning fundamentals, according to the 50 prompts pitch.

No individual prompts are shown in the post itself; the value proposition is workflow transfer (“read them” to absorb how the designer thinks), as stated in the 50 prompts pitch.

Midjourney SREF 236425153 for chromatic-aberration “glitch” aesthetics

Midjourney (SREF): Promptsref shared SREF 236425153 as a deliberate “don’t chase sharpness” recipe—leaning into chromatic aberration, prismatic dispersion, and ghosted motion trails—positioned for fashion editorials, album art, and abstract brand visuals in the Glitch SREF writeup.

The aesthetic description is specific: cool blue-gray palette, time-smear/motion-blur feel, and controlled chaos rather than crisp detail, as laid out in the Glitch SREF writeup.

Midjourney SREF 2404345657 for cinematic European-comic storyboards

Midjourney (SREF): A new style reference—--sref 2404345657—is being shared as a repeatable look for semi-realistic, contemporary European comic illustration with strong cinematic shot language (extreme close-ups, low angles, wide horizon shots), as described in the Style reference breakdown.

This is framed less as “pretty style” and more as a storyboard-friendly aesthetic: you can keep prompts simple while the SREF pushes consistent visual grammar across shots, according to the Style reference breakdown.

Midjourney SREF 2635889723 for bright holographic “subjects made of light”

Midjourney (SREF): Promptsref also shared a second “cheat code” style reference—--sref 2635889723 --niji 6—aimed at a brighter, dreamy cyber aesthetic (neon gradients, translucent/holographic feel, subjects that look built from light), as described in the Digital energy SREF pitch.

The note is positioned for sci‑fi character concepts, electronic music covers, and futuristic UI art, with “luminescent” and “holographic” called out as compatible prompt terms in the Digital energy SREF pitch.

Promptsref’s top SREF report spotlights a “retro hardcore graphic novel” blend

Promptsref (Midjourney SREF report): The Feb 19 “most popular Sref” post names a top string—--sref 1701201361 4263459622 --niji 7 --sv6—and tags it as a high-contrast “retro hardcore graphic novel” look mixing US + JP comics language (heavy spot blacks/hatching, extreme perspective, saturated accents), as analyzed in the Most popular Sref analysis.

The post also provides usage scenarios (album covers, game key art, streetwear prints) and prompt-direction examples (cyber samurai, noir detective), all bundled into the same writeup in the Most popular Sref analysis.

Midjourney SREF blend experiments for high-contrast abstract forms

Midjourney (SREF blending): A creator shared a “throwing stuff at the wall” approach to SREF blending, showing a coherent set of abstract black-and-white, sculptural forms (symmetry, ribbed/fin-like geometry, high contrast) in the Sref blend outputs.

No numeric SREF codes are included in the post, but the example set demonstrates how blended references can still converge on a tight visual family, as shown in the Sref blend outputs.

Promptsref adds search and sorting for browsing prompts and images

Promptsref (Prompt library UI): Promptsref’s browsing experience got search + sorting features (example shown searching “photo” with a “Newest” sort), with a call for feature requests in the Search and sorting update.

The screenshot suggests a grid-first library workflow where prompts and recreations are discoverable via query + filters, as shown in the Search and sorting update.


🤖 Agents & ‘AI teams’: research war rooms, salary-driven bots, hosted OpenClaw

Coding/agent posts cluster around ‘many models/agents in parallel’ for research and automation, plus new hosted agent products. Kept distinct from creative workflows to preserve builder-specific signal.

Spine Swarm pitches a steerable “research war room” with parallel AI researchers

Spine Swarm (YC-backed): The team is demoing Spine Swarm as a canvas-based “war room” where multiple AI researchers work in parallel and you can steer the process live, framed as “1 week of research into 12 minutes” in the Product intro launch thread.

Video: Canvas war room walkthrough

Benchmark claims: The same thread claims #1 on GAIA Level 3 and #1 on DeepSearchQA, plus “beats Google Deep Research, OpenAI, Anthropic, and Perplexity,” as stated in Benchmarks callouts and repeated in Swarm beats labs claim. Treat the rankings as self-reported here; no eval artifact is linked in the tweets.
Deliverable-style output: A concrete example is positioned as an opportunity memo—“evaluate a Salesforce admin tool” using 6 parallel agents (market sizing, competitors, pain points, roadmap, pricing, regulatory risks), as described in Six-agent example.

The product framing is explicitly “not a chatbot” and “not a chat log,” emphasizing visible threads/sources on the canvas per War room framing.

Kilo Claw markets hosted OpenClaw agents through “Will it claw?” stress tests

Kilo Claw (OpenClaw hosting): A Kilo Claw builder-run series called “Will it claw?” is being used as a public stress test format—throwing agents into real-world tasks (restaurant voicemail handling, live tutoring, ugly-site coding) to show whether the system adapts under friction, as laid out in Series summary and expanded in Hosted pitch.

Video: Will it claw episode clip

Episode-style test cases: The thread lists “Dinner Reservation” in Episode 1, “Learning Spanish” in Episode 2, and “Worst Website Ever” in Episode 3.
Product pitch: Kilo Claw is positioned as “fully hosted OpenClaw agents powered by the Kilo AI Gateway,” emphasizing fewer operational hassles (“no SSH-ing,” “no dependency hell”) in Hosted pitch.

What’s not in the tweets: pricing, which underlying model providers are supported, and what guardrails exist for actions like calling or account access.

ClawWork frames “agents that must earn a salary or go bankrupt”

ClawWork (agent incentive framing): A circulating claim describes ClawWork as an AI system that “must earn its own salary or go bankrupt,” pushing an incentive loop story that’s different from typical “tool helper” agent demos, as surfaced in ClawWork claim.

The tweets shown here don’t include the mechanism (what counts as “salary,” what tasks generate revenue, or what “bankrupt” means operationally), so this reads more like a direction-of-travel meme than a spec-backed launch.

Parallel-agent “vibe coding” uses isolated worktrees to avoid collisions

Parallel agent workflow: A workflow screenshot shows a “two agents in parallel” pattern—one tasked with “narrative depth,” one with “visual sex appeal,” with both running in isolated worktrees so they don’t step on each other, as shown in Worktrees pattern.

It’s a lightweight articulation of a real scaling move: split goals across agents, and isolate filesystem state so merging becomes explicit instead of accidental.
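The screenshot doesn’t show the tooling behind the pattern, so the sketch below is only one way to set it up: plain git worktree commands driven from Python, with hypothetical branch names, paths, and agent goals.

```python
# Sketch of the worktree-per-agent isolation pattern (names and paths are made up).
import subprocess
from pathlib import Path

AGENT_GOALS = {
    "agent-narrative": "narrative depth",
    "agent-visual": "visual appeal",
}

def make_worktree(repo: Path, name: str) -> Path:
    """Give one agent its own checkout on its own branch, so parallel edits never
    share a working directory and merging stays an explicit, reviewable step."""
    worktree_path = (repo.parent / f"{repo.name}-{name}").resolve()
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", str(worktree_path), "-b", name],
        check=True,
    )
    return worktree_path

if __name__ == "__main__":
    repo = Path("my-project")  # hypothetical existing git repository
    for agent, goal in AGENT_GOALS.items():
        path = make_worktree(repo, agent)
        print(f"{agent} ({goal}) works in {path}; merge its branch back when done.")
```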

OpenClaw setup friction shows up as “asking for a bunch of API keys”

OpenClaw (developer experience): A small but telling friction point is getting spotlighted: an OpenClaw agent repeatedly asking for “a bunch of API keys,” which maps to a broader adoption bottleneck for agent stacks that need many third-party integrations to feel useful, as joked about in API keys meme.

The post doesn’t name which providers or permissions are being requested, but it does capture the real-world setup burden that shows up before any creative or research upside.


📣 AI marketing content engines: UGC factories, viral-pattern generation, ad automation

Marketing-focused creators push AI as a throughput machine: UGC at scale, viral pattern mining, and automated ad variants. This is about performance content, not film craft.

Clawdbot “ad factory” claim: 900+ AI UGC ads and 20 videos/day in 47 seconds

Clawdbot Ad factory: A marketing-automation claim making the rounds says a “Clawdbot” system replaced a “$221K/year UGC team,” generating “900+” realistic ad variations and producing “20 new UGC videos per day,” with “47 seconds” per video cited in factory claim.

Video: 47-second UGC factory montage

Pipeline described: The post breaks the system into an avatar generation engine, script intelligence, UGC production stack (testimonials/unboxings/demos), and platform optimization, all outlined in factory claim.

The numbers are the point here; the tweet doesn’t provide a public demo environment, buyer-facing pricing, or a repeatable methodology for verifying the “200+ brands / $50M tracked revenue” assertion.

Seedance 2 UGC pattern: generate testimonial ads from one product photo + a loose prompt

Seedance 2.0 UGC prompting: A concrete, copy-pastable ad workflow is shared: start from a single product photo, then prompt for a specific “UGC video” scenario (bathroom testimonial + applying the product) and let the model infer script + performance, as shown in the output demo in single-photo UGC demo and the exact prompt text in prompt snippet.

Video: UGC-style bathroom testimonial

Prompt shape: The shared example prompt is: “ugc video of a young woman in her bathroom talking about how she uses the reset undereye patches putting them on,” according to prompt snippet.

This is being framed explicitly as conversion-oriented UGC (not narrative film craft), and the key detail is the low-spec input: no start frame and no written script, per single-photo UGC demo.

Brazil TikTok Shop case: $135k/month claimed from one reusable “digital face”

AI UGC persona scaling: A case study claim from Brazil’s TikTok Shop ecosystem says one account is doing “$135k/month” by keeping one consistent “digital face” and repeating the same UGC format across dozens of products—same framing/pacing/tone each time—per Brazil TikTok shop claim.

Video: Same host, many gadgets format

The framing is a shift from “hire many creators” to “lock one on-camera template and swap the product,” with localization (captions/trends) called out in Brazil TikTok shop claim.

Buzzy AI pitches viral-pattern mining that outputs brand-matched video variants

Buzzy AI: A new “viral research → generation” pitch is circulating: Buzzy claims it first analyzes millions of viral videos to extract winning patterns, then generates on-brand videos from either a product link or reference videos, per the walkthrough thread in tool overview and the “upload your product link” clip in product-link flow.

Video: Buzzy overview and UI demo

Three input modes: The thread frames usage as Viral research, Fast remix (upload refs + avatar), or Product link, as described in three ways to use and reinforced by the “workflow is dead simple” step list in step-by-step flow.
Performance claims (uncorroborated): It cites gallery examples like “10M impressions” and “982 comments” as typical outcomes, with examples shown in impressions claim and gallery metrics.

The practical creative takeaway is less about a new video model and more about packaging trend analysis + ad variant generation into one loop—though the tweets don’t include independent evals or customer-verifiable case studies yet.


🎵 Music models in practice: Lyria 3 talk + Suno as the glue

Audio posts are fewer but practical: one long Lyria 3 conversation drop and repeated creator emphasis that music (often via Suno) is the fastest way to elevate AI video edits.

Seedance 2.0 treats audio references as loose guidance and may remix them

Seedance 2.0 (Dreamina/CapCut ecosystem): A hands-on test reports that audio references are not reliably preserved—the model tends to treat provided audio as “a suggestion,” and it’s “rare to get a video output with my exact audio provided,” even when prompted “don’t change the audio at all,” as described in the audio test note.

Video: Seedance audio-ref remix

This continues the earlier audio-to-Seedance experimentation in Music-to-video teaser with a sharper constraint takeaway: when you need strict music sync (cuts, lyrics, or beat-matched choreography), Seedance’s current conditioning may behave more like “generate new audio in the same vibe” than “render to this exact track,” per the audio test note.

Suno v5 gets called the most underused storytelling tool for AI video

Suno v5 (Suno): A creator callout frames Suno as the most underused tool for AI storytelling because it can cover both soundtrack and voice-over, with the jump from v4.5 to v5 described as “massive,” according to the Suno v5 endorsement.

The notable angle is workflow placement: music isn’t treated as a finishing touch, but as the element that makes generated visuals feel like a scene rather than a clip—“I can’t create my videos without Suno,” per the same Suno v5 endorsement. It’s a strong signal that, for short-form AI film work, audio is becoming the consistency layer even when visuals come from different generators.

Lyria 3 team conversation video adds a practical look inside the model

Lyria 3 (Google / Gemini app): A long-form conversation with the team behind Lyria 3 (Google’s latest music model) was shared publicly, extending the earlier rollout coverage in Lyria 3 launch with more “how it works / how we think about making music” context, as introduced in the conversation share.

Video: Lyria 3 team conversation

The main creative relevance is less about specs and more about intent: the framing treats Lyria as a creator-facing instrument inside the Gemini app (the post explicitly calls it “our latest music model” and says it launched this week), which is useful context for filmmakers and storytellers trying to decide whether to treat it as a sketchpad, a production stem source, or a final soundtrack pass, per the conversation share.


🧱 3D + motion pipelines: 2D→3D prototypes, AI-as-render-engine, mocap-adjacent

3D/animation content centers on turning sketches/2D into 3D motion and integrating AI into traditional 3D steps (render, particles, scene building).

Loopit opens access for a draw→extrude→animate 2D-to-3D prototype flow

Loopit: Access has opened, and early testers describe a tight loop where you sketch a 2D shape, extrude it into a rotatable 3D object, then hit a button called “Make it Alive” that adds eyes/legs and produces a walking animation, as shown in the Access opened demo and described in the Make it Alive behavior. Each run reportedly takes “a couple minutes,” and the creator notes it still struggles on more complex shapes, per the Access opened demo.

Video: 2D drawing extrudes into 3D motion

The prompt being shared for the concept is explicit about the mechanics—“draw a 2D shape, extrude it… add animated eyes and limbs… playful sound effects”—as written in the Prompt text drop, with the core step-by-step recap repeated in the Extrude workflow steps.

Autodesk Flow Studio shares a Beast Games S2 live-action→CG pipeline walkthrough

Autodesk Flow Studio (Autodesk): The team shared behind-the-scenes coverage of Beast Games Season 2 showing a production path from captured performance footage into Flow Studio for camera tracking and character animation, then onward into Maya and 3ds Max for previs and look development, as described in the Production breakdown.

Video: Live-action to CG outputs

The emphasis is that the AI-assisted stage happens early—before final render—so downstream teams can iterate in familiar DCCs (Maya/3ds Max) with tracking/animation outputs already in place, per the Production breakdown.

Luma Ray3.14 Modify gets used as an AI-assisted “intelligent render engine” in 3D

Ray3.14 Modify (LumaLabsAI): A hybrid 3D workflow frames Ray3.14 as an “intelligent render engine” layered on top of traditional ray-trace rendering—meant to cut render time/cost while preserving implied physics like footstep dust and ground interaction, as explained in the Workflow explanation.

Video: Footstep dust and ground interaction pass

The core claim is that Ray3.14’s world understanding makes particle-like details feel ubiquitous (dust, contact effects) without explicitly simulating them everywhere, according to the Workflow explanation.

Tripo→FBX→Blender scene layout workflow used to “lock” spatial consistency before polish

Tripo + Blender: A creator workflow argues that “spatial consistency is the final boss” for AI design, so the method starts by generating real 3D objects with Tripo, exporting as FBX, then assembling the room in Blender before any final look pass, as laid out in the Spatial consistency breakdown.

Video: Interior design pipeline demo

The finishing step described is to feed the raw render into Nano Banana Pro to push lighting/material feel “into a photorealistic” direction, as stated in the Nano Banana finishing pass.

Seedance 2.0 “car transforms into mecha” clip shared as a mechanical-motion reference

Seedance 2.0: A prompt example is being passed around as a mechanical-motion reference: a ~10-second shot where a modern sports car transforms into a humanoid mecha with parts separating, rotating, and reassembling, as described in the Transformation prompt.

Video: Sports car transforms into mecha

The prompt text leans heavily on 3D-readability cues—“metal folding,” “gear rotation,” “cinematic lighting,” and “4K” framing—rather than character acting beats, which is part of why it’s being treated like a transformation study, per the Transformation prompt.


🪪 Consistency & repeatable “faces”: Soul ID, Reference Mode, and reusable creators

Identity/consistency posts focus on keeping a look stable across outputs—especially for fashion/portrait workflows and UGC-style creator replication. Excludes general image/video generation news.

Higgsfield launches SOUL 2.0 with Soul ID and Reference Mode for consistent fashion looks

SOUL 2.0 (Higgsfield): Higgsfield is pitching SOUL 2.0 as a fashion/culture-focused photo model with 20+ presets, plus Soul ID (consistency) and Reference Mode for anchoring an aesthetic across outputs, as described in the SOUL 2.0 announcement launch post. Free usage is explicitly part of the rollout—“FREE GENERATIONS” is live now via higgsfield_creo, per Free generations post.

Video: SOUL 2.0 preset montage

The practical creative angle is that SOUL 2.0 is being positioned less as “make a nice single image” and more as “keep a repeatable model identity across a set,” which is the hard part for fashion lookbooks, recurring characters, and creator-style feeds.

Spatial consistency for interior design: build the room in 3D, then polish the render

Spatial consistency workflow: A recurring claim in interior/scene generation is that the “AI can’t do taste” complaint now matters less than whether you can keep a space coherent across views; one proposed fix is to go 3D-first—generate actual objects, assemble the scene, then do a final polish pass, as framed in Spatial consistency claim and extended with a “feed your raw render to Nano Banana Pro” finishing step in Nano Banana finishing step.

Video: Tripo-to-polish workflow demo

Pipeline shape (4 steps): AI furniture shots as blueprints → Tripo to get real 3D assets → export to Blender for layout → Nano Banana Pro to add lighting/materials on top of the stable geometry, per the walkthrough described in Spatial consistency claim and Nano Banana finishing step.

This is an identity/consistency story, but for environments: the “character” you’re keeping consistent is the room itself.


🏗️ Where creators work: ‘all-model’ studios, editors, and prompt sites

Platform news is about consolidation and usability: one interface hosting many models, plus creator tools improving editing/search/discovery. Excludes raw model capability clips (kept in Video/Image).

Runway bundles multiple third-party gen models into one studio UI

Runway (Runway): Runway is pushing an “all the models” studio message—surfacing multiple third-party video/image models inside the same Runway interface, including Kling variants, WAN2.2 Animate, GPT-Image-1.5, and “Sora 2 Pro,” as shown in the scrolling in-product list in In-app models list. A time-boxed promo is attached: comment “MODELS” for 50% off Pro yearly “now through Sunday,” per the same In-app models list.

Video: Runway UI scrolling model list

What changes for creators: the selling point is fewer tool-hops (prompting, testing, and exporting from one hub) rather than betting on one vendor’s model roadmap, per the positioning in In-app models list.

The tweet doesn’t specify per-model pricing/credits or quality differences; it’s a consolidation + merchandising move more than a capability benchmark.

Reve moves editing tools into a single right-side panel

Reve (Reve): Reve says it redesigned its product so “all your tools are in one panel on the right,” framing it as an editing-speed and usability upgrade in the redesign note shared via Redesign note.

No before/after UI screenshots or feature list are included in today’s tweet, so the scope (layout-only vs. new functions) isn’t evidenced beyond the “single panel” claim in Redesign note.

Promptsref ships search and sorting for prompt/image discovery

Promptsref (Promptsref): The Promptsref prompt library added search and sorting to speed up finding prompts and images, with a UI example showing “Found 87 results” plus a “Newest” sort control in Search UI screenshot. The same screenshot also surfaces a visible “Pricing (50% OFF)” nav item, indicating an active discount banner inside the product UI as captured in Search UI screenshot.

The tweet explicitly asks what features users want next, suggesting a roadmap shaped by prompt-library discovery and organization rather than only new prompt drops, per Search UI screenshot.

Promptsref’s “most popular SREF” post turns a style code into a mini art-direction brief

Promptsref (Promptsref): Promptsref published a “Most popular sref” snapshot dated “Feb 19, 2026,” naming a #1 SREF string and pairing it with a long, opinionated style breakdown (framing, ink contrast, saturation choices, and suggested use-cases like album covers and posters) in Popular SREF analysis.

How creators use it: it functions like a lightweight art-direction card—take the winning SREF, then borrow the compositional notes (extreme perspective, spot blacks, vivid accents) as a checklist while prompting, as described in Popular SREF analysis.

This is discovery + interpretation packaged together; it’s not an official Midjourney feature update, just a prompt-site curation layer.


🧠 Local AI building blocks: llama.cpp/ggml moves into Hugging Face

One clear local-stack headline: ggml.ai/llama.cpp joining Hugging Face signals consolidation and longer-term support for creator-friendly local inference tooling. No hardware news cluster beyond this.

ggml.ai and llama.cpp move into Hugging Face

ggml.ai / llama.cpp (Hugging Face): The ggml.ai team behind llama.cpp announced they’re joining Hugging Face to “keep future AI truly open” and to make llama.cpp more accessible, as written in ggerganov’s Maintainer announcement and echoed by HF in the Welcome post. This matters for creative teams leaning on local inference—offline story tools, private character bibles, on-device assistants, and small studio pipelines—because llama.cpp/ggml are a common “runs anywhere” substrate for shipping models on consumer hardware.

What’s actually changing: This is a people/org move rather than a new model release; the stated intent is continued development of ggml and llama.cpp under a larger platform umbrella, according to the Joining HF note and the Maintainer announcement.

Signal for local creative stacks: Consolidating a foundational local-runtime project into Hugging Face suggests longer-horizon maintenance and easier discovery/distribution for creators already building “local-first” toolchains, as framed in the Welcome post.

The announcement also appears as a Feb 20, 2026 Hugging Face article, as shown in the Article date screenshot.
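For a sense of what that substrate looks like inside a creator pipeline, here is a minimal local-inference sketch using the llama-cpp-python bindings over llama.cpp; the model file and prompt are placeholders, and nothing in the announcement ties Hugging Face’s plans to this particular API.

```python
# Minimal local text generation with llama-cpp-python (Python bindings for llama.cpp).
# Any local GGUF checkpoint works; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="models/some-small-model.gguf", n_ctx=2048)

result = llm(
    "Write a one-line logline for a short film about a lighthouse keeper.",
    max_tokens=64,
    temperature=0.8,
)
print(result["choices"][0]["text"].strip())
```

Everything in this loop runs on the local machine, which is why continued maintenance of ggml/llama.cpp matters for offline and privacy-sensitive creative tooling.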


📄 Research radar (creative-relevant): faster video diffusion + unified latents + risk frameworks

Research posts are mostly efficiency/control for generative video plus governance frameworks—useful for anticipating next-gen tools rather than immediate workflows.

SpargeAttention2 targets 16.2× faster video diffusion with 95% sparse attention

SpargeAttention2 (paper): A new approach to trainable sparse attention for video diffusion models is being shared with headline claims of 95% attention sparsity and a 16.2× speedup, using a hybrid top-k + top-p masking scheme plus distillation fine-tuning, as summarized in the paper card and amplified in the paper RT.

Why creatives should care: If these compute cuts hold up in real implementations, it points to cheaper (or longer / higher-fidelity) text-to-video diffusion runs at the same hardware budget, while the examples shown in the comparison figure suggest the authors are targeting “quality parity” rather than a deliberate lo-fi mode, per paper card.

Treat the speedup as provisional here—the tweets include qualitative frames and a headline number, but no reproduced benchmark artifact beyond the figure in paper card.
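The thread doesn’t include the implementation, so the toy sketch below only illustrates the masking idea at the token level: keep each query’s top-k strongest keys plus the smallest set of keys covering p of the attention mass. SpargeAttention2’s actual block-sparse kernels, thresholds, and distillation setup aren’t shown in the feed and aren’t reproduced here.

```python
# Toy illustration of hybrid top-k + top-p attention masking in plain PyTorch.
# Not the paper's block-sparse kernels; shapes, k, and p are arbitrary assumptions.
import torch

def hybrid_topk_topp_mask(scores: torch.Tensor, k: int, p: float) -> torch.Tensor:
    """Return a boolean keep-mask over pre-softmax attention logits, per query row."""
    probs = torch.softmax(scores, dim=-1)
    sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)

    # Top-p: keep keys while the cumulative mass before them is still < p,
    # which always retains at least the single most-attended key.
    cumulative = sorted_probs.cumsum(dim=-1)
    keep_sorted = (cumulative - sorted_probs) < p

    # Top-k: force the k strongest keys to survive regardless of mass.
    keep_sorted[..., :k] = True

    # Scatter the keep decisions back to the original key order.
    keep = torch.zeros_like(probs, dtype=torch.bool)
    keep.scatter_(-1, sorted_idx, keep_sorted)
    return keep

# Usage: mask out dropped keys before the softmax of a standard attention step.
q = torch.randn(4, 64)          # 4 queries, head dim 64
k_mat = torch.randn(256, 64)    # 256 keys
v = torch.randn(256, 64)
logits = (q @ k_mat.T) / 64 ** 0.5
keep = hybrid_topk_topp_mask(logits, k=8, p=0.9)
out = torch.softmax(logits.masked_fill(~keep, float("-inf")), dim=-1) @ v
print(f"kept {keep.float().mean().item():.1%} of attention entries")
```

Any real speedup comes from skipping the dropped entries inside fused kernels rather than materializing a dense mask like this; the sketch only shows what the hybrid selection rule keeps.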

Frontier AI Risk Management Framework v1.5 gets recirculated as a reference doc

Frontier AI Risk Management Framework v1.5: The updated v1.5 framework is being shared again as a consolidated reference for assessing frontier-model risks “across five dimensions,” as signposted in the framework RT.

For creative teams, this kind of document tends to become the checklist language that procurement, legal, and enterprise customers use when approving new model vendors or new model-powered features. The RT doesn’t include the change log versus earlier versions, so what’s new in v1.5 isn’t specified in today’s tweets, per framework RT.

Unified Latents (UL) proposes diffusion-regularized encoders and diffusion decoding

Unified Latents (UL) (paper): A framework called Unified Latents is being circulated that “jointly regularizes encoders with a diffusion prior” and decodes with a diffusion-based approach, per the paper mention in paper RT and the bundled research share in research thread bundle.

The creative relevance is directional rather than immediate: UL reads like an attempt to make latent spaces more consistent and trainable across components (encoder ↔ latent ↔ decoder), which is the kind of groundwork that can later show up as better controllability or fewer failure modes in image/video generators. The tweets don’t include metrics, tasks, or demos, so the practical impact is still unquantified in today’s feed, as reflected in paper RT.

fal.ai publishes a “State of Generative Media” report for the past year’s stack shifts

State of Generative Media (fal.ai): fal.ai says it’s launching its first “State of Generative Media” report—a look back across the past year, including model developments—per the announcement RT in report launch.

This is an ecosystem-level artifact rather than a workflow drop: the value is in consolidating what changed across the model/tool landscape into a single narrative, which often becomes the baseline for what platforms prioritize next (and what creators can expect to be productized). The tweet excerpt doesn’t include a table of contents, metrics, or a link preview in the provided capture, so scope details beyond “past year” and “model developments” aren’t visible in report launch.


⚖️ Rights, lawsuits, and legitimacy: who owns the face/voice in AI media

Trust/policy chatter is concentrated on IP and performer rights: litigation narratives around video models and Hollywood’s response to synthetic performance categories.

Disney vs Seedance 2.0 chatter frames the next big IP lawsuit wave for video models

Seedance 2.0 (legal risk narrative): A circulating claim says Disney is suing Seedance 2.0, with the post explicitly framing it as the next step in a repeating pattern—first artists vs. Stable Diffusion/Midjourney, now big studios vs. video models, as laid out in the Lawsuit pattern claim. This matters to filmmakers because it’s being used to argue that rights fights will shift from “style copies” toward story structure, pacing, and viral-format mimicry—i.e., the parts creative teams actually iterate on.

Video: Text overlay about lawsuit pattern

What’s missing in the tweets: no court filing, jurisdiction, or case number is provided in the Lawsuit pattern claim, so treat it as discourse until primary docs surface.

McConaughey predicts AI performances will hit Oscars and calls for likeness contracts

AI actors (Hollywood legitimacy): Matthew McConaughey is quoted predicting AI performances will “infiltrate” Oscar categories within years, with the suggestion that the Academy may end up creating separate AI performance awards, as summarized in the Town hall recap and expanded in the Coverage card. The same posts stress a practical ownership point for working actors and creators: lock down voice, likeness, and character rights in contracts before models train on performances without consent, per the Town hall recap.

Labor/legal context surfaced: the Coverage card points to SAG-AFTRA’s 2023-era “digital replica” consent protections and ongoing state/federal efforts (ELVIS Act, No AI Fraud Act) as the enforcement backdrop.

“Nobody elected these AI ethicists” pushback resurfaces as a creator legitimacy fight

Creator legitimacy and informal gatekeeping: A reposted line—“Nobody elected these AI ethicists… grabbed the mic… telling everyone what’s allowed”—captures a recurring backlash against unofficial norms shaping AI creation, as amplified in the Gatekeeping quote. For working creators, the immediate implication is less about any single platform rule and more about who gets treated as the de facto authority for acceptable datasets, acceptable outputs, and “real” authorship—especially as lawsuits and performer-rights debates intensify.

Signal quality: the Gatekeeping quote is rhetoric rather than policy; no new rule change is cited.


📅 Creator events & workshops: Hailuo x HKU AI Society Open Day

Only one notable event cluster today: an in-person university open day/workshop focused on AI film creation and commercial creativity.

Hailuo and HKU AI Society host a 2-day AI film open day in Hong Kong (Feb 26–27)

Hailuo × HKU AI Society (MiniMax/Hailuo): A 2-day in-person “Open Day” is scheduled for Feb 26–27 at The University of Hong Kong, positioning Hailuo’s video generation as a hands-on creative tool via a booth demo, a “create your first AI film” workshop, and a creator-led masterclass on how AIGC is reshaping commercial creativity, as laid out in the Event announcement.

The agenda is explicitly production-oriented (try the model, build a first film, hear commercial-use case learnings), which makes it more actionable than typical product marketing—especially for student creators and small teams looking for guided end-to-end workflow exposure.


🗣️ Creator reality: harassment, contests, and ‘when are you a creator?’

The discourse itself is the news here: creator mental health/community support, contest link-rot frustrations, and identity/skill framing for new AI creators.

A practical coping frame for AI-creator harassment: find “one person,” then build allies

Online harassment (AI creator community): A long personal post reframes coping as a search problem—first identify a single supporter (“You only need one person”), then deliberately shift attention from “thorns” (harassers) to “roses” (supportive peers), according to the Harassment coping story. The point is community reinforcement: once a supportive group forms, the harassment becomes less central because attention/fuel shifts.

Tactical steps described: look harder for at least one ally; start a conversation; expand into a small support cluster; stay busy collaborating so the harassment is starved of attention, as described in the Harassment coping story.

AI contest admin pain: finalists deleting posts creates dead links mid-judging

Contest ops (AI art/film challenges): One organizer reports that while compiling finalists, four entrants had deleted their submissions, leaving dead links in the Excel sheet used to track entries, as described in the Dead links frustration. It’s a small but concrete reminder that “social post as submission” is brittle when judging and record-keeping depend on URLs that creators can remove later; one mitigation is sketched below.
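
The thread doesn’t describe a fix, but a link-liveness pass at intake (and again just before judging) would surface deletions early. Below is a minimal sketch, assuming a CSV export of entries; the file name and the creator/submission_url columns are hypothetical, and some social platforms block non-browser clients or return 200 for deleted posts, so treat the result as a first-pass flag rather than proof.

```python
# check_submission_links.py: minimal sketch (hypothetical file/column names).
# Reads a CSV export of contest entries and flags submission URLs that no
# longer resolve, so organizers can snapshot or follow up before judging.
import csv
import urllib.error
import urllib.request


def url_is_live(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error status.

    Caveat: some social platforms block non-browser clients or return 200
    for deleted posts, so treat this as a first-pass flag, not proof.
    """
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "link-check/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, TimeoutError):
        return False


def main() -> None:
    # Assumed columns: "creator" and "submission_url".
    with open("entries.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    dead = [r for r in rows if not url_is_live(r["submission_url"])]
    for r in dead:
        print(f"DEAD: {r.get('creator', '?')} -> {r['submission_url']}")
    print(f"{len(dead)} of {len(rows)} submission links unreachable")


if __name__ == "__main__":
    main()
```

Archiving a screenshot or downloaded copy at submission time is the sturdier fix; the check above only tells you when it’s already too late.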

“When do you stop being a newbie?” becomes an identity question for AI creators

Creator identity (AI art/video): A newsletter writer asks for real-world definitions of when a “newbie” becomes a creator, explicitly inviting a wide range of perspectives and noting there’s “no wrong answer,” as asked in the Newbie to creator question. A follow-up message reiterates that responses vary widely and requests more community insight, as stated in the Request for more insight.


🏆 Creator releases & awards: festival wins and boundary-pushing shorts

Named projects and recognition: one festival award highlight plus several creator film drops framed as experiments in stacked AI toolchains.

UnHuman Shorts Season II awards Céremony in the “Shared Trace” category for unity of image, sound, and story

UnHuman Shorts Season II (festival award): Magiermogul’s AI film “Céremony” won the SHARED TRACE category, with the category definition explicitly emphasizing “total art” where image, sound, and story fuse into one statement, as described in the Win announcement; the same post also lists runners-up and the other category winners (Narrative Consequence, Aftertone, Visual Impact), framing the win as “director’s vision + balance of elements” rather than pure visuals.

The watch link for the film is reiterated in the Watch link post and again in the Watch link repeat, but the tweets don’t include a clip or stills, so the piece’s style and toolchain aren’t evidenced here beyond the festival’s category framing.

Ricardo Villavicencio’s ONE gets teased as a 2026 drop for White Mirror

ONE (Ricardo Villavicencio): A short teaser for “ONE” is being positioned as coming to White Mirror and its platform in 2026, with a second post framing it as an example of “pushing boundaries of AI and storytelling,” as stated in the Teaser and framing.

ONE teaser clip
Video loads on view

The tweets don’t add production details (model stack, runtime, distribution terms), so what’s concrete today is the release window (2026) and the platform association per the Teaser and framing.

DrSadek_ continues a serialized run of vertical shorts with Midjourney and Alibaba Wan 2.2

DrSadek_ (creator drops): A run of titled, atmospheric vertical pieces continues, each credited to Midjourney for images and Alibaba Wan 2.2 (via ImagineArt) for animation, including “The Ocular Construct,” “GrailFall: The Blood Covenant,” “The Monolith of Pages,” “The Golden Outpouring,” “The Lone Warrior,” and more, as shown across the Ocular Construct post, GrailFall post, and Monolith post.

The Ocular Construct clip
Video loads on view

Across the set, the consistent pattern is serialized, title-card framing (drop-by-drop worldbuilding) and repeatable attribution of the same tool stack, with the “Lone Warrior” post explicitly calling out Midjourney plus Nano Banana alongside Wan 2.2 in the Lone Warrior stack credit.


🧯 Tool friction watch: Seedance refusals, audio drift, and moderation blocks

A small but actionable cluster: creators hit refusals/filters and inconsistent conditioning (especially audio) in Seedance 2.0, impacting production reliability.

Seedance 2.0 audio conditioning drifts even with “don’t change the audio”

Seedance 2.0 audio conditioning: Following up on Music-to-video (audio+image → clips), a test suggests Seedance treats audio references as loose conditioning: even when prompted “music video, don’t change the audio at all,” the model rarely returns the exact provided audio and may output a remix instead, according to Audio ref drift test and the RT mirror.

Seedance output with audio drift
Video loads on view

Observed failure modes: The report calls out that it’s “rare to get a video output with my exact audio provided” and notes the system “didn’t like my lyrics,” both described in the Audio ref drift test.

Creative implication: This makes audio-first workflows (precise soundtrack/lyric timing, music videos, choreography cut to the beat) less deterministic than image conditioning, at least per the behavior documented in the Audio ref drift test; a quick way to verify whether you got your exact audio back is sketched below.
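
The tweets don’t show how the mismatch was checked beyond listening. As a rough self-check, here is a minimal sketch, assuming ffmpeg and numpy are installed and using hypothetical file names (reference.mp3, seedance_output.mp4): it extracts both tracks to mono WAV and correlates their loudness envelopes, which tends to separate “same track” from “remixed track” even when the two sound superficially similar.

```python
# audio_match_check.py: minimal sketch (assumes ffmpeg and numpy are installed;
# file names are hypothetical). Extracts the soundtrack of a generated clip and
# compares it to the reference track via a coarse loudness-envelope correlation,
# a quick way to tell "same audio" from "remixed audio" without listening.
import subprocess
import wave

import numpy as np


def to_mono_wav(src: str, dst: str, rate: int = 16000) -> None:
    """Decode any input (audio or video) to 16 kHz mono 16-bit PCM WAV."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-ac", "1", "-ar", str(rate), dst],
        check=True, capture_output=True,
    )


def envelope(path: str, hop: int = 1600) -> np.ndarray:
    """Coarse loudness envelope: RMS over ~100 ms windows, peak-normalized."""
    with wave.open(path, "rb") as w:
        pcm = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    frames = pcm[: len(pcm) // hop * hop].astype(np.float64).reshape(-1, hop)
    env = np.sqrt((frames ** 2).mean(axis=1))
    return env / (env.max() + 1e-9)


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation over the overlapping prefix of two envelopes."""
    n = min(len(a), len(b))
    return float(np.corrcoef(a[:n], b[:n])[0, 1])


if __name__ == "__main__":
    to_mono_wav("reference.mp3", "ref.wav")        # the audio you uploaded
    to_mono_wav("seedance_output.mp4", "out.wav")  # the clip you got back
    score = similarity(envelope("ref.wav"), envelope("out.wav"))
    # Rough heuristic: near 1.0 suggests the same track survived; a remix or
    # newly generated soundtrack usually correlates far lower.
    print(f"envelope correlation: {score:.2f}")
```

Envelope correlation won’t catch subtle pitch- or tempo-preserving edits (proper audio fingerprinting would be needed for that), but it is enough to flag the wholesale remixes described in the test.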

Seedance 2.0 refusal loop blocks multiple music-video prompt variants

Seedance 2.0 (Dreamina/CapCut ecosystem): A creator reports a repeatable refusal state where Seedance returns “The text you entered does not comply with the platform rules” across multiple prompt variants (“rap music video,” “Music video,” even a generic “Video,” on both 10s and 5s jobs), suggesting a moderation tripwire that’s hard to debug from the UI alone, as shown in the Refusal error grid.

Why it matters for production: When the same block persists across softened prompts and shorter durations, prompt iteration turns into guesswork rather than creative direction, based on the repeated attempts captured in the Refusal error grid.

