daVinci‑MagiHuman claims 2s for 5s@256p on H100 – single-stream audio+video


Executive Summary

GAIR/SII‑GAIR posted daVinci‑MagiHuman (paper + model) as a “Speed by Simplicity” digital-human stack. It uses a single-stream transformer to generate audio and video together, skipping multi-stream cross-attention. The model card claims ~2s to generate 5s @ 256p on a single H100 and ~38s for 1080p, with multilingual support. Comparisons are positioned as SOTA-adjacent, but the claims remain model-card-level for now, with no third-party reproduction artifact.

OpenAI/Sora: OpenAI circulates a notice “saying goodbye” to the standalone Sora app; creators also claim Sora API access is ending, with video generation expected to migrate under ChatGPT; timing and exact replacement UX remain unclear.
Alibaba DAMO/AgentScope: AgentScope open-sources a Python multi-agent framework (visual Studio; MCP tools; memory; RAG) under Apache‑2.0; “production-ready” framing, but no independent reliability numbers in-thread.
Hugging Face/hf-mount: hf-mount exposes Hub buckets/models/datasets as a local filesystem; buckets are read-write, models/datasets read-only; marketed as “agentic storage” with “100× bigger” remote capacity.

Across the feed, the center of gravity shifts from standalone creative apps toward shared surfaces (ChatGPT, agent studios, mounted storage); speed claims and workflow consolidation are loud, while verification and portability standards are still fragmenting.


Feature Spotlight

Sora app shuts down (and what that means for creators)

OpenAI is shutting down the Sora app (and reportedly its API), disrupting creator remix culture and signaling consolidation of AI video workflows into larger hubs like ChatGPT.



🧭 Sora app shuts down (and what that means for creators)

High-volume story today: OpenAI is discontinuing the standalone Sora video app (and reportedly the API), forcing creators to rethink capture→remix workflows and where “AI video” lives (standalone apps vs inside ChatGPT).

OpenAI sunsets the standalone Sora app, with creators flagging API fallout and ChatGPT migration

Sora (OpenAI): OpenAI is “saying goodbye to the Sora app,” per the circulated shutdown notice, with mainstream confirmation via the trade press report; creators also claim the Sora API is being shut down too, as stated in the API goodbye claim, while at least one creator summary frames the shift as video generation moving under ChatGPT rather than living in a separate app, per the Turkish workflow note.

What changes for workflows: The practical impact is losing a dedicated Sora surface (and potentially programmatic access) that many creator pipelines were built around, as highlighted by the “good bye to the API” reaction in the API goodbye claim.
Business context (reported): The Hollywood Reporter piece also mentions an alleged Disney deal reversal, as summarized in the trade press report, though the tweet set here doesn’t include first-party OpenAI detail beyond the shutdown notice itself.

Unclear from these tweets: exact shutdown timing, what (if anything) replaces Sora’s app UX inside ChatGPT, and whether API access is truly ending versus being rebranded/migrated.

Creators mourn Sora’s in-feed remixing culture as a low-friction onramp

Sora remix workflow: A recurring creator takeaway is that Sora’s “remixing culture” lowered the barrier to making good outputs because people could riff off a feed item instead of writing long prompts, as described in the remix culture note. That loss lands as more than nostalgia—it’s a specific UI mechanic that helped new creators iterate quickly.

Why it mattered: The claim is that “scroll a feed → riff” made onboarding faster than prompt authoring, per the remix culture note.
Where it might reappear: Some creators point to other products adopting remix-like social creation loops, with a hint that Seedance has remix capability in China, per the Seedance remix mention.

Net: this frames “remix” as a product feature AI video tools can copy, not a model capability.

Calls grow for OpenAI to open-source Sora as the app shuts down

Open-sourcing Sora: With the shutdown news spreading, some in the open-source ecosystem argue OpenAI should release Sora code/weights as a parting contribution, as stated in the open-source suggestion. The pitch is framed as a “zero to hero” reputational move by another creator in the zero to hero framing.

This is advocacy, not an announced plan; the tweet set contains no indication OpenAI is considering open-sourcing Sora beyond the community request in the open-source suggestion.

Concern: automated short-form accounts built on Sora may break when the app goes away

Downstream creator ops: One reaction highlights that “automated AI tiktok accounts” and “fruit content” pipelines are often built on Sora, raising questions about breakage and platform switching when Sora disappears, as noted in the automation pipeline worry. Another creator repeats the “Sora is no more” framing and flags API loss as an extra dependency risk in the API dependency fear.

This is a second-order impact story: the tweets don’t quantify how many accounts or which tooling stacks depend on Sora, but they do show real operational anxiety in the automation pipeline worry.

Sora shutdown gets read as a cost signal: AI video may consolidate to a few providers

AI video economics: One creator frames the Sora shutdown as evidence that AI video apps can be “too expensive with too little benefit” and predicts only a few players will remain, as argued in the cost consolidation take. That interpretation is echoed in smaller reactions mourning Sora’s exit, like pouring one out.

Treat this as market sentiment rather than proof: the tweets don’t include cost curves, usage numbers, or official margin commentary—only the strategic inference in the cost consolidation take.

“Last day” sentiment: requests to relax Sora restrictions briefly before shutdown

Sora access policy: A small but clear thread of creator sentiment asks OpenAI to temporarily loosen Sora restrictions before discontinuation—“remove restrictions … for a day”—as stated in the relax restrictions request. A related nostalgia post references “the first few days of uncensored sora” in the uncensored era reference, reinforcing that some creators experienced a perceived tightening over time.

There’s no indication in this tweet set that OpenAI plans any last-chance policy change; it’s purely a user request in the relax restrictions request.


🎬 Seedance vs Kling era: realism, style signatures, and team workflows

Video creators focused on model choice and “feel” (Seedance 2.0 vs Kling 3.0), plus team/production plan features and prompt specificity for comedic/realistic shorts. Excludes Sora shutdown (covered in the feature).

Kling 3.0 prompt for “filming a Zoom call on a laptop screen” realism + gag beat

Kling 3.0 prompting: A detailed “realistic handheld footage of a MacBook Pro screen” prompt is being shared as a way to get believable screen artifacts (reflections, moiré pixel texture, dust, slight handheld shake) while staging a non-explicit comedy beat inside the Zoom window, per the [prompt writeup](t:267|Zoom screen prompt).

Zoom-screen comedy output

The interesting craft choice is the “nested frame” approach—treating the laptop screen as the hero object, then directing the actor’s performance within the UI frame, as described in the [shot requirements](t:267|Screen realism requirements).
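To make the pattern copyable, here is a hypothetical skeleton assembled only from the cues named above (screen artifacts, a fixed nested frame, the performance staged inside the UI); the wording of the actual shared prompt will differ:

```
Realistic handheld footage of a MacBook Pro screen showing a Zoom call.
Screen artifacts: faint reflections, moiré pixel texture, a little dust,
slight handheld shake. Fixed framing on the laptop as the hero object.
Performance: the single Zoom participant plays [COMEDY BEAT] entirely
inside the Zoom window. Must not change: one laptop, one Zoom window,
one participant, no extra UI clutter.
```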

Seedance 2.0’s “signature look” becomes a selection factor vs Kling 3.0 stacks

Seedance 2.0 vs Kling 3.0: A creator claims Seedance 2.0 outputs are becoming instantly recognizable due to a recurring aesthetic, while Kling 3.0 combined with Nano Banana is described as lacking that tell—keeping it “irreplaceable” for projects that need a more neutral look, per the [style signature take](t:35|Style signature take).

Seedance vs Kling look demo

This is less about raw quality and more about “do clients notice it’s that model?”—a practical concern once you’re shipping multiple spots/episodes in a consistent visual language.

Timecoded shotlists + continuity rules are becoming the anti-drift prompt format

Video prompting craft: Two separate prompt shares converge on the same structure—(1) timecoded beats, (2) explicit camera language, and (3) a “must-not-change” continuity section—to prevent typical AI video failure modes like prop duplication and unreadable blocking, as shown in the [donut shotlist prompt](t:96|Timecoded donut prompt) and the [Zoom-screen realism prompt](t:267|Zoom realism prompt).

The point is the prompt is acting like a mini call sheet: it pins down what must stay stable (one donut; one Zoom participant; fixed framing) while leaving motion/performance to the model.
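As a shape reference only (this is a hypothetical skeleton, not the shared text), the three-part structure reads roughly like:

```
00:00-00:03  Wide shot, locked-off camera: [SUBJECT] enters frame.
00:03-00:07  Slow push-in, 35mm look: [ACTION BEAT].
00:07-00:10  Cut to close-up, shallow depth of field: [REACTION].
MUST NOT CHANGE: exactly one [PROP]; same wardrobe, same framing logic;
no duplicated props; no extra characters.
```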

Topview Agent V2 + Seedance 2.0 adds long-form continuity and continuous music claims

Topview Agent V2 + Seedance 2.0 (Topview): Following up on Topview Agent V2—multi-scene generation + timeline editing—new creator testing claims multi-scene cuts that “hold together” and music that merges continuously between scenes, alongside the note that Seedance 2.0 is “unlimited” for the next 365 days on a Business Annual plan, per the [testing notes](t:323|Multi-scene + music claim).

Multi-scene workflow demo

This is a workflow claim (not a model claim): the pitch is “one workflow from a single idea” instead of stitching 15-second clips across tools, as described in the [long-form framing](t:323|Long-form film framing).

Higgsfield pushes a $1,000,000 likeness deal as “AI film is already here” proof

Higgsfield (Higgsfield AI): A thread claims $1,000,000 was paid to one person for their likeness, framing it as a new cost model for AI-generated film (“no camera. no script. no union disputes.”) in the [likeness breakdown post](t:116|Likeness economics claim).

Likeness economics montage

This is a business + ethics signal for video creators: the market is starting to price “face/identity” as a licensable asset, not just labor time, per the [breakdown framing](t:116|Cost model framing).

Kling AI Team Plan highlights 15-person collaboration and shared asset management

Kling AI (Kling): Kling is pushing a Team Plan workflow built around collaboration—supporting up to 15 members, a shared workspace for assets, and explicit “worry-free” commercial use positioning, shown in the [team plan walkthrough](t:80|Team plan walkthrough).

Team plan UI tour

Production impact: The pitch is less “better generations” and more fewer handoffs—one place to manage prompts/assets across a small studio team, as described in the [plan overview](t:80|Team plan overview).

Simple prompts are a Kling 3.0 strength (example: combat robot street scan)

Kling 3.0 (Kling): A creator reports that short, explicit prompts work especially well in Kling 3.0, sharing a concrete example prompt about a “futuristic city street… enormous combat robot… glowing optics scan,” per the [prompt example](t:149|Simple Kling prompt).

Combat robot street clip

The practical read: Kling 3.0 seems to reward clear subject + action + environment more than elaborate prose—useful when you want fast iterations and predictable blocking.

Hailuo Light Studio pushes web-only relighting for smoother video transitions

Light Studio (Hailuo AI): Hailuo is promoting a web-only Light Studio flow aimed at smooth transitions plus natural relighting, with the product entry linked in the [tool post](t:135|Light Studio post) via the [web tool page](link:135:0|Relight tool page).

There’s no concrete spec sheet in the tweets; the core creative promise is “polish what you already have” by re-lighting shots so cuts feel less stitched together, as implied by the [relighting positioning](t:135|Relight positioning).

Seedance 2.0 is being marketed by creators as “Arcane-like” cinematic animation

Seedance 2.0: A creator frames Seedance 2.0 output as “straight out of an Arcane episode,” using the clip as evidence and teasing a prompt share for subscribers in the [Arcane comparison post](t:99|Arcane comparison post).

Cinematic Seedance clip

Treat it as an adoption signal: more creators are selling Seedance on high-end series look rather than “AI video novelty,” as implied by the [positioning language](t:99|Arcane vibe claim).

Seedance 2.0 is being used as a style-adapter with reference images

Seedance 2.0: A creator highlights Seedance 2.0’s ability to adapt across aesthetics (anime, 2D, 3D), showing a workflow where a Midjourney-made reference style is fed in to steer the final animation, per the [reference-driven example](t:58|Reference style example).

Reference-to-animation result

The operational takeaway is that Seedance is being used less as “text-only video gen” and more as a style-preserving animator when you already have a look locked in upstream, as described in the [process note](t:58|Style adaptation claim).


🖼️ Image model heat check: Midjourney v8, Uni‑1 usage, and photoreal leaps

Creators are benchmarking new image quality (especially Midjourney v8) and sharing production uses of Uni‑1 (material maps, consistent assets). Excludes raw prompt dumps and SREF codes (handled in Prompts & Style Drops).

Midjourney v8 is becoming the default for fashion-editorial stills

Midjourney v8 (Midjourney): Creators are posting v8 outputs specifically as fashion/fine‑art “editorial frames,” with one calling it the “best… I’ve ever used” in a v8 showcase post that’s essentially a quality claim plus in-the-wild examples, per the Fashion fine art praise.

What people are actually shipping: Dark, high-contrast, magazine-ish sets (skin, fabric, and lighting doing most of the work), echoed by more “I’m in love with my MJv8 work so far” dumps that look like shoot selects rather than prompt tests, per More v8 stills.
Style range signal: Others are using v8 for painterly, location-inspired studies (less “fashion,” more atmosphere), like the Melaka-inspired impressions shared in the V8 Melaka experiments.

A new, unnamed image model is being teased as a photoreal jump

Image realism (unspecified model): One creator shared a small “classic photo riff” set and called it the “most realistic AI photos I’ve ever seen,” adding that broader access is coming soon; it’s a strong hype signal, but there’s no named model or eval artifact yet, per the Photoreal teaser.

The only hard evidence in the tweets is the sample pair itself (a black‑and‑white “vintage computer” vibe and a library portrait), so treat the claim as promotional until the model name and distribution surface are clear, per the same Photoreal teaser.

Uni-1 is being used to generate usable material maps, not just pretty images

Uni-1 (Luma Labs): A concrete production use is showing up: generating consistent PBR/base color + normal + displacement map sets with Uni‑1, then combining them with triplanar/image projections to speed up asset lookdev while keeping quality, as described in the short Material maps workflow demo.

PBR maps generation demo

Why it matters for 3D artists: The post frames Uni‑1 less as “image gen” and more as a material-authoring accelerator (maps that stay coherent together), which is usually the part that breaks when you batch-generate textures, per the Material maps workflow demo.
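The “triplanar projection” half of that workflow is standard lookdev math rather than anything Uni‑1-specific: sample the generated map along the three world axes and blend the three projections by the surface normal. A minimal numpy sketch of the blend, where `sample2d` is a placeholder texel lookup you would supply from your renderer or DCC:

```python
import numpy as np

def triplanar_weights(normals, sharpness=4.0):
    """Per-point blend weights for the X/Y/Z planar projections.
    normals: (N, 3) unit surface normals."""
    w = np.abs(normals) ** sharpness          # favor the dominant axis
    return w / w.sum(axis=1, keepdims=True)   # weights sum to 1

def triplanar_sample(sample2d, positions, normals, scale=1.0):
    """Blend three planar projections of one 2D map (e.g. a generated
    base-color map) across world-space surface points."""
    w = triplanar_weights(normals)
    tex_x = sample2d(positions[:, [1, 2]] * scale)  # project along X
    tex_y = sample2d(positions[:, [0, 2]] * scale)  # project along Y
    tex_z = sample2d(positions[:, [0, 1]] * scale)  # project along Z
    return w[:, 0:1] * tex_x + w[:, 1:2] * tex_y + w[:, 2:3] * tex_z
```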

Canva demonstrates one-click layer separation for fast compositing

Canva (Canva): A demo shows Canva separating a photo into editable layers with a single action (positioned as “reinventing Photoshop from first principles”), turning background removal into a multi-layer edit workflow rather than a single cutout, per the Layer separation demo.

One-click layers demo

This is the kind of feature that matters for ad and thumbnail teams because it collapses the “extract subject → recompose → retouch” loop into one tool surface when you’re iterating fast, per the Layer separation demo.

ComfyUI’s new “APP” mode is pitched as lowering the node barrier

ComfyUI (local image gen): A Turkish creator says ComfyUI added a simple APP feature aimed at people who avoided it because of the node system, pairing it with a fast local workflow they call “Z‑Image” for unlimited/free image generation setups, per the ComfyUI APP mention.

The post is positioned as a usability shift (less node-graph intimidation) rather than a new model; details live in the in-thread YouTube walkthrough link.

Luma clarifies model routing in Agents and how to guarantee Uni‑1 outputs

Luma Agents (Luma Labs): Luma says agent requests may route across models, and spells out how to ensure you’re actually running Uni‑1—either select Create Image → Uni‑1, or explicitly instruct the agent to use Uni‑1, then verify the model label on the output, per the Routing clarification.

Uni-1 selection steps

Testing surface signal: The same note teases API access coming soon for more direct testing, which implies current creator testing is mostly through the Agents UI rather than a stable API harness, per the Routing clarification.

Creators are using “GPT Image 1.5” inside Firefly for cinematic product-diorama shots

GPT Image 1.5 in Firefly (Adobe): A creator shared an example labeled “GPT Image 1.5 in Adobe Firefly,” showing a high-end, moody product-photo look where the subject becomes a mini-diorama (a deep-sea diving helmet viewport framing an underwater scene), per the Firefly example post.

This is a capability signal more than a release note—no official Adobe/OpenAI announcement appears in the tweets, but the output style being posted is “macro product photo + contained world” rather than generic portraits, per the same Firefly example post.

Photoshop beta ships Rotate Object for 2D rotation workflows

Photoshop (Adobe): A creator thread highlights the release of Rotate Object in Photoshop (beta), pitching it as a way to rotate 2D images and then use Harmonize to re-match lighting so the edit sits naturally in the scene, per the Rotate Object beta post.

The tweet doesn’t include a UI clip or changelog screenshot, but it’s being framed as a practical “pose/perspective correction” step before lighting integration, per the same Rotate Object beta post.

Firefly Boards is being used as a cross-model ideation workspace

Firefly Boards (Adobe): A creator calls Adobe Firefly Boards their go-to for ideation specifically because it lets them explore multiple AI models “in one place,” which reads like a moodboard-first workflow rather than committing early to a single generator, per the Boards workflow mention.

No concrete settings or templates are shared in the tweet itself, but the “multi-model in one surface” framing is the main adoption signal, per the Boards workflow mention.

Open-source identity jokes are bleeding into creator timelines

Open-source culture (creator discourse): A viral post joking “Meek Mill? The open source contributor?” signals how “open source contributor” has become a mainstream status marker that’s now meme material, not just dev Twitter shorthand, per the Open source meme.

It’s not a tool update, but it’s a reminder that open-source identity is part of the creator brand layer right now—especially as more creative tooling and models ship in open repos, per the same Open source meme.


🧪 Post tools that save shots: relighting, layering, and polish

A cluster of finishing tools appeared today—especially browser-based relighting and faster compositing/layer workflows that reduce re-generation loops for creators.

Freepik Relight brings controllable lighting (and reference lighting transfer) to images + video

Relight (Freepik): Freepik shipped Relight as an in-browser finishing tool for both images and video—explicitly letting you control light direction, intensity, and color, and also transfer lighting from a reference image so you can match a look across shots without regenerating everything, as described in the launch post.

Relight control demo

What it’s for: fast continuity fixes—e.g., keeping a character/product lit consistently across a sequence by using a single “lighting reference” still, per the launch post.
How it’s packaged: starts from presets and then lets you dial in changes; Freepik positions it as “a whole lighting studio in your browser,” as shown in the launch post and the tool page.

This is a straight attempt to turn lighting into an editable layer, instead of a reroll loop.

Canva shows one-click “separate into layers” for fast compositing

Layers separation (Canva): Canva is demoing a “reinventing Photoshop” move where you can one-click separate an image into layers—effectively making subject extraction + recomposition feel like an editable stack rather than a destructive edit, as shown in the layers demo.

Layer separation demo

For designers, this is the kind of feature that turns background removal from a single action into a compositing workflow (swap backgrounds, re-order elements, iterate layouts) without re-generating the base image.

Photoshop (beta) adds Rotate Object to fix pose/perspective without re-gen

Rotate Object (Photoshop beta): A new Photoshop beta capability called Rotate Object is being shared as a way to rotate 2D images, then optionally use Harmonize to relight/integrate the edit, per the feature mention.

The practical creative implication is fewer “start over” loops when a generated element is almost right but sitting at the wrong angle.

Uni-1 is being used to generate PBR map sets (normal/displacement) for faster lookdev

Uni-1 (Luma Labs AI): A 3D asset workflow is circulating where Uni-1 generates consistent PBR/BaseColor + Normal + Displacement maps, then creators combine those with triplanar/image projection to speed up lookdev while keeping material quality, as shown in the workflow demo.

PBR maps on sphere

This is “post” in the sense that the model output becomes a reusable material package—so you can change lighting/camera/mesh context without redoing texture work.

A “frosted glass with a clear slit” prompt is spreading as a reusable post-look

Frosted glass look (prompt technique): A copy-paste prompt is being used as a post-style filter to make any image look like it’s behind thick fogged/frosted glass—while leaving a small, sharp clear region (a wipe in condensation) to reveal one detail, as laid out in the prompt share.

The prompt’s key constraints are “physically real frosted texture” (micro ripples, refraction, scattering) plus a deliberately tiny untouched window, per the prompt share.

Hailuo Light Studio spotlights relighting and shadow control as a polish step

Light Studio (Hailuo AI): Hailuo is pushing its web-only Light Studio as a post step for “smooth transitions” plus “natural relighting,” according to the tool post, while separately framing it as direct shadow control (“shadows move on your command”) in the shadow-control post.

Positioning for creators: it’s marketed as production-style lighting adjustment after you already have motion/frames, with both posts pointing to the same web surface via the relight tool page.

No feature list beyond relight/transition claims showed up in the tweets, so treat it as a workflow surface tease rather than a spec drop.

Character-swap quick cuts are becoming a lightweight motion-consistency check

Character-swap editing pattern: Short “swap” clips that cut between two subjects doing the same moves (timed to match) are being shared as a simple way to show how well motion, pose timing, and rhythm carry across different characters, as seen in the swap clip.


It’s not a model feature by itself, but it’s a compact benchmark format creators keep reusing: the edit makes drift obvious within seconds.


🧾 Prompts & SREFs creators are actually saving (Midjourney + Nano Banana)

Heavy prompt traffic today: Midjourney SREF shares, Nano Banana structured prompt schemas, and copy-paste look recipes (blueprint/brutalism, Moebius sci‑fi, mixed-media posters, and optical effects).

Copy-paste prompt for frosted glass with one crystal-clear window

Nano Banana (Image effect prompt): A fully copy-paste prompt was shared for turning an uploaded image into “thick foggy frosted glass,” including micro-ripples, refractive warping, light scattering, and a single small wiped-clear region (a narrow slit/window) that stays sharp, as specified in the [frosted-glass prompt](t:194|Frosted-glass prompt).

Key constraint: It explicitly warns against a simple blur filter—demanding physically believable condensation texture and refraction, per the [same prompt](t:194|Frosted-glass prompt).
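A hypothetical prompt in the same shape, built only from the constraints summarized above (the circulating copy-paste text will differ):

```
Re-render the uploaded image as if seen through thick, foggy frosted
glass: heavy condensation, micro-ripples, refractive warping, realistic
light scattering. Do NOT use a simple blur filter; the distortion must
read as physical glass texture. Leave one small wiped-clear slit at
[POSITION] perfectly sharp, revealing [DETAIL] through the glass.
```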

Macro product-photo schema: miniature weather inside antique mechanical objects

Firefly + Nano Banana 2 (Prompt schema): A modular prompt template describes an “open antique mechanical object” on a dark wooden table, where the object’s face contains a complete miniature weather event—paired with macro-photo language (85mm, shallow DoF, moody background) and internal light casting through transparent surfaces, as shared in the [prompt template](t:158|Mini weather schema).

Why creators save this one: It’s built as a slot-fill schema ([MECHANICAL OBJECT], [WEATHER EVENT], [PRECIPITATION], etc.), which makes it easy to generate consistent series (compass hurricane, music box snowfall, and so on), as shown in the [examples](t:158|Mini weather schema).
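Rendered as a hypothetical instance of the schema (slot names from the template above; the wording here is illustrative, not the shared text):

```
Macro photograph, 85mm, shallow depth of field, moody dark background.
An open antique [MECHANICAL OBJECT] on a dark wooden table; inside its
face, a complete miniature [WEATHER EVENT] with visible [PRECIPITATION],
internal light casting through the transparent surfaces.
```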

Spec-sheet prompting is spreading: explicit constraints to fight drift

Prompting pattern (Spec sheets): Multiple posts show the same core move—write prompts as constraint documents (must-have lists, explicit “remove elements” rules, and strong negative prompts) instead of vibes-only text, as demonstrated by the [Zoom screen JSON spec](t:372|Zoom screen spec) and the checklist-heavy [frosted-glass effect prompt](t:194|Frosted-glass prompt).

What it’s targeting: Reducing common model failure modes like UI clutter, extra objects, and inconsistent “physical” artifacts (moire/refraction), per the rules embedded in the [two examples](t:372|Zoom screen spec).

Black-and-white xerox zine prompt for punk fashion editorials

Nano Banana (Prompt template): A “brand × world-building chaos” prompt is being passed around for black-and-white xerox zine fashion—extreme grain, halftone dots, ink bleed, photocopier distortion, torn paper edges, heavy shadows, and a stencil-stamped brand logo—spelled out in the [copy-paste prompt](t:415|Xerox zine prompt).

Notable constraint: It hard-requires “No color. No clean lines,” while still aiming to keep the output readable as fashion, per the [same text](t:415|Xerox zine prompt).

Mixed-media portrait template: realism plus sketch, newsprint, and acrylic splatter

Nano Banana 2 (Prompt template): A reusable mixed-media portrait recipe is being shared with a clear variable structure—[SUBJECT], [EXPRESSION], [GAZE DIRECTION], [ACCESSORY/STYLING], and color slots—layering black sketch strokes, torn newspaper collage, acrylic paint splashes, and editorial print texture, as written in the [template prompt](t:71|Mixed-media template).

What it’s optimizing for: High-contrast “premium poster” outputs with ultra-detailed eyes/face while still reading tactile/handmade, per the [same template](t:71|Mixed-media template).

Midjourney SREF 1326912768: neon retro-wave cyberpunk manga vibe (Niji 6)

Midjourney (SREF): A “lost cyberpunk anime” style share pinned a daily recipe using --sref 1326912768 --niji 6, describing heavy pink/purple palettes, detailed linework, and 80s/90s retro-wave animation cues, per the [style description](t:173|Style description) with more context on the linked [SREF guide](link:400:0|SREF guide).

Prompt additions mentioned: The post suggests pushing it further with neon lighting effects and liquid/dripping elements, per the [same share](t:173|Style description).
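Assembled into a runnable Midjourney command (the subject is a placeholder; only the parameters and the suggested additions come from the share):

```
/imagine prompt: [SUBJECT], neon lighting effects, liquid dripping
elements, 80s/90s retro-wave anime --sref 1326912768 --niji 6
```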

Midjourney SREF 3368833261: soft-focus fashion-film haze and golden backlight

Midjourney (SREF): Promptsref highlighted --sref 3368833261 as a repeatable “dreamy fashion film frame” look—soft-focus cinematic haze, warm golden backlight, and film-grain texture cues—described in the [SREF feature note](t:145|SREF feature note) and expanded in the linked [style breakdown page](link:378:0|SREF breakdown).

Use-case framing: The post positions it for luxury/editorial portraits (perfume, wedding visuals, art film posters), per the [same description](t:145|SREF feature note).

Midjourney SREF 3782619: gothic-cyberpunk ink portraits with fractured geometry

Midjourney (SREF): Another saved code is --sref 3782619 --niji 6, described as a collision of gothic darkness, cyberpunk tension, and raw ink emotion—red/black/blue contrast, abstract ink textures, and fragmented geometry—per the [style writeup](t:192|Style writeup) and the linked [breakdown page](link:386:0|Breakdown page).

Best-fit outputs named: Dark fantasy characters, sci‑fi book covers, and avant‑garde album art, per the [same post](t:192|Style writeup).

Nano Banana 2: 9:16 aspect ratio tip for better vertical portraits

Nano Banana 2 (Prompting practice): One creator reported better results using 9:16 rather than 2:3 for certain portrait/selfie-style compositions, calling it out directly in the [aspect ratio note](t:65|Aspect ratio note) alongside a structured prompt example and a reference to a curated prompt collection in the [prompt library link](link:65:0|Prompt library).

What’s new here: Aspect ratio is being treated as a first-class “quality lever,” not just a crop decision, per the [same post](t:65|Aspect ratio note).

Prompt sensitivity: swapping one object noun caused a 0.97-point score swing

Prompt testing (Nano Banana Pro): A creator running multi-model scoring reported a large sensitivity to a single variable—keeping the same prompt “storm” but changing the container object yielded a 0.97-point swing, with the best single image scoring 9.23, per the [prompt test note](t:285|Score swing note).

Underlying takeaway in the data: The “main noun” (container/subject) can dominate output quality even when lighting/atmosphere descriptors stay constant, per the [same post](t:285|Score swing note).


🧠 Agent builders’ toolkit: Claude Code, AgentScope, GitAgent, and “done-not-demo” agents

Builders discussed agent frameworks and packaging standards (AgentScope, GitAgent), plus Claude Code’s permission/autonomy changes. This is the ‘agent plumbing’ creators use to automate production tasks.

AgentScope (Alibaba DAMO) open-sources a production-ready agent framework

AgentScope (Alibaba DAMO Academy): Alibaba’s DAMO team (framed in the thread as the group behind Qwen) is credited with releasing AgentScope, positioned as a production-ready Python framework for building multi-agent systems with visual design, MCP tool support, memory, RAG, and “reasoning modules,” with the repo linked in the Framework overview and GitHub repo.

What creatives might actually notice: the diagram and doc snapshot show an opinionated stack—Studio (visual builder), Runtime (sandbox/deploy/A2A), and a long list of integrations (datastores + observability) that suggest the project is trying to be “agent platform” rather than a thin wrapper, as seen in the Framework overview.

The tweet frames this as Apache-2.0 licensed and already being plugged into real pipelines, but there’s no independent validation signal in today’s set beyond the documentation screenshot and repo link.

Claude Code adds auto mode to reduce approval friction

Claude Code (Anthropic): Claude Code introduced an auto mode that sits between “approve every file write/bash command” and “skip permissions entirely,” aiming to cut interaction overhead while keeping some guardrails, as described in the Auto mode mention.

What’s not clear from today’s tweets is exactly how auto mode decides what to run vs. what to ask approval for (rules, thresholds, or scope), so treat it as a permissions UX change rather than a new capability claim until Anthropic publishes the full behavior.

Accio Work pushes local-first desktop agents with sandbox + approvals

Accio Work: A comparative thread frames many agents as “expensive interns” that look good in demos but require constant supervision, then claims Accio Work differs by running actions in a secure sandbox and requiring user approval for bigger actions, according to the Expensive intern critique and the Accio sandbox demo.

Agent runs in sandbox

Positioning detail: the author explicitly contrasts it with OpenClaw, Claude Cowork, and Perplexity Computer on “can it actually complete multi-step ops,” as stated in the Expensive intern critique.
Commercial packaging: the same thread promotes a “2-week free trial” and premium-model access claims (including Opus 4.6 and GPT-5) in the Trial and model claims, with the product landing page linked as the Product page.

The key creative relevance is the emphasis on execution reliability (doing tasks) rather than “research assistant” behavior, but today’s evidence is primarily the author’s report and a short demo clip.

GitAgent pitches a universal “Docker for agents” folder spec + adapters

GitAgent: A thread argues that the “big unsolved problem” for agents is portability—your prompts/tools/memory end up trapped inside one framework—then proposes GitAgent as a repo-native standard for packaging agents across ecosystems, per the Portability pitch and the “Docker analogy” framing in the Docker analogy.

Folder spec walkthrough

Folder spec (concrete): the proposed structure includes agent.yaml, SOUL.md, RULES.md, plus skills/, tools/ (MCP-compatible), and memory/, as laid out in the Folder spec (sketched as a tree after this item).
Adapters and CLI: the same thread claims exporters/adapters targeting Claude Code, OpenAI Agents SDK, CrewAI, LangChain, GitHub Actions, and a universal system-prompt format, with example CLI commands shown in the Adapters and CLI.

The author also cites “1100+ stars and growing” alongside compliance positioning; the most direct “home base” reference is the Project site.
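Laid out as a tree, the proposed structure looks like this (the inline comments are a gloss on the thread’s descriptions, not the spec’s own wording):

```
my-agent/
├── agent.yaml   # manifest
├── SOUL.md      # persona/identity
├── RULES.md     # behavioral constraints
├── skills/      # reusable skills
├── tools/       # MCP-compatible tool specs
└── memory/      # persistent state
```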

Hugging Face ships hf-mount to mount buckets/models as a local filesystem

hf-mount (Hugging Face): Hugging Face introduced hf-mount, a tool to attach storage buckets, models, or datasets from the Hub as a local filesystem, with read-write for buckets and read-only for models/datasets; the pitch includes remote storage “100× bigger than your local disk” and calls the tool out as “perfect for agentic storage,” per the Mount buckets locally post and the Intro thread.

For creative agent builders, this is being framed as a straightforward way to give agents a persistent, file-shaped memory layer without stuffing everything into a vector DB abstraction—though today’s tweets don’t include performance numbers or OS/runtime constraints.
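The tweets don’t show hf-mount’s actual invocation, so as a rough illustration of the same “Hub as a filesystem” pattern, here is the existing fsspec-based HfFileSystem API from huggingface_hub; note this is a sibling Python API, not hf-mount itself, and the repo path is just an arbitrary public dataset:

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()  # anonymous access works for public repos

# List a public dataset repo as if it were a directory.
print(fs.ls("datasets/HuggingFaceFW/fineweb", detail=False))

# Read a file straight off the Hub through a normal file handle.
with fs.open("datasets/HuggingFaceFW/fineweb/README.md") as f:
    print(f.read(200))
```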

Linear says “issue tracking is dead” and leans into agent-driven workflows

Linear (Linear): Linear is publicly pushing the line “issue tracking is dead” while describing a next system where agents reduce process overhead and turn richer context (feedback, decisions, code, specs) into execution, per the Manifesto teaser and the accompanying Product essay.

Issue tracking is dead

The essay claims agent usage is already widespread (“over 75% of enterprise workspaces” using agents) and frames the shift as moving from handoffs/process to context-driven work orchestration, as described in the Product essay.

Zhipu AI releases ZClawBench for real-world agent evaluation

ZClawBench (Zhipu AI): Zhipu AI is said to have released ZClawBench on Hugging Face as a “realistic benchmark” for evaluating agents on real-world, OpenClaw-style tasks, per the Benchmark announcement.

There aren’t details in today’s tweets about task taxonomy, scoring, or baseline models, but it’s a clear signal that “agent evaluation on realistic workflows” is getting more formalized beyond demo videos.

Agentation gets a “use it all the time” endorsement

Agentation: A short endorsement calls Agentation “sick” and says it’s used “all the time,” which is a small but direct adoption signal in a noisy agent-tool market, as stated in the Creator endorsement.

No workflow details or screenshots are included in today’s tweet, so it reads mainly as momentum/word-of-mouth rather than a spec change.


🧩 Full-stack creator workflows (one canvas, consistent characters, music video stacks)

Practical multi-step pipelines dominated: agents coordinating multi-scene consistency, Uni‑1→video chains, and Freepik Spaces workflows combining lipsync + animation + model mixing. Excludes framework-level agent releases (covered in Coding/Agent Tooling).

Luma Agents push “one canvas” filmmaking with consistent characters across scenes

Luma Agents (Luma Labs): Luma is pitching Agents as a unified creative surface—“one canvas, one conversation”—meant to keep multi-scene work coherent instead of bouncing between prompt tabs and editors, as shown in the [workflow demo](t:199|workflow demo). One creator reports generating a new character (platinum hair, freckles, tattoos) and getting consistent face continuity across scenes “no re-prompting, no tweaking,” per the [consistency note](t:318|consistency note).

Unified workflow demo

Model routing control: Luma notes requests can route across models inside Agents, and it provides a concrete “force Uni-1” path (Create Image → Uni-1, or explicitly ask) plus an “API access coming soon” tease for direct testing, according to the [Uni-1 selection clip](t:63|Uni-1 selection clip) and the linked product page in Product page.

Freepik Spaces music-video stack leans on lipsync plus consistent characters

Freepik Spaces workflow: Following up on Music-video stack (multi-tool music-video pipeline), a creator shares a ~58s example and says the Space combines lipsync with animated shots while keeping “the vibe and the characters consistent all along the clip,” per the [music video walkthrough](t:82|music video walkthrough). They also claim they’re sharing the full Space and prompts (positioned as more complete than many paid courses), as stated in the [prompts access post](t:276|prompts access post).

Music video made in Freepik

A trailer pipeline uses Uni-1 keyframes to turn a 3D concept into a cut

“Lacrimosa” trailer workflow (DreamLabLA): DreamLabLA spotlights a trailer that started from a 3D concept and was pushed forward with Uni-1-generated keyframes, per the [trailer post](t:150|trailer post). In parallel, their other Uni-1 share shows the same “generate clean intermediate artifacts, then build downstream” mindset—here via material maps—per the [PBR workflow example](t:50|PBR workflow example).

Trailer made from Uni-1 keyframes

Creators are starting to avoid Seedance’s telltale look by stacking Kling + Nano Banana

Model “signature look” (Seedance 2.0 vs Kling 3.0): One creator says repeated exposure makes Seedance 2.0 outputs feel increasingly recognizable due to a recurring aesthetic, while claiming a Kling 3.0 + Nano Banana combo avoids that “signature” and remains harder to fingerprint, per the [comparison note](t:35|comparison note). Separate Seedance shares keep emphasizing it can hit many aesthetics, which makes the “recognizable look” debate more about defaults and grading than raw capability, as shown in the [style-adaptation example](t:58|style-adaptation example).

Seedance clip example

Uni-1 is being used to generate PBR map sets for faster 3D asset lookdev

Uni-1 (Luma Labs): A shared workflow shows Uni-1 generating consistent base color / normal / displacement-style map outputs, then combining them with triplanar/image projections to speed up asset creation while holding quality, as described in the [PBR maps demo](t:50|PBR maps demo). Luma’s own note that Agents can route across models—and that you may need to explicitly select Uni-1—adds context on why some teams are double-checking the model label on outputs, per the [Uni-1 selection clip](t:63|Uni-1 selection clip).

PBR map generation demo

Seedance 2.0 is being driven by reference images to lock a custom style

Seedance 2.0 (via TopviewAI): A creator reports using a reference image from an original Midjourney style as the aesthetic anchor, with Seedance 2.0 adapting to it across animation, per the [reference-driven example](t:58|reference-driven example). In the same feed, another Seedance share frames the model as capable of “film studio” outputs (again tied to Topview availability), per the [availability note](t:104|availability note).

Reference image to animation

Storyboard grids are showing up as the anchor for AI animation workflows

Storyboard-to-output pattern: A creator shares a storyboard grid as the pre-production artifact for an upcoming animation, showing character/scene variations organized before generation decisions, as seen in the [storyboard screenshot](t:132|storyboard screenshot).

The same thread context ties this planning step to Photoshop (beta) tooling—specifically “Rotate Object” and the follow-on “Harmonize” lighting step—suggesting some teams are mixing classic compositing prep with AI generation rather than treating generations as one-shot finals, per the [Photoshop beta mention](t:8|Photoshop beta mention).

Uni-1 frames + Ray-3.14 clips are being used as a “next-gen mixtape” format

Uni-1 + Ray-3.14 (Luma Labs): A creator frames the “burned CD / mix tape” tradition as a new format: generate frames with Uni-1, then turn them into clips with Ray-3.14 to make a personalized video for a friend, as described in the [mixtape workflow post](t:136|mixtape workflow post) and echoed by DreamLabLA’s reshare. The emphasis is less on a single hero shot and more on assembling a coherent audiovisual “gift artifact.”

AI mixtape montage example

🛠️ Single-tool craft: Claude prompting, Gemini web-gen speed, and creator UX lessons

Today’s practical guidance skewed toward prompting and tool UX—especially Claude prompt structures and a fast Gemini demo that matters for designers prototyping web experiences.

Gemini 3.1 Flash-Lite demos real-time page generation while you browse

Gemini 3.1 Flash-Lite (Google DeepMind): DeepMind demoed a browser experience where pages are generated on the fly—each view assembling in real time as you click, search, and navigate—positioning speed as the core UX feature for web prototyping, per the browser demo. For designers, this is less “make me a site” and more “generate the next screen instantly” as interaction happens.

Real-time website generation demo

The clip shows rapid page construction across multiple navigations, which is the practical point for iterating IA, layout variations, and copy placement without pausing the flow, as demonstrated in the browser demo.

Claude prompting “playbook leak” circulates, with unverified internal-style templates

Claude (Anthropic): A thread claiming a “former Anthropic researcher” leaked an internal Claude prompting playbook is circulating, framing one common mistake as “burning 35% of Claude’s reasoning capacity” and teasing a set of 10 house-style prompt patterns, per the playbook claim thread and related reposts like the reposted leak claim and first principles teaser. The post is highly viral but light on provenance (no primary document attached), yet it is already influencing how creators structure briefs for longer reasoning tasks.

What’s actually actionable: Even without the full list, the shared examples emphasize front-loading constraints, prior attempts, and “where I’m stuck” before asking for output, per the playbook claim thread.

Net: treat it as a pattern library that’s useful even if the “leak” framing stays unverified.

Claude “Situation Brief” prompt: start with context, then attempts, then the stuck point

Claude prompting (pattern): The “Situation Brief” template being shared focuses on one change—don’t start with the ask; start with your situation, what you already tried, and where you’re stuck—alongside a claim of +41% more useful output in internal testing, as written in the Situation Brief prompt and echoed in the playbook claim thread. It’s essentially a structured creative brief for the model.

A copy-ready version from the post: “Here’s my context: [role, company, problem]. Here’s what I’ve already tried: [X, Y]. Here’s where I’m stuck: [Z]. Now help me think through this,” as shown in the Situation Brief prompt.

ComfyUI’s “too technical” barrier weakens as LLMs learn its JSON workflows

ComfyUI (workflow ergonomics): A creator argument gaining traction is that ComfyUI’s node-graph complexity matters less once you treat workflows as text—LLMs are already trained on large numbers of ComfyUI JSON graphs, making it more plausible to “vibecode” custom nodes and even a tailored UI layer on top, as described in the ComfyUI vibecoding take. The claim isn’t that ComfyUI becomes simpler, but that the interface stops being the bottleneck when an LLM can draft/modify the workflow scaffolding.
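The “workflows are just text” point is concrete: a graph exported with ComfyUI’s “Save (API Format)” option is plain JSON, so a script (or an LLM) can edit it and queue it over the local HTTP API. A minimal sketch, assuming a default local ComfyUI on port 8188; the node id "3" mapping to a KSampler is specific to whatever your export produced:

```python
import json
import urllib.request

# Load a workflow exported via ComfyUI's "Save (API Format)".
with open("workflow.json") as f:
    workflow = json.load(f)

# Edit the graph as plain data -- here, re-seeding a sampler node.
# (Node id "3" is whatever id your export assigned to the KSampler.)
workflow["3"]["inputs"]["seed"] = 424242

# Queue the edited graph on a locally running ComfyUI instance.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read())
```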

A Buffett-style Claude prompt template spreads for fast company analysis

Claude prompting (pattern): A role-and-criteria checklist prompt is making the rounds to emulate Warren Buffett-style screening—moat, capital allocation, pricing power, and a 10-year outlook—positioned as a way to compress heavy reading into a repeatable analysis workflow, per the Buffett prompt thread and the example “Business Quality Filter” snippet in the prompt excerpt.

It’s a good fit for creators doing brand/market research for pitches, scripts, or product narratives, because it forces a consistent rubric across companies rather than ad-hoc takes.
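A hypothetical condensation of that rubric pattern (the actual “Business Quality Filter” text lives in the linked thread; only the four criteria come from the summary above):

```
Act as a Buffett-style analyst. For [COMPANY], score 1-10 with one
paragraph of evidence each:
1. Moat: durable competitive advantage
2. Capital allocation: reinvestment, buybacks, dividend discipline
3. Pricing power: can prices rise without losing volume?
4. 10-year outlook: do the economics still hold in a decade?
Finish with buy / watch / pass and the single biggest risk.
```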

“DIY with AI” debate: tool access still doesn’t replace design judgment

AI-assisted design (craft debate): A working designer pushed back on “why pay a designer if AI can do it,” arguing that DIY logo/web/SEO attempts often underperform when stakes are high—even with AI tools—because taste, problem framing, and execution details remain the differentiators, as stated in the DIY vs pro argument, which is contextualized by the preceding harassment-thread screenshot.

It’s less about rejecting AI tools and more about what parts of the pipeline still fail without judgment—brief quality, constraint handling, and consistency across deliverables, per the DIY vs pro argument.


🤖 Character sheets, mechs, and 2D↔3D decisions

A steady stream of production-ready design sheets and 2D vs 3D comparisons—useful for character turnaround planning, modeling handoff, and animation direction.

Uni-1 workflow: consistent PBR maps + triplanar projection to speed 3D lookdev

Uni-1 (Luma Labs): A concrete 2D→3D acceleration workflow is getting shared: use Uni-1 to generate consistent Base Color / Normal / Displacement maps, then combine with triplanar or image projections to keep quality while moving faster, as described and shown in the Workflow description.

PBR maps generation demo

This matters for character/mech production because it turns “cool concept stills” into something closer to a usable material stack—so a modeler/texture artist can iterate on form without rebuilding surface detail from scratch. Luma also noted that if you’re using Luma Agents you may need to explicitly force Uni-1 selection (routing can switch models), per the Uni-1 selection steps.

BAT-ROID UNIT 07 design sheet: front/back/side plus face-unit and articulation callouts

BAT-ROID UNIT 07 (0xInk): A modeling-handoff-friendly character sheet drops with front/back/side views, a detail panel for the masked vs unmasked face unit, and labeled mechanical callouts (cape attachment, joints, emblem, feet), as shown in the Design sheet post.

For AI-assisted pipelines, this kind of sheet is also a strong “style bible” reference to keep iterations consistent across image/video generations, because it anchors silhouette, palette, and repeatable part names in one frame.

Type-7 Mech Unit blueprint: dimensions, component labels, and internal frame inset

Type-7 Mech Unit (0xInk): A technical blueprint-style mech sheet is shared with explicit dimensions and component labels (sensor head array, armor plating, core vent, pistons, cockpit access, articulated knee, multi-jointed hand), per the Blueprint post.

This format is directly reusable as a reference image for AI concept variants while still staying “buildable,” because the key constraints (proportions, joint logic, part taxonomy) are already pinned down.

Noodle-saucer cargo mech sheet: action poses, expressions, and cockpit callouts

Unit 01 “KIRU-KOSI” cargo mech (0xInk): A character sheet for a saucer-torso biped mech includes cockpit/sensor close-ups, action poses (including a “serving noodles” gag), and expression variants, as captured in the Character sheet post.

As a practical reference for AI creators, the “expressions + action” paneling is a useful pattern: it defines what the character does and how it “acts” on-camera, not only how it looks in a neutral turn.

Unit A-03 close-up: labeling, plating, and surface-detail reference

Unit A-03 (0xInk): A tight, high-detail mecha close-up gets posted that’s rich in surface cues (warning labels, panel seams, exposed neck mechanics, sensor glow), as seen in the Unit A-03 post.

For AI-driven design iteration, close crops like this are especially useful as reference inputs when you want the next gen to inherit believable “manufactured” texture and readable greeble density without re-describing every micro-detail in text.


📣 Marketing creatives: pitch decks, brand kits, and AI-native ads

Design + marketing automation showed up as the practical use case: deck generation, brand-consistent deliverables, and ad formats optimized for attention rather than realism.

Kimi Slides auto-generates investor decks from notes or uploaded files

Kimi Slides (Moonshot/Kimi): Creators are circulating Kimi Slides as a deck generator that turns “messy notes” into an investor-ready, editable presentation fast—claiming full decks in under 60 seconds and “5 minutes” to polish, with export to PPT/images highlighted in the [feature rundown](t:48|Feature rundown) and the [capabilities clip](t:283|Capabilities demo).

Capabilities demo

File-to-deck workflow: Import a PDF/report and have it auto-structured into slides, as shown in the [import claim](t:48|Import claim) and demoed in the [file import clip](t:284|File import demo).
Prompt starters that map to real deliverables: The thread includes copyable “consulting deck” asks like “McKinsey-style market entry analysis for EVs” and “20-slide investor deck for a SaaS startup,” as listed in the [prompt list](t:259|Prompt list).

The only hard detail about where to try it is the link shared in the [try it link](t:256|Try it link), which points to the Product page.

Brand import decks: URL in, colors/type/logo auto-applied

Brand import (URL-to-style system): One concrete workflow described is pasting a company website URL and having the system pull colors, typography, and logo automatically so generated slides/assets start on-brand, according to the [brand import explainer](t:358|Brand import explainer).

Brand import from URL

This is positioned as removing manual style-guide setup and back-and-forth for marketing teams, but the thread doesn’t spell out edge cases (multi-brand sites, subdomains, brand refresh drift) beyond the single demo.

Multi-agent “vibe design” decks pitch: one session, many specialized agents

Vibe design (agentic deck-building pattern): A thread frames a new deck-building loop where five specialized agents run in parallel (layout, typography, color, content, structure) to produce a polished pitch deck in a single session, per the [multi-agent claim](t:357|Multi-agent claim) and the [main teaser](t:172|Vibe design teaser).

Vibe design teaser

Why marketers notice: The argument is that decks have had the same bottleneck as code—slow, expensive, agency-heavy—then agents compress the turnaround, as laid out in the [bottleneck framing](t:356|Bottleneck framing).
Adoption signal (still promotional): The thread asserts $7.5M raised and usage by “Google, McKinsey, Stanford,” per the [funding + logos claim](t:361|Funding and logos claim), but it doesn’t include independent benchmarks or before/after comparisons beyond the demo clips.

Prospect-personalized pitch decks: auto-research target brand and rebuild slides

Prospect-personalized decks (sales collateral tactic): The thread describes a personalization loop where you specify a target (example: “pitch Nike”), then the system researches the prospect’s brand and rebuilds slides to match their identity—framed as “10 prospects = 10 fully branded decks,” per the [prospect targeting claim](t:359|Prospect targeting claim).

There’s no attached artifact showing how the “research Nike” step is done (sources, brand rules, legal use of logos), so treat it as a workflow promise rather than a fully evidenced capability in the tweets.

Supplement ads shift toward one animated mascot and infinite script variations

AI-native mascot ads (performance creative pattern): A creator points to supplement brands allegedly doing $100k+/month using a simple animated character instead of UGC/influencers—one consistent identity, then endless variations in hooks/scripts and claims, as described in the [ad pattern thread](t:100|Ad pattern thread).

Cartoon supplement ad style

The creative logic presented is “lower friction, less skepticism, instant attention,” with AI making iteration cheap enough to keep the mascot but swap messaging continuously.


🎧 AI music + sound experiments (Suno-era shortform scoring)

Light but present audio activity: creators posting AI-generated tracks and audio-first prompts, often paired with generative visuals for shortform pieces.

Freepik Spaces music-video workflow: lipsync-first, then Kling animation fills

Freepik Spaces (music-video workflow): Following up on Freepik stack (lipsync + multi-tool music video pipeline), a creator shared a more explicit end-to-end build: generating a song, then building a consistent-character music video inside a single Freepik “Space,” as described in the workflow claim and reinforced by the “whole Space + prompts” share in the space share.

Music video result

Lipsync options named: The thread calls out OmniHuman 1.5 for camera moves/facial expressions and Veed Fabric 1.0 as another lipsync path, per the tool callouts in the space share.
Frame → clip pattern: It describes generating frames (Nano Banana 2/Pro mentioned) and then animating clips with Kling 3.0, per the step list in the same space share.

Access is distributed via an invite link embedded in the Freepik Space invite, while the public-facing proof of output is the clip shown in the workflow claim.

Life: Part 1. Morning pairs Grok Imagine visuals with a Suno track

Life: Part 1. Morning (Suno + Grok Imagine): A short audiovisual piece titled “Life: Part 1. Morning” was posted as an example of a minimal stack—visuals credited to Grok Imagine and music credited to Suno, per the creator note in the audiovisual post and the tool attribution follow-up in the tools credited post. It’s a clean illustration of using one music gen plus one visual gen as a repeatable “episode” format (this one labeled Part 1).

Audiovisual clip excerpt

The post doesn’t include a prompt or stem details, but it’s a concrete signal that creators are treating AI music as a serial drop format rather than a one-off soundtrack, as implied by the “Part 1” framing in the audiovisual post.

A detailed beatbox-only prompt shows how people are steering music models

Audio prompt craft: A full text spec for an a cappella beatbox performance was posted as a generation-style input—dry/intimate recording, no melodic instruments, complex syncopation, and a steady tempo of “approximately 110 BPM,” as written in the beatbox prompt text.

Beatbox-style reference clip

This is the kind of prompt that tends to transfer across music tools because it describes arrangement constraints (kick/snare/hi-hat, fills, no pitched vocals) instead of naming a single model’s feature set, as shown by the structure and constraint list in the beatbox prompt text.
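The shared spec isn’t reproduced here, but a hypothetical prompt in the same shape, using only the constraints described above, would read:

```
A cappella beatbox performance, solo performer, dry and intimate
close-mic recording. No melodic instruments, no pitched vocals.
Arrangement: kick, snare, and hi-hat patterns with occasional fills,
complex syncopation, steady tempo at approximately 110 BPM.
```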


💻 Local-first creator stack: hf-mount, ComfyUI simplification, and small-model chatter

Creator infrastructure chatter centered on local workflows: mounting huge remote storage locally, simplifying ComfyUI, and practical model formats for constrained VRAM.

Hugging Face ships hf-mount for “remote disk” workflows

hf-mount (Hugging Face): hf-mount was introduced as a way to attach a Hugging Face Storage Bucket, model, or dataset to your machine as a local filesystem—positioned as “remote storage 100× bigger than your local disk,” with read-write mounts for buckets and read-only mounts for models/datasets, as described in the Feature rundown.

Agent-friendly storage: The pitch explicitly calls out “agentic storage” use cases, where persistent filesystems double as long-lived memory/state for local or sandboxed agents, as framed in the Local AI framing and the Feature rundown.
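
Once mounted, the point is that ordinary file APIs just work—no SDK calls in the hot path. A minimal sketch of the agent-memory pattern, assuming a bucket already mounted read-write at ~/hf-bucket (the path is our placeholder; check hf-mount’s docs for actual mount syntax):

```python
# Sketch: treating a mounted HF Storage Bucket as durable agent state.
# Assumes the bucket is already mounted read-write at ~/hf-bucket;
# the path is illustrative, not hf-mount's documented default.
import json
import time
from pathlib import Path

state_dir = Path.home() / "hf-bucket" / "agent-memory"
state_dir.mkdir(parents=True, exist_ok=True)

# Append a memory entry exactly as you would on local disk;
# the mount makes it remote, persistent storage.
entry = {"ts": time.time(), "note": "finished storyboard pass 3"}
with (state_dir / "log.jsonl").open("a") as f:
    f.write(json.dumps(entry) + "\n")

# Read it back with plain file I/O.
for line in (state_dir / "log.jsonl").read_text().splitlines():
    print(json.loads(line))
```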

ComfyUI’s new “APP” feature lowers the node-graph barrier

ComfyUI (local image gen): A Turkish creator highlights a new, simpler “APP” feature in ComfyUI, pitched at people who avoided it due to the node graph; the same walkthrough pairs it with Z-Image as a “fastest local” option for unlimited/free image generation workflows, per the Workflow walkthrough and follow-up Video link post.

What changes for creatives: The claim is that you can stay in a more app-like surface for common tasks (instead of building graphs), then still drop down to nodes when you need control, as summarized in the Workflow walkthrough.

Accomplish AI pitches a local-first agent runtime

Accomplish AI (open source): accomplish_ai is shared as a local-first agent setup that doesn’t require API keys; it’s framed as running fully on your computer (files stay on-device) while supporting scheduled tasks and controlling desktop apps (not only a browser), according to the Local agent rundown.

Why it’s notable: It’s positioned as a practical alternative to hosted “computer use” agents when privacy and offline-ish operation matter, based on the Local agent rundown.

Nemotron-Cascade-2 trends as “efficient reasoning” gets attention

Nemotron-Cascade-2 (NVIDIA ecosystem): A community signal points to Nemotron-Cascade-2-30B-A3B reaching #1 trending on Hugging Face, framed as part of an “ultra-efficient reasoning” moment, according to the Trending claim; adjacent chatter thanks the community for testing Nemotron variants in the Community note.

Creator relevance: The takeaway being circulated is that smaller/efficient reasoning models are getting real mindshare as local inference becomes a default part of production tooling, per the Trending claim.

Using Claude to “vibecode” ComfyUI workflows

ComfyUI workflow pattern: A local-first claim making the rounds is that LLMs (Claude is specifically called out) are already trained on the ecosystem’s JSON workflows, making it easier to generate or modify pipelines and even prototype custom nodes/UIs by prompt—so “ComfyUI being too technical won’t be a concern,” as argued in the Vibecode ComfyUI take.

What to watch: This shifts the pain point from learning node wiring to prompting and validating a workflow spec; the tweet’s core bet is that extensibility wins once the UI is LLM-shaped, per the Vibecode ComfyUI take.
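
The validating half of that loop is mechanical enough to automate. A minimal sketch that sanity-checks an LLM-generated workflow before loading it, assuming ComfyUI’s API-format JSON shape (node IDs mapping to class_type + inputs, with two-element [source_node_id, output_index] lists as links)—treat the shape as an assumption and verify against your own exports:

```python
# Sketch: sanity-check an LLM-generated ComfyUI API-format workflow.
# Assumed shape: {node_id: {"class_type": ..., "inputs": {...}}}, where
# node-to-node links are [source_node_id, output_index] two-element lists.
import json

def check_workflow(raw: str) -> list[str]:
    graph = json.loads(raw)
    problems = []
    for node_id, node in graph.items():
        if "class_type" not in node:
            problems.append(f"node {node_id}: missing class_type")
        for name, value in node.get("inputs", {}).items():
            # Two-element lists are links; confirm the source node exists.
            if isinstance(value, list) and len(value) == 2:
                src = value[0]
                if str(src) not in graph:
                    problems.append(
                        f"node {node_id}: input '{name}' points at missing node {src}"
                    )
    return problems

# Usage: reject the LLM's output and re-prompt if any problems come back.
# print(check_workflow(open("workflow_api.json").read()))
```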

8Q GGUF as a “fits under 48GB” default

GGUF quantization (local LLMs): A practitioner tip suggests that if you have <48GB VRAM, trying an “8q gguf” (8-bit GGUF quantization, typically Q8_0) build can deliver a surprisingly “high-end” feel on constrained hardware—summed up as “Feels just like opus,” per the VRAM quantization tip.

Practical implication: This reinforces the pattern that format/quant choice (not just model choice) is often the decisive lever for local creative stacks—see the VRAM quantization tip framing.
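
A minimal sketch of trying the tip with llama-cpp-python—the model path, context size, and layer count below are placeholders; pick any Q8_0 GGUF that fits your card:

```python
# Sketch: load an 8-bit (Q8_0) GGUF build and offload layers to the GPU.
# pip install llama-cpp-python; the model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model.Q8_0.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload everything that fits; lower this if you OOM
    n_ctx=8192,       # context window; trade down to save VRAM
)

out = llm(
    "Write a one-line logline for a short AI film about a lighthouse.",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```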


🏁 What shipped: films, trailers, art editions, and creator releases

Finished work and public drops: festival wins, micro-series releases, new trailers, and edition sales—useful for spotting emerging formats and distribution venues.

AI-made mockumentary series Homo Geminus releases on Kweeks

Homo Geminus (Kweeks): Creator Jae Kingsley says their mockumentary show Homo Geminus is now live on the Kweeks micro-series platform, framing it as a “no experience, no resources” project enabled by AI—see the release note in Kweeks release post.

The practical signal for filmmakers is distribution: Kweeks is acting like a native home for short, episodic AI-first series, not just one-off clips, as implied by the “released on the platform” framing in Kweeks release post.

Claire Silver opens Mary’s Room editions as Basel begins

Mary’s Room (Claire Silver): Following up on Basel editions (Basel availability + structure), Claire Silver says editions are “now available” as Basel starts, as announced in Editions availability.

Mary’s Room edition preview

Pricing context is visible on the collection page: the OpenSea listing shows a 1.00 ETH floor price for the “Mary’s Room Editions,” as summarized in Collection page.

Lacrimosa trailer showcases Uni-1 keyframe workflow

Lacrimosa (DreamLabLA): DreamLabLA spotlights a trailer called Lacrimosa by content artist Bryan Soegondo, noting it began as a 3D concept and that keyframes were made with Uni-1 from Luma—per the project credit in Trailer credit.

Lacrimosa trailer cut

This is a clean “proof of pipeline” release: Uni-1 is being credited as the keyframe stage for a finished trailer, not just stills or tests, as stated in Trailer credit.

A personalized AI “mix tape” format emerges: Uni-1 frames to Ray-3.14 clips

AI “mix tape” (Luma Labs): A DreamLabLA creator describes making a personalized “next gen mix tape” as an AI video gift, explicitly naming Uni-1 for frames and Ray-3.14 for clips—see the tool breakdown in Toolchain credit.

AI mixtape montage

What’s novel here is the format claim (giftable, personal, short) paired with a repeatable stack—Uni-1 for consistent still frames, then Ray for motion—using the same recipe described in Toolchain credit.

JunieLauX takes 1st place in ALCHEMIST at the [esc] Awards 2026

[esc] Awards (Escape AI Media): JunieLauX reports winning 1st place in the ALCHEMIST category and also mentions three nominations, positioning the awards as an “AI film world” prestige signal—details are in Awards win note.

The bigger takeaway is ecosystem validation: the post explicitly frames the awards as a peer-recognition venue with industry-facing judges (including a nod to John Gaeta), as described in Awards thanks note.

Escape AI Media shares Terminal Hallucination music video drop

Terminal Hallucination (Escape AI Media): Escape AI Media shares TERMINAL HALLUCINATION, positioning it as a “music video for the senses” by BLVCKLIGHTai, per the share in Music video share.

Terminal Hallucination clip

For AI filmmakers, this is another example of release packaging: short-form music video with a strong title card + aesthetic lock, as visible in Music video share.

Life: Part 1. Morning posts as a Grok + Suno + Imagine drop

Life: Part 1. Morning (Grok + Suno + Imagine): Bennash posts an audiovisual piece titled “Life: Part 1. Morning,” later crediting the stack as Imagine + Grok + Suno, as stated in Life Part 1 post and clarified in Tool credit line.

Life Part 1 video

This is a compact “tool-stamped” release pattern: the work ships with explicit attribution to the generative toolchain, making it easier to track which stacks are being used for finished music-video-like drops, per Tool credit line.


📅 Deadlines, meetups, and credit drops for creators

Time-sensitive creator opportunities today: festival submissions, meetups, and credit pools—plus community calls tied to major creator events.

Runway opens AI Festival 2026 submissions (deadline differs by source)

AI Festival 2026 (Runway): Runway says submissions are open “until April 20” in the submissions announcement, but the official festival page lists a March 31 deadline and a 3–15 minute length requirement, as detailed on the festival site.

AIF 2026 call for entries

What they’re taking: Works using AI across Film, Design, New Media, Fashion, Advertising, and Gaming are explicitly called out in the submissions announcement, with finalists shown at NYC and LA gala screenings per the festival site.

The practical takeaway is that you should verify the cutoff on the site before planning a submission schedule, since the tweet copy and the site copy conflict.

Workshop launches with a $250k Gemini credits giveaway

Workshop (Launch promo): Workshop is being introduced as “cloud + on-device agentic AI,” with a stated giveaway pool of $250,000 in Gemini credits, as announced in the giveaway mention.

The post doesn’t include eligibility mechanics in the captured text, so the only hard details available here are the product positioning (“cloud + on-device”) and the total credit amount, both of which are in the giveaway mention.

Google I/O team asks creators to submit AI Studio apps for a showcase

Google I/O (AI Studio): A Google I/O-associated account is soliciting community submissions—“reply with your AI Studio app” plus a one-sentence story explaining how/why you built it—per the submission call.

This is effectively a lightweight intake for potential featuring around Google I/O, with the only stated requirement being an existing AI Studio app and a short build narrative as described in the submission call.

Hailuo schedules a London commercial meetup for ad creatives (Mar 27)

Hailuo AI (Meetup): Hailuo is running an “AI Commercial Meetup in London” on Friday, March 27 from 6–8PM GMT—positioned as an in-person evening for agency creatives, brand marketers, and AI builders, according to the event details.

Format: The agenda in the event details includes a fireside chat (“Is AI the Future of Advertising?”) plus an “AI Commercial Workshop,” with requests-to-join handled via their signup link in the same post.

This is framed as an ad-workflow networking + workshop night, not a product demo stream.

Curious Refuge opens waitlist for an all-access GenAI course membership

Curious Refuge (Education): Curious Refuge says it’s launching an “all-access membership” for its GenAI courses, explicitly framed around skills, connections, and community, with a waitlist open now per the membership waitlist.

No pricing or launch date is stated in the tweet; the time-sensitive element is the waitlist call itself as described in the membership waitlist.

Pictory promotes Pictory 2.0 alongside a March 25 AI-video webinar

Pictory 2.0 (Pictory): Pictory is promoting “Pictory 2.0” as an all-in-one workflow (Central, AI avatars, GenAI, brand kits, new timeline, script generator) in the product post, and the associated thread context references a live webinar scheduled for March 25, 11 AM PST.

The only clickable destination provided in the post is the signup page, which is positioned as the on-ramp for trying the product and (by implication) finding the webinar registration from the broader campaign.


📚 Research drops creatives will feel soon (video, vision-language, world evals)

Mostly papers + benchmarks relevant to generative media and agent reliability: fast audio-video generation architectures, spatial reasoning LVLMs, and evaluation suites for world models and formal reasoning.

daVinci-MagiHuman proposes a fast single-stream audio-video foundation model

daVinci-MagiHuman (GAIR/SII-GAIR): A new “Speed by Simplicity” release pitches a single-stream transformer that generates audio + video together (no multi-stream cross-attention), with code/model/demo surfaced in the paper + model links and expanded details on architecture + speed claims in the paper page.

Generated digital-human demo clips

Why creatives will feel it: The framing is “human-centric” (talking-head / digital-human style outputs) with emphasis on sync and latency, which is the practical blocker for dialogue-heavy shorts, music-video performance clips, and realtime-ish previs workflows, as described in the model page.
Concrete speed signal: The model page claims generation on a single H100 at roughly 2s for a 5s clip @ 256p and ~38s for 1080p, plus multilingual support (see the quick real-time-factor arithmetic below).

The tweets don’t include an independent benchmark artifact, so treat “SOTA” comparisons as provisional until you can reproduce runs from the repo/model card.
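
For a sense of scale, the claims translate to simple real-time factors—assuming the 1080p figure is also for a 5-second clip, which the tweets don’t state:

```python
# Back-of-envelope real-time factors from the model card's claims.
# Assumption: the 38s 1080p figure is also for a 5-second clip.
clip_seconds = 5.0
wall_256p, wall_1080p = 2.0, 38.0  # claimed generation times (single H100)

print(f"256p:  {clip_seconds / wall_256p:.2f}x real time")   # 2.50x
print(f"1080p: {clip_seconds / wall_1080p:.2f}x real time")  # ~0.13x
```

In other words, 256p lands comfortably faster than real time, while 1080p still costs about 7.6 seconds of wall-clock per second of output—fast for offline renders, not realtime.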

Omni-WorldBench shifts world-model evals toward interaction and causality

Omni-WorldBench (paper): A new benchmark argues current world-model evals overweight visual fidelity and underweight interaction effects; it introduces Omni-WorldSuite prompts plus an agent-based Omni-Metrics framework to score temporal dynamics and causal response, per the paper screenshot and the paper page.

Why this matters for video creators: If you care about “camera/object control” and edits that respect physics/intent across time (not just pretty frames), interaction-centric eval pressure tends to move models toward the behaviors needed for interactive storyboards, game-cinematics, and controlled previz.
Useful framing: The benchmark explicitly separates “looks good” from “responds correctly,” which is a clearer north star for tools that want shot-to-shot continuity and action-conditioned outcomes.

The paper page notes evaluation across 18 models, but the tweets don’t list model-by-model rankings—so the immediate value is the eval design, not a leaderboard.

Perceptio adds explicit depth and segmentation tokens to LVLM reasoning

Perceptio (paper): A new LVLM approach has the model emit spatial tokens first (SAM2-style segmentation tokens plus VQ-VAE depth tokens) and then answer; the abstract and training recipe are shown in the paper highlight, with the full writeup in the paper page.

What it’s trying to fix: Many vision-language models can describe an image but struggle with “where exactly?” tasks; Perceptio’s bet is that making 2D/3D structure explicit in the generation stream improves grounding, per the paper highlight.
Creator-facing downstream: Better spatial grounding tends to show up later as more reliable object/region selection for editing pipelines (masking, relighting, layout-aware compositing) and more trustworthy “point to the thing” interactions in creative tools.

The tweet includes benchmark deltas in the abstract, but there’s no demo of a creative product surface yet—this reads as a near-term ingredient for tool builders.

Generalized discrete diffusion from snapshots shows a path to faster 3D recovery

Generalized discrete diffusion (paper): A short clip shared with the paper title suggests a diffusion approach that reconstructs/denoises 3D structure from “snapshots,” as previewed in the paper clip.

Rotating point-cloud diffusion demo

For creative pipelines, the core promise would be quicker 3D asset blocking (point clouds → meshes/materials later) from sparse captures or partial scans; the tweet itself doesn’t include a link, metrics, or a released implementation, so this is mostly an early visual signal rather than a shippable workflow.

LongCat-Flash-Prover trains tool-using RL for Lean4 formal reasoning

LongCat-Flash-Prover (paper): A 560B-parameter MoE model targets native formal reasoning in Lean4 using agentic tool-integrated reasoning and a stabilization method called HisPO, with headline benchmark numbers summarized in the paper screenshot and detailed in the paper page.

Why creatives might care (indirectly): Formal reasoning improvements tend to surface as more reliable “prove constraints / validate logic” components inside agent pipelines—e.g., checking rule consistency in interactive fiction, game logic, or complex branching narratives before rendering/production.
Notable numbers: The paper page claims 97.1% pass@72 on MiniF2F-Test and strong results on ProverBench/PutnamBench.

This is research-first; no creator tool integration is mentioned in the tweets.
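
For readers who haven’t seen the target format, here is a toy Lean 4 statement of the kind such provers are asked to close—this example is ours, not drawn from MiniF2F or the paper:

```lean
-- Toy illustration of the Lean 4 format: a statement after the colon,
-- plus a machine-checkable proof term. Ours, not from the benchmarks.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```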

NVIDIA releases a pipeline for 196K temporally grounded video MCQs

Long Grounded Thoughts (NVIDIA): NVIDIA shared a pipeline to generate 196K temporally grounded video multiple-choice questions, positioned as infrastructure for evaluating video understanding over time, per the release mention.

The tweet doesn’t include the dataset card, examples, or a demo artifact in-line, but the direction is clear: more standardized long-horizon video Q/A data usually becomes a downstream lever for better story-relevant video understanding (continuity, causality, and “what changed between frames?”) in the models creatives end up using.


🛡️ Provenance, “AI-free” claims, and creator credit norms

Discourse today centered on what “AI-free” even means in modern pipelines, plus norms like watermarking/attribution and escalating creator-vs-anti-AI conflict.

Del Toro “AI-free” rhetoric collides with AI-augmented VFX pipeline reality

AI-free claims (film pipelines): A new round of debate hit after a widely shared recap contrasted Guillermo del Toro’s public “F*ck AI” stance with industry reporting that Frankenstein’s pipeline may still have used AI-augmented compositing / production design tooling, raising the practical question of what “AI-free” can mean once mainstream software bakes in AI features, as framed in the Del Toro contradiction post and expanded in the Blog breakdown.

The thread’s creator-facing implication is definitional, not moral: teams may need to distinguish “no generative AI” from “no AI-assisted features” (roto/denoise/matchmove-style assists), because the latter is increasingly hard to assert without auditing every step of the toolchain.

“License your face” economics resurface with a $1M likeness claim

Likeness licensing (synthetic actors): A thread claims a 22-year-old bartender was paid $1,000,000 for their likeness, framing it as evidence that AI-generated film economics are already here (“no camera, no script”)—see the Likeness breakdown and the companion clip in the Face product claim.

Likeness economics montage

Even if treated as promotional without contract details, the creator-relevant piece is the norm-setting: payment + consent is being positioned as the clean path for synthetic performances, rather than scraping or “AI-free” absolutism.

Watermarks reframed as low-effort provenance friction, not perfect protection

Watermarking norms (creator credit): A small but sticky argument is spreading that watermarks don’t need to be unremovable to be useful; they mainly add friction and reduce lazy repost-theft, effectively forcing would-be thieves to do extra work before passing something off as theirs, per the Watermark take.

The practical angle for creatives is that watermarking is being positioned less as DRM and more as a lightweight provenance cue (and a social signal) in feeds where attribution is otherwise optional.
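
In that spirit, the friction really can be a few lines of Pillow—visible, strippable with effort, but enough to stop zero-effort reposts (filenames, handle, and placement below are placeholders):

```python
# Sketch: a deliberately low-effort visible watermark with Pillow.
# Not DRM—just enough friction that reposting takes extra work.
from PIL import Image, ImageDraw, ImageFont

img = Image.open("frame.png").convert("RGBA")        # placeholder input
overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)

w, h = img.size
draw.text(
    (max(0, w - 160), max(0, h - 30)),               # bottom-right corner
    "@your_handle",                                  # placeholder credit
    font=ImageFont.load_default(),
    fill=(255, 255, 255, 128),                       # semi-transparent white
)

Image.alpha_composite(img, overlay).convert("RGB").save("frame_marked.jpg")
```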

Creator backlash discourse hardens around harassment and “AI slop” framing

Creator backlash dynamics: Posts attacking “people who hate AI” as uncreative harassment drivers are pulling engagement, as shown in the Anti-AI callout. Follow-on threads illustrate the recurring pattern: detractors dismiss outputs as “AI slop,” and creators respond that tool access doesn’t equal taste or results, as captured in the DIY vs pro reply screenshot.

The signal here is social: the dispute is less about model capability and more about credit, legitimacy, and who gets to claim “real” craft in public creator spaces.

