ChatGPT Atlas ships on macOS – 3 platforms coming soon, no dates

Executive Summary

OpenAI just turned the browser into an agent console. ChatGPT Atlas lands today on macOS and reframes web work as delegating tasks, not juggling tabs. Windows, iOS, and Android are “coming soon,” but without dates; creators estimate 90–95% of potential users are waiting, which blunts day‑one impact even as the direction feels right.

Inside Atlas, the address bar becomes a single intent box—“Ask ChatGPT or type a URL”—with an Agent mode and tab‑level voice and text commands. Early demos show end‑to‑end actions like “Complete Air France booking,” plus calendar and travel‑history reviews, and memory‑aware suggestions that follow you across sessions. If reliability holds, that means research, rights clearances, and asset gathering can run in the background while you steer outcomes, not clicks.

The comparison started immediately: Perplexity’s Comet targets the same “talk to the web” use case, and voice‑first, engine‑agnostic approaches argue workflow design may matter more than model branding. In parallel, Google’s Vibe Coding quietly lit up in AI Studio with free access and tiles for Veo, Search, and Maps, underscoring how agentic browsing and prompt‑to‑production tooling are rapidly converging.

Feature Spotlight

ChatGPT Atlas: AI browser for agentic web work

OpenAI launches ChatGPT Atlas on macOS: an AI browser that ‘talks to the web’ with agent shortcuts, voice/text tab commands, and memory—Windows/iOS/Android ‘coming soon’. Creators eye faster research, bookings, and production ops.


🌐 ChatGPT Atlas: AI browser for agentic web work

Cross‑account story: OpenAI unveiled Atlas, an AI‑native browser for agentic browsing and tab control. Multiple tweets show the livestream card, macOS‑only availability at launch, screenshots, and early creator reactions.

OpenAI launches ChatGPT Atlas, an AI‑native browser, on macOS; other platforms “coming soon”

OpenAI unveiled ChatGPT Atlas, positioning it as a browser built around agentic web work. It’s available now on macOS, with Windows, iOS and Android promised next but without an ETA Livestream replay. A pre‑event card shows the speaker lineup and the “Introducing ChatGPT Atlas” branding Announcement screenshot; one creator summarized the launch as a new browser with sidebar assistance, automation, and memory features Feature rundown.

Livestream card

For AI creatives, the key promise is turning routine web tasks into delegated actions directly in the browser rather than copy‑pasting across apps.

Atlas showcases agentic browsing: “Ask ChatGPT or type a URL,” agent mode, and tab voice/text commands

A first look at the Atlas UI reveals a unified entry point (“Ask ChatGPT or type a URL”) and task shortcuts like “Agent mode,” “Complete Air France booking,” and calendar/travel history reviews, hinting at end‑to‑end automations inside tabs Browser UI. Early reactions describe this as “talking to the web,” reframing browsing from clicking to delegating Creator takeaway. A creator rundown also highlights real‑time suggestions and memory‑aware assistance layered into navigation Feature rundown.

Atlas UI screenshot

This matters for story‑driven workflows—research, scheduling, rights clearances, and asset gathering—where agents can execute micro‑tasks while you focus on creative intent.

Mac‑only rollout draws creator pushback; “90–95% of users excluded” at launch

While Atlas is live on macOS, creators criticized the staggered release, estimating that 90–95% of the user base is effectively left waiting while Windows and mobile remain “coming soon” Mac only take. The official stream confirmed macOS‑first availability without concrete dates for other platforms Livestream replay.

Announcement card

For teams working cross‑OS in post and production, the delay complicates standardizing on Atlas for agent workflows until parity arrives.

Creators weigh Atlas against Perplexity Comet for agentic browsing

The community quickly framed the day as a head‑to‑head between OpenAI’s Atlas and Perplexity’s Comet, with creators openly asking which to adopt Choice question and flagging a “big test” for Comet Comet test note. Others noted voice‑first, agentic browsing experiences that aim to work across engines, suggesting that workflow design may matter more than the specific model vendor Voice browser demo. For creatives, the evaluation axis is pragmatic: tab control, memory, and reliability over raw chat quality.

Pre‑event confusion over “Aura” name gives way to Atlas branding at launch

Ahead of the stream, creators questioned whether the browser would be “Aura,” reflecting earlier rumors and naming tests Pre‑event question. The livestream established ChatGPT Atlas as the shipping brand and detailed its macOS availability Livestream replay.

Announcement card

Naming churn aside, the final positioning centers on an AI‑first browser for automated, context‑aware web tasks.


🎛️ Runway’s node Workflows and model fine‑tuning

Runway adds node‑based Workflows and opens an enterprise fine‑tuning pilot. This is new vs yesterday’s general buzz: today emphasizes chaining models/modalities and a pilot form for customized video models. Excludes Atlas (feature).

Runway opens model fine-tuning pilot for custom video generators

Runway announced a Model Fine-tuning pilot so teams can adapt its generative video models to their own data and use cases, pitching lower compute and data requirements and a fit for verticals spanning entertainment, robotics, education, life sciences, and brands. The enterprise interest form is live Announcement thread Runway product page.

Fine-tuning page graphic

Runway frames this as a self-serve path to new inputs/outputs and tighter style/identity control, with access gated to pilot participants ahead of a broader rollout. A follow-up post reiterates sign-up availability for early partners Follow-up link.

Runway unveils node Workflows to chain models and modalities

Runway introduced a node-based Workflows editor that lets creatives chain multiple models, modalities, and intermediary steps into custom tools and generation pipelines. It’s in early access for Creative Partners and Enterprise, with wider availability "coming soon." Release thread Early access note

Workflows early access shot

Designed for precision control over multi-stage setups (e.g., reference handling, text/image/video nodes, post steps) inside one canvas, the feature targets tighter iteration loops for ads, trailers, and narrative beats. Creators highlighted the ability to build their own reusable tooling directly inside Runway as the key shift. Release thread


🎬 Vidu Q2: ref‑to‑video, 5‑minute extends, cheap clips

Vidu Q2 rolls out Reference‑to‑Video, timeline extends, and broader distribution. New details include 5‑minute total extend and ~3¢/2s pricing on Runware. Excludes Atlas (feature).

Vidu Q2 ships Reference‑to‑Video and 5‑minute Video Extend

Vidu Q2 rolls out Reference‑to‑Video with promises of better character consistency and speed, plus a new Video Extend workflow that lets you grow a take segment‑by‑segment up to a 5‑minute total. The extend UI shows granular segment control (e.g., 7s blocks), aligning with creator timelines and long‑take storytelling Launch post.

Video extend UI

For filmmakers and ad teams, this pairs identity‑faithful R2V with practical runtime control, reducing re‑gens when you need to lengthen a shot mid‑edit Feature images.

Runware adds Vidu Q2 with ~$0.03 per 2s 360p clips

Runware launches Vidu Q2 with entry pricing from roughly 3 cents per 2‑second clip at 360p on both turbo and pro modes, widening access for cheap iteration and API workflows Runware launch. Developers can spin up tests directly from the catalog and scale via API as needed Runware models.
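
If you’d rather script those tests than click through the catalog, here’s a hedged request sketch in the style of Runware’s task-array JSON API. The endpoint, `videoInference` task type, model id, and field names below are illustrative assumptions, not confirmed parameters; check Runware’s docs for the real schema.

```typescript
// Hypothetical sketch of queueing a cheap Vidu Q2 test clip via Runware's
// task-style HTTP API. Task type, model id, and field names are assumptions
// for illustration only; consult Runware's documentation for the real schema.
async function generateTestClip(apiKey: string): Promise<unknown> {
  const res = await fetch("https://api.runware.ai/v1", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify([
      {
        taskType: "videoInference",      // assumed task name for video models
        taskUUID: crypto.randomUUID(),
        model: "vidu:q2-turbo",          // placeholder model id
        positivePrompt: "slow dolly-in on a rain-soaked neon street at night",
        duration: 2,                     // seconds; ~3 cents per 2s at 360p
        width: 640,                      // 360p frame, assumed 16:9
        height: 360,
      },
    ]),
  });
  if (!res.ok) throw new Error(`Runware request failed: ${res.status}`);
  return res.json();
}
```

At the quoted entry price, iteration stays cheap: thirty 2‑second 360p drafts run roughly $0.90 before you commit a winner to a higher tier.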

Vidu’s #ViduQ2OnStage challenge opens with credits and cash prizes

The #ViduQ2OnStage campaign invites creators to submit Q2‑powered clips for a chance at platform credits and cash rewards, aiming to spotlight cinematic motion and multi‑shot storytelling with the new tools Contest page.

Contest poster

Showcase threads already highlight Q2’s R2V and start/end workflows for dynamic, ad‑style pieces, signaling early creative traction Creator showcase.


🧑‍💻 Google AI Studio’s Vibe Coding goes live

Google DeepMind’s Vibe Coding experience is now visible in AI Studio with a “Build your ideas with Gemini” flow and Build‑area tile. Posts highlight free access and a push to unify dev tools. Excludes Atlas (feature).

Vibe Coding is live in Google AI Studio, with free access

Google’s AI-first Vibe Coding experience has appeared in AI Studio’s Build area and is accessible at no cost, following up on tease. Screenshots show the "Build your ideas with Gemini" flow and a visible "Vibe code GenAI enabled apps" tile, confirming availability today Build screenshots, with posts also calling out that it’s free Free launch claim and sharing the direct entry point Build tile card AI Studio sign-in.

AI Studio build page

Creative tiles spotlight Veo animation, video gen, and Google data hooks

Early UI captures highlight creator-forward presets inside Vibe Coding’s Build grid, including “Animate images with Veo,” “Prompt based video generation,” “Generate images with a prompt,” and data hookups like “Use Google Search data” and “Use Google Maps data,” giving filmmakers and designers fast paths from idea to working prototypes Feature grid. This aligns with Google’s positioning of Vibe Coding as a prompt-to-production accelerator Promo thread.

  • Visual tiles seen: Veo animation, video generation, image generation, Search/Maps data, analyze images, aspect ratio control Feature grid.

Feature tiles grid

Early UX verdict: a clean “Describe your idea” flow impresses creators

Creators praise the simplicity of AI Studio’s Vibe Coding entry point—one box to "Describe your idea," Gemini model selection inline, and a clear Build handoff—calling it "really good" and "nice and simple" for getting from prompt to production Build screenshots. The official promo frames it as an AI-first coding experience designed exactly for that faster loop Promo thread, with UI captures corroborating the streamlined approach Interface screenshot.

Gemini build screen

Google signals plan to unify AI developer tools under AI Studio

Alongside the Vibe Coding launch, Google’s Logan Kilpatrick said the team is "trying to unify all AI Developer stuff under AI Studio," hinting that fragmented utilities could consolidate into the same Build surface—an appealing prospect for creatives stitching voice, video, and agent workflows together Unify dev tools. Community speculation even points to potential Jules-style agent integrations down the line, though that remains aspirational Unify dev tools.

Unification thread


🎥 Camera paths, identity lock, and performance control

New tools for directing motion and performance: draw camera paths (Ray3), lock character identity in ads (Lovart+Veo 3.1), and avatar micro‑expressions (OmniHuman). Creator notes on holding emotion in 12s shots. Excludes Atlas.

Drawn camera paths arrive in Dream Machine via Luma Ray3

Luma highlighted Ray3’s visual annotation for Dream Machine, letting creators sketch motion paths, sweeping turns, and dynamic reframes that the camera then follows Feature demo. This gives directors a tactile way to block shots without verbose prompt juggling, speeding up previs and iteration for complex moves.

Lovart + Veo 3.1 lock a spokesperson’s identity from a single reference

Lovart is pitching ad‑grade character continuity on Veo 3.1, claiming you can keep the same actor and outfit across styles and scenes from a single reference image Capability brief. For brand work, that means fewer pickup shots and a smoother narrative arc when style or setting changes mid‑spot.

OmniHuman 1.5 adds precise micro‑expressions and gesture control

BytePlus is pushing emotionally aligned performance control: OmniHuman 1.5 syncs tone, micro‑expressions, and gestures to audio so digital hosts feel like they’re acting, not lip‑flapping Product post. It’s aimed at ecommerce presenters, ads, and shortform where subtle eye and mouth cues sell realism.

Hedra brings Start/End Frames to Veo 3.1 to lock openings and closings

Hedra Labs users can now set Start and End Frames on Veo 3.1, giving directors explicit control over a video’s first and final beats while the model interpolates between them Feature tease. It’s a simple way to anchor story structure, title cards, or logo landings with less post‑tweaking.

Sora 2 Pro lands on Leonardo; creators lean on 12‑second stillness for emotion

Leonardo now offers Sora 2 Pro with single shots up to 12 seconds, giving scenes room to breathe and feel human Availability note. Following up on length heuristic that longer clips read as more “real,” early reactions praise how holding stillness amplifies emotional believability Creator reply.

Hailuo 02 keeps subjects framed during pans; end‑frame control looks strong

A scaling test shows Hailuo 02 holding all subjects in frame during a lateral pan—traditionally a failure mode for T2V models that lose composition mid‑move Panning test. Another creator calls its Nano Banana end‑frame workflow “awful good,” pointing to reliable target‑frame alignment End‑frame result.

Costumed dog end-frame


🎙️ Pro audio: noise‑free VO and accessible voices

ElevenLabs adds video‑format Voice Isolator and expands its social impact access flow. A meme establishes the “leaf blower test” as a community quality bar. Excludes Atlas.

ElevenLabs Voice Isolator adds video support with same‑format export

ElevenLabs extended Voice Isolator to handle video I/O: upload any audio or video and get the cleaned output back in the same format, targeting film, podcast, and social workflows Feature announcement. Following up on Creator workflow where ElevenLabs featured in VO/score, this tightens the post chain for creators by removing a round‑trip to separate NLE noise tools Platform note.
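
For post teams scripting this rather than using the web UI, here’s a minimal sketch against what we understand to be ElevenLabs’ audio-isolation REST endpoint (`POST /v1/audio-isolation` with an `xi-api-key` header). Whether the API accepts video uploads the way the web tool now does is an assumption to verify.

```typescript
import { readFile, writeFile } from "node:fs/promises";

// Minimal sketch: send a noisy recording to ElevenLabs' audio-isolation
// endpoint and write back the cleaned result. Treat video input as an
// assumption: the documented API takes audio, and the new same-format
// video round-trip may be web-app-only for now.
async function isolateVoice(inPath: string, outPath: string, apiKey: string) {
  const form = new FormData();
  form.append("audio", new Blob([await readFile(inPath)]), inPath);

  const res = await fetch("https://api.elevenlabs.io/v1/audio-isolation", {
    method: "POST",
    headers: { "xi-api-key": apiKey },
    body: form,
  });
  if (!res.ok) throw new Error(`Isolation failed: ${res.status}`);
  await writeFile(outPath, Buffer.from(await res.arrayBuffer()));
}

// e.g. isolateVoice("leafblower_take.wav", "clean_take.wav", process.env.ELEVEN_KEY!);
```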

ElevenLabs Impact Program opens direct applications for free voices

ElevenLabs made its assistive‑voice access simpler: individuals with permanent speech loss—or their clinicians—can now apply directly on the site without codes or payment details, lowering friction for accessibility use cases Impact update.

Impact application button

The “leaf blower test” becomes the go‑to benchmark for noise removal

A community meme anoints the “leaf blower test” as the new Will‑Smith‑spaghetti for audio isolation, informally standardizing a tough baseline (broadband, non‑stationary noise) for tools like Voice Isolator to clear Benchmark meme. The timing coincides with ElevenLabs’ video‑format support, signaling what users will expect in real‑world post scenarios Feature announcement.


🪄 Prompt playbook: particles, plush toys, and freaks

A heavy day for creative recipes: Grok VFX dissolves, plush toy stills, Halloween ‘freak’ portraits, and MJ v7 params, plus mood‑first Grok animations. Excludes Atlas.

Grok Imagine recipe: cinematic ash‑to‑particles dissolve effect

A creator shared a precise VFX prompt for Grok Imagine that turns a subject into drifting ash with realistic particle breakup, shallow depth of field, and tense, high‑contrast lighting suited for dramatic beats Prompt details. The guidance also suggests a slow push‑in or locked camera to sell the transformation and keep attention on the physics‑like dispersion.

“Plushified Worlds” prompt makes velvety toy‑style 3D stills

A reusable prompt template for “Plushified Worlds” yields squeezable, velvet‑textured 3D characters (front‑facing, pastel palette, soft ambient light), with examples like a pufferfish, penguin chef, baby dragon, and cactus Prompt details. Creators are already adapting the template across models for consistent, toy‑like renders Community examples.

Plush toy examples

Grok Imagine excels at romantic, expressionist, and dance‑driven moods

Creators highlight Grok Imagine’s control over tone: romantic and sensual sequences that “feel alive” Romantic tone, stark homages to German Expressionism with shadow‑first compositions Expressionism homage, and poised ballet moments where fabric and gesture read as humanly subtle Ballerina example. The consensus is that Grok’s motion and lighting cues carry unusual emotional weight for prompt‑driven video Vampiric allure.

Halloween “freak” portraits: one prompt, endless creatures

A single bracketed‑subject recipe (e.g., [witch], [vampire]) spawns grotesque, high‑style portraits; recommended params: --chaos 17, --ar 2:3, exposure 33 (likely --exp), raw mode on (--style raw), --stylize 1000 Prompt settings. Community QTs show wide variation—from spiked masks to surreal dental horror—demonstrating strong style transfer with room for identity variation within the same seed space Community collage.

Freak portrait examples

New MJ v7 recipe (chaos 7, sref + sw 500) yields cohesive anime set

A fresh Midjourney v7 parameter combo—--chaos 7, --ar 3:4, --sref 2908399358, --sw 500, --stylize 500—produces a tight anime collage with recognizable motifs and color continuity across frames Parameter demo, following up on prior recipe that leaned chaos 8 for a prismatic look.

MJ v7 collage

Veo 3.1 prompt blueprint for a dawn car spot

A detailed Veo 3.1 prompt outlines a full 30–45s automotive spec: low‑angle tracking on a foggy coastal highway at dawn, intercuts to drone overheads and UI‑lit interiors, then a cliffside sunrise hero frame with title card Veo prompt. Screenshots show creators porting similar multi‑shot prompts into graphical tools with reference slots and editable prompt blocks to iterate framing and pacing Prompt UI.

Veo prompt UI


🛠️ Where to run top video models

Access updates for filmmakers: Sora 2 Pro lands on Leonardo (12s shots), and Runware routes top video/image models via Together Compute. Excludes Atlas.

Leonardo adds Sora 2 Pro with 12‑second single shots

Leonardo now supports Sora 2 Pro, enabling single shots up to 12 seconds—long enough for pauses, breathing room, and more human-feeling performances Feature announcement. Creators note the longer stillness raises emotional realism, and the addition rounds out Leonardo’s mix alongside Veo 3.1, Kling and Motion 2.0 Creator feedback, Suite context.

Vidu Q2 goes live on Runware with ~3¢ per 2s at 360p

Runware has added Vidu Q2 with short‑clip pricing starting around 3 cents per 2 seconds at 360p on both turbo and pro tiers Pricing and launch. The model supports Reference‑to‑Video for identity consistency and a Video Extend workflow that builds one seamless take up to 5 minutes total Feature overview.

Video extend UI

Runware brings top video and image models to Together Compute

Runware says its top‑rated video and image models have "landed" on Together Compute, promising strong pricing via a custom inference stack Partnership note. Creators can browse and launch directly through Runware’s catalog and route workloads to whichever backend is cheapest and fastest models catalog.

Hailuo speeds up Veo 3.1 generations for creators

Hailuo has enabled “fast generations” for Veo 3.1, tightening turnaround for text‑to‑video runs Speed note, following up on Hailuo Veo 3.1, which brought 8s 720/1080p with audio. A recent shot‑list demo shows the broader production flow Hailuo supports for T2V and variations Workflow preview.

Hailuo shot list UI


📅 Awards, screenings, and showcases

Calls and meetups relevant to creatives: OpenArt MVA timeline and prizes, Kling’s Tokyo ceremony lottery, and a ComfyUI community showcase. Excludes Atlas.

OpenArt MVA: Nov 16 deadline, Times Square billboards and artist shoutouts

Nov 16 is the submission deadline for the OpenArt Music Video Awards, with Times Square billboards and artist shoutouts highlighted for winners, following up on OpenArt MVA $50k prize pool and sponsor. The program page lists entry rules and the eligible songs, and organizers encourage personal, emotion‑driven stories Music video awards, while the call thread reiterates prizes and visibility perks Awards call.

Kling NEXTGEN Tokyo awards ceremony opens 15-seat attendee lottery

Kling will host the NEXTGEN Creative Contest awards ceremony in Tokyo on Oct 29 and is inviting 15 creators via a free ticket lottery; applications are open now Ceremony invite. Fill out the bilingual Google form to enter; lottery winners must be able to attend in Tokyo on that date Application form.

Event banner

Bionic Awards AI Creator Showcase set for Dec 4 in London

The Bionic Awards AI Creator Showcase will screen at Rich Mix cinema on Dec 4, with creators planning meetups for chats and drinks afterward Event invite. Tickets are available directly from the venue’s site Cinema tickets.

Event poster

ComfyUI Community Showcase with Matty Shimura goes live

ComfyUI is spotlighting community work in a live showcase featuring Matty Shimura, with a watch link shared for creators to tune in Event invite.

Vidu Q2 OnStage: submit Q2 clips for credits and cash prizes

Vidu launched the #ViduQ2OnStage challenge, inviting creators to post Q2‑powered videos for a chance at credits and cash awards Challenge details. The activity page outlines submission rules, eligibility, and prize mechanics; share your clip under the hashtag to enter Challenge page.

Vidu contest poster

Hailuo LA Immersive Gala screens community works, including “Tuned Out”

Hailuo AI and MiniMax hosted the LA Immersive Gala over the weekend, with screenings such as Sway Molina’s “Tuned Out,” signaling model‑native films getting real‑world showcases Screening recap.


🧰 ComfyUI upgrades and scene‑to‑scene tools

Open tooling advances: ComfyUI ships a subgraph parameter panel and redesigned template library; a “Mac ad” cameo boosts visibility; new LoRA and a Qwen3‑VL demo Space appear. Excludes Atlas.

ComfyUI 0.3.66 ships Subgraph Parameter Panel and a faster Template Library

ComfyUI released v0.3.66, adding a Subgraph Parameter Panel to edit widgets without diving into subgraphs, plus a redesigned Template Library with richer tags and filters for quicker workflow discovery Version update, with full details and screenshots in the write‑up ComfyUI blog. For creatives building scene‑to‑scene graphs, this trims clicks and makes sharing reusable setups simpler.

Editto’s instruction‑based video editor ships with a ComfyUI workflow and a 1M‑edit corpus

A deep‑dive today confirms Editto’s open release includes a ComfyUI workflow alongside Ditto‑1M (1M video‑edit examples, ~12k GPU‑days) for text‑guided, temporally consistent video edits—useful for quick previz (“add fog,” “make it sunset”) and marketing variants Technical blog, Release details, Open release list.

Workflow diagram

This follows up on 1M dataset, adding concrete integration paths creatives can run locally or slot into existing graphs.

‘Next Scene V2’ LoRA turns Qwen‑Image‑Edit into a cinematic next‑shot machine

A new LoRA for Qwen‑Image‑Edit (build 2509) landed on Hugging Face, purpose‑built for generating “Next Scene” transitions with coherent reframes, reveals, and camera moves from stills LoRA release, including guidance to prefix prompts with “Next Scene:” and set LoRA strength ~0.7–0.8 Hugging Face page. This gives storyboard‑to‑shot workflows a lightweight, controllable step for scene‑to‑scene continuity.
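
If you drive ComfyUI headlessly, the recipe drops into an exported graph with two small patches. A sketch, assuming you’ve saved your Qwen‑Image‑Edit workflow via "Save (API Format)"; node ids and class names vary per graph, so this patches by class type purely for illustration.

```typescript
import { readFile } from "node:fs/promises";

// Sketch: queue a "Next Scene" shot through ComfyUI's local HTTP API,
// applying the recommended LoRA strength and prompt prefix. Assumes a
// workflow exported in API format; in a real graph you would target the
// positive-prompt node by id instead of patching every CLIPTextEncode.
async function queueNextScene(scenePrompt: string): Promise<unknown> {
  const workflow = JSON.parse(await readFile("workflow_api.json", "utf8"));

  for (const node of Object.values<any>(workflow)) {
    if (node.class_type?.includes("Lora")) {
      node.inputs.strength_model = 0.75;               // recommended 0.7–0.8
    }
    if (node.class_type === "CLIPTextEncode" && typeof node.inputs.text === "string") {
      node.inputs.text = `Next Scene: ${scenePrompt}`; // required prefix
    }
  }

  const res = await fetch("http://127.0.0.1:8188/prompt", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: workflow }),
  });
  if (!res.ok) throw new Error(`ComfyUI queue failed: ${res.status}`);
  return res.json();
}
```

Called with something like “the camera pulls back to reveal the full skyline at dusk,” this queues one continuity shot per run.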

Qwen3‑VL‑2B‑Instruct Space goes live for quick VLM prototyping

A Hugging Face Space for Qwen3‑VL‑2B‑Instruct is live, giving creators a lightweight playground for image‑and‑text reasoning that can slot into boards and node graphs as a vision helper Space launch, with the interactive app one click away Hugging Face Space.

HF Space screenshot

Expect use in shot QA, prompt scaffolding, and auto‑tagging/reference extraction for scene planning.
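
To slot the Space into a board or batch script rather than the browser, here’s a sketch using the official @gradio/client package. The Space id and "/predict" endpoint are placeholders (every Space names its own), so inspect the view_api() output first.

```typescript
import { Client } from "@gradio/client";

// Sketch: query a Qwen3-VL-2B-Instruct Space for shot QA / auto-tagging.
// "user/qwen3-vl-2b-instruct" and "/predict" are placeholders; call
// view_api() to discover the Space's real endpoint names and parameters.
async function describeFrame(imageUrl: string): Promise<unknown> {
  const client = await Client.connect("user/qwen3-vl-2b-instruct");
  console.log(await client.view_api()); // real endpoints and signatures

  const image = await (await fetch(imageUrl)).blob();
  const result = await client.predict("/predict", {
    image,
    prompt: "List the subjects, camera angle, and lighting in this frame.",
  });
  return result.data;
}
```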

ComfyUI pops up in Apple’s Mac ad, signaling mainstream visibility for node workflows

ComfyUI was briefly featured in a new Mac ad shared by Tim Cook, a small but notable nod that pushes the once‑niche node editor further into mainstream creator consciousness Ad cameo.

Mac ad collage

For filmmakers and designers, this kind of exposure can accelerate hiring, plugin ecosystems, and client acceptance of AI‑assisted pipelines.


🗂️ Freepik Spaces and storyboarding workflows

Freepik teases Spaces (waitlist live) while creators show Freepik‑powered storyboards for a Louvre heist recreation (Seedream 4 4K + Nano Banana). Excludes Atlas.

Freepik announces Spaces; waitlist opens for collaborative AI creative hub

Freepik teased Spaces as “a new way to think, collaborate, and create,” and opened a public waitlist for early access. For teams storyboarding, designing, and iterating with AI, this signals a shared workspace coming to the Freepik ecosystem. See the teaser and signup flow in the official post Spaces teaser and the join page Waitlist page, with a second prompt to enroll also circulating today Waitlist link.

Freepik AI Suite shows Veo 3.1 storyboard workflow with end‑frame control

Creators showcased Freepik’s Video Generator driving Google Veo 3.1 with structured prompts, image references, and an “End image” slot to lock closing shots—useful for planning sequences like a Louvre‑plaza heist beat. Screens illustrate prompt blocks, character refs, and the end‑frame panel, with all animations attributed to Veo 3.1 in this workflow Workflow screenshots.

Freepik Veo 3.1 UI

Following up on Prompt pack where Freepik shared extensive Sora prompts, today’s examples emphasize Freepik’s utility as a storyboarding surface for multi‑shot planning and consistent visual continuity.


💬 Ecosystem chatter: timing spats and AI summaries

Cultural beats: memes accusing OpenAI of reactionary timing vs DeepMind’s Vibe Coding, Google’s AI video summaries popping up, and voice‑first browsing demos. Excludes Atlas details (feature).

DeepMind’s Vibe Coding goes live; community jabs OpenAI for same‑day stream timing

Google DeepMind’s Vibe Coding experience is now visible in AI Studio with a clean “Build your ideas with Gemini” workflow and tiles for Veo, Maps/Search data, and fast responses, following up on Vibe tease that launch was imminent. Several creators framed OpenAI’s same‑day livestream as reactionary, sharpening a lighthearted timing spat across the ecosystem. See the promo and UI in Google’s thread Promo post and screenshots Feature screenshot, and the criticism in Timing accusation; access Vibe Coding via the build entry point AI Studio build page.

AI Studio screenshot

For AI creatives, the takeaway is less about who launched first and more that prompt‑to‑production scaffolding is consolidating in AI Studio—useful for stitching Veo, image generation, and data tools into one build surface.

Creators spot AI‑generated YouTube video summaries surfacing more broadly

Users reported seeing Google’s AI‑generated video summary card under YouTube titles on more channels, asking if a wider rollout just landed. The screenshot shows a concise auto‑summary of an Anthropic agents talk placed between title and description, suggesting increasing visibility for viewers skimming content Summary card screenshot.

YouTube summary card

If this expands, expect higher top‑of‑funnel discovery for AI tutorials and talks—and adjust titles/descriptions to align with what summaries surface.

Voice‑first browsing demo pitches engine‑agnostic workflows across Atlas and Comet

Typeless emphasized a “talk to your browser” experience that works regardless of which assistant leads—calling out compatibility with Atlas and Perplexity Comet so teams can build voice‑driven tab control and search without picking a winner today Voice‑first claim.

For storytellers and filmmakers, this points to hands‑free research and shot‑list iteration while moodboarding, with voice commands orchestrating cross‑engine browsing sessions.


🐉 Hailuo 02: scaling tests and festival vibes

Hailuo posts emphasize camera control tests, free agent effects, and event showcases, plus partner notes on fast Veo 3.1 generations. Excludes Atlas.

Hailuo 02 nails a pan that keeps every subject in frame

A fresh scaling test shows Hailuo 02 maintaining all subjects within the frame while the camera pans, signaling stronger spatial consistency and subject tracking for ensemble or action blocking Scaling test clip. For creatives, this reduces retakes when choreographing multi‑character motion and reframes.

Hailuo enables fast Veo 3.1 generations inside shot‑listed tools

Hailuo‑aligned apps now tout fast Veo 3.1 runs, tightening the iterate‑review loop for multi‑shot direction; GumVue flags speed gains and Cine Director shows a shot‑list UI with variant takes baked in Fast Veo 3.1 note, Cine Director UI. This follows Veo 3.1 support adding 8s 720/1080p with audio, and today’s emphasis is on turnaround time and production controls; creators are calling the combo “awful good” for consistency and motion Creator verdict.

Shot list UI

Hailuo agent effects: instant era swaps and cinematic themes

Creators are leaning on Hailuo’s free agent to remap a single selfie across different historical eras and to spin up themed vignettes like a Prometheus sequence, demonstrating fast, low‑friction look transfers without deep prompt wrangling Era swap demo, Prometheus showcase. Related end‑frame/Nano Banana‑style results are also surfacing in the wild, hinting at rapid stylistic iteration End frame example.

Dog costume effect

Hailuo LA Immersive Gala spotlights creator films with MiniMax

Hailuo’s Los Angeles Immersive Gala brought community work to a physical screen with MiniMax involvement, underscoring the model’s growing cultural footprint among filmmakers and installation artists Gala recap. For storytellers, this signals rising offline showcase demand for AI‑native shorts.


🧪 WebGL shader how‑to for designers

A practical code tutorial for motion designers: Builder.io’s Steve Sewell breaks down a WebGL fragment‑shader “paint‑reveal” effect with full code and prompts. Excludes Atlas.

WebGL paint‑reveal shader tutorial with full code

Builder.io’s Steve Sewell published a hands‑on WebGL fragment‑shader walkthrough that recreates a paint‑reveal effect, complete with code, assets, and a gentle intro to shader basics tutorial thread. The post explains per‑pixel rendering, uses a circular mask to blend two images based on mouse position, and wires uniforms from JavaScript for an interactive reveal blog post, with a follow‑up pointer for readers blog recap.

  • Implementation highlights: two textures (outline and painted), UV math, distance‑from‑mouse mask, and uniform updates for mouse and canvas size—yielding smooth GPU‑driven interactivity suitable for logo wipes, image reveals, and storyboard beats without heavy libraries; a compact sketch of the same structure follows below.
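
To make those highlights concrete, here is a compact TypeScript + GLSL sketch of the same idea, not Sewell’s actual code: 1×1 placeholder textures stand in for the outline and painted images, and the mouse is normalized to UV space on the JS side instead of passing canvas size as a uniform.

```typescript
// Paint-reveal sketch: blend two textures with a soft circular mask centered
// on the mouse. Placeholder 1x1 textures keep the sketch asset-free; swap in
// real images (outline + painted) for the tutorial's effect.
const canvas = document.querySelector("canvas")!;
const gl = canvas.getContext("webgl")!;

const VERT = `
attribute vec2 aPos;
varying vec2 vUV;
void main() {
  vUV = aPos * 0.5 + 0.5;                 // clip space [-1,1] -> UV [0,1]
  gl_Position = vec4(aPos, 0.0, 1.0);
}`;

const FRAG = `
precision mediump float;
varying vec2 vUV;
uniform sampler2D uOutline;               // grayscale "unpainted" layer
uniform sampler2D uPainted;               // full-color "painted" layer
uniform vec2 uMouse;                      // mouse position in UV space
uniform float uRadius;                    // reveal radius in UV units
void main() {
  float d = distance(vUV, uMouse);
  float mask = 1.0 - smoothstep(uRadius * 0.6, uRadius, d); // soft edge
  gl_FragColor = mix(texture2D(uOutline, vUV), texture2D(uPainted, vUV), mask);
}`;

function compile(type: number, src: string): WebGLShader {
  const s = gl.createShader(type)!;
  gl.shaderSource(s, src);
  gl.compileShader(s);
  if (!gl.getShaderParameter(s, gl.COMPILE_STATUS))
    throw new Error(gl.getShaderInfoLog(s) ?? "shader compile error");
  return s;
}

const prog = gl.createProgram()!;
gl.attachShader(prog, compile(gl.VERTEX_SHADER, VERT));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, FRAG));
gl.linkProgram(prog);
gl.useProgram(prog);

// Fullscreen quad drawn as a triangle strip.
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]), gl.STATIC_DRAW);
const aPos = gl.getAttribLocation(prog, "aPos");
gl.enableVertexAttribArray(aPos);
gl.vertexAttribPointer(aPos, 2, gl.FLOAT, false, 0, 0);

// 1x1 solid-color stand-ins for the two image textures.
function solidTexture(unit: number, rgba: number[]): void {
  gl.activeTexture(gl.TEXTURE0 + unit);
  gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA,
                gl.UNSIGNED_BYTE, new Uint8Array(rgba));
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
}
solidTexture(0, [40, 40, 40, 255]);       // "outline": dark gray
solidTexture(1, [220, 80, 160, 255]);     // "painted": pink
gl.uniform1i(gl.getUniformLocation(prog, "uOutline"), 0);
gl.uniform1i(gl.getUniformLocation(prog, "uPainted"), 1);
gl.uniform1f(gl.getUniformLocation(prog, "uRadius"), 0.2);

const uMouse = gl.getUniformLocation(prog, "uMouse");
canvas.addEventListener("mousemove", (e) => {
  const r = canvas.getBoundingClientRect();
  // Flip Y: DOM origin is top-left, GL texture/UV origin is bottom-left.
  gl.uniform2f(uMouse, (e.clientX - r.left) / r.width, 1 - (e.clientY - r.top) / r.height);
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
});
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
```

Swapping the two solidTexture calls for real image uploads (texImage2D with an HTMLImageElement) gets you the tutorial’s actual reveal.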
