Wed, Nov 5, 2025

Higgsfield makes image models unlimited for a year; Reve adds spatial coherence

Executive Summary

Higgsfield switched on a one-year unlimited plan for all of its image models and launched Reve, a layout-aware generator. If you build comps for a living, unmetered pulls plus shadows, reflections, and depth that actually behave is a real unlock; the window closes November 10.

Reve sits alongside Soul, Seedream 4.0, Flux Kontext Max, Nano Banana, Face Swap, Character Swap, and GPT Image, and is tuned to keep lighting and perspective consistent so storyboards don't turn into a Photoshop cleanup job. Through the end of November the video lineup goes unlimited too, including Popcorn, Kling 2.5, Hailuo, Seedance, and Nano Banana, plus a $100K Global Teams Challenge to push teams from tests to finished work.

The model is simple: unlimited access plus pipeline discipline wins. Ask the team behind the Coca-Cola Christmas spot, which reportedly churned through tens of thousands of clips using an open-source, node-based workflow. If you've been meaning to standardize on a graph-based pipeline, this is a good week to make the move.

Feature Spotlight

Higgsfield goes UNLIMITED + Reve’s “spatially smart” layouts

Higgsfield unlocks a year of unlimited image generation, debuts Reve for coherent layouts, and kicks off a $100K Teams challenge—high‑impact access and incentives for daily creative production.


🆓 Higgsfield goes UNLIMITED + Reve’s “spatially smart” layouts

Big creator promo today: Higgsfield makes all image models unlimited for a year and drops Reve (layout‑aware image gen). Also monthly video model unlimiteds and a $100K Teams challenge. This is the day’s headline for working artists.

Higgsfield makes all image models unlimited for a year; Reve debuts

Higgsfield switched on a one‑year unlimited plan for all its image models with a new drop, Reve; the offer ends Nov 10 offer thread. The bundle spans Soul, Nano Banana, Seedream 4.0, Face Swap, Character Swap, Flux Kontext Max, and GPT Image, with entry points on the homepage Higgsfield site. Following up on YouTube promo 200‑credit starter, there’s a 9‑hour retweet+reply bonus worth 201 credits to get in fast offer thread.

November unlock: Higgsfield video models go unlimited and $100K Teams challenge

Through November, Higgsfield is lifting caps on Popcorn, Nano Banana, Kling 2.5, Hailuo and Seedance for unlimited video runs, and it’s dangling a $100,000 Global Teams Challenge to ship work together video unlocks. The rules confirm entries run Nov 4–Nov 30 with a $15,000 Grand Team award, five $5,000 category prizes, and twenty $3,000 picks; Teams sign‑ups before Nov 10 are highlighted, plus a 9‑hour follow/RT/reply bonus worth 209 credits today challenge rules rules link.

Reve lands: layout‑aware image generation with shadows, reflections, depth

Reve is now live on Higgsfield and promises spatially coherent image layouts—shadows align, reflections follow, depth stays—so comps look right without hand‑fixes Reve claim. It’s included in the one‑year unlimited image plan and linked for immediate use alongside Soul and Seedream 4.0 launch note Higgsfield site.


🎬 Coca‑Cola’s AI Christmas ad: scale, pipeline, tools

Multiple accounts dissect the new Coca‑Cola spot—artists involved, process threads, and claims of 70k clips refined in 30 days, with strong hints an open‑source workflow (ComfyUI) underpinned production. Excludes Higgsfield promo (covered as feature).

Coca‑Cola’s AI ad refined 70k clips in 30 days; open‑source pipeline hinted

Creators close to the project say the Christmas spot went through about 70,000 video clips made/refined in 30 days, pointing to an open‑source workflow at the core scale claim. ComfyUI amplified the tease with “Hint: it was an open source workflow tool,” fueling consensus that a node‑based, reproducible pipeline underpinned the iteration loop open source hint.

For teams, the takeaway is scale plus controllability. A graph‑driven pipeline lets you batch, version, and retry shots instead of hand‑fixing one‑offs. That’s how you get to tens of thousands of candidates without chaos. Expect more big‑brand work to standardize on auditable, open graphs so hundreds of small changes can be run overnight and reviewed by morning.
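
As a concrete illustration of that batch/version/retry loop, here is a minimal Python sketch. The generate_clip() call is a stand-in for whatever backend you actually run (a ComfyUI graph, an API, a render farm), and the file layout and field names are assumptions for illustration, not any team's production schema.

```python
# Minimal sketch of batch/version/retry bookkeeping for a shot pipeline.
# generate_clip() is a stand-in for the real backend (a ComfyUI graph, an
# API call, a render farm); file layout and fields are illustrative only.
import hashlib
import json
import random
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class ShotJob:
    shot_id: str
    prompt: str
    seed: int
    version: int = 1
    attempts: int = 0

def generate_clip(job: ShotJob) -> bytes:
    """Placeholder generator: swap in the actual model call."""
    if random.random() < 0.2:                 # simulate an occasional failed render
        raise RuntimeError("render failed")
    return f"{job.shot_id}-v{job.version}-seed{job.seed}".encode()

def run_batch(jobs, out_dir="renders", max_retries=2):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    manifest = []
    for job in jobs:
        while job.attempts <= max_retries:
            job.attempts += 1
            try:
                clip = generate_clip(job)
            except RuntimeError:
                continue                      # retry same params; attempt count is kept
            digest = hashlib.sha256(clip).hexdigest()[:12]
            (out / f"{job.shot_id}_v{job.version}_{digest}.bin").write_bytes(clip)
            manifest.append({**asdict(job), "hash": digest})
            break
    (out / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest

if __name__ == "__main__":
    jobs = [ShotJob(f"shot_{i:03d}", "night snow street, 35mm, slow dolly", seed=i)
            for i in range(5)]
    print(run_batch(jobs))
```

The manifest is the point: every candidate carries its parameters, version, and content hash, so overnight runs can be diffed and reviewed by morning instead of hunted down by filename.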

Artist on the Coke spot shares tools thread; credits Secret Level and Jason Zada

Christopher Fryant confirmed he was one of the artists on the Coca‑Cola commercial, crediting Secret Level and director Jason Zada, and pointed people to a follow‑up on “which tools were used” and what went into the job artist note tools thread.

This matters because it’s a first‑party breadcrumb trail for anyone trying to reconstruct the stack. When an artist invites “tools” questions, it usually means the pipeline isn’t a single button press but a mesh of models, trackers, compositing, and review gates. If you’re building your own, look for places to log prompt params, asset hashes, and per‑shot notes so your post team can actually trace cause → effect.
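
If you want a starting point for that logging, here is a hedged Python sketch of a per-shot record with prompt params, input-asset hashes, and a reviewer note, appended as JSON Lines. The field names and paths are illustrative, not a schema the production is known to have used.

```python
# Sketch of a per-shot log record: prompt params, input-asset hashes, and a
# review note written as JSON Lines so post can trace cause -> effect.
# Field names and file paths are placeholders.
import hashlib
import json
import time
from pathlib import Path

def file_hash(path: str) -> str:
    p = Path(path)
    return hashlib.sha256(p.read_bytes()).hexdigest()[:16] if p.exists() else "missing"

def log_shot(log_path: str, shot_id: str, prompt: str, params: dict,
             assets: list[str], note: str = "") -> dict:
    record = {
        "ts": time.time(),
        "shot_id": shot_id,
        "prompt": prompt,
        "params": params,                         # seed, cfg, model build, LoRA list...
        "assets": {a: file_hash(a) for a in assets},
        "note": note,                             # reviewer comment for this take
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example call (all values are placeholders):
log_shot("shots.jsonl", "sc03_sh12", "delivery truck rounds the bend, 35mm, dusk",
         {"seed": 1234, "cfg": 6.5, "model": "video-model-v1"},
         ["plates/sc03_bg.png", "lights/dusk_lut.cube"],
         note="keep this framing; fix wheel blur next pass")
```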

Breakdown argues the Coke spot still needed story, design and real animation

A creator breakdown pushed back on the “AI made it easy” narrative, saying the Coca‑Cola ad required the full stack—story, design, illustration, animation—with AI as the amplifier, not a shortcut breakdown thread.

So what? If you’re staffing an AI‑assisted production, budget for pre‑viz, style frames, and motion passes the same way you would without AI. The models buy you exploration speed and variant throughput. They don’t replace the beats, the blocking, or the taste that holds a sequence together.


🎥 Shot control upgrades: LTX‑2 prompts, in‑flow camera moves

Filmmakers get better steering: LTX‑2 publishes a concise prompt guide and fresh 20s tests, Runway demos multi‑model Workflows, and creators show Google’s quiet in‑flow camera motion edits on generated video.

Creators demo Google’s in‑flow camera moves on generated videos

Multiple demos show Google quietly enabling camera position and motion changes on already generated videos within a flow—think post‑hoc push‑ins, reframes, and new paths without re‑rolling the scene Camera motion demo. Early examples claim “no cherry‑picking,” and a follow‑up thread adds more tests, hinting at practical control for reshoots and coverage passes inside the same generation More examples.

LTX‑2 drops a concise prompt guide for cinematic shots

LTX‑2 published a quick guide on writing prompts that translate into camera language—framing, lens, and movement—for more predictable cinematic results Prompting guide. The post is a compact reference for filmmakers trying to steer shot composition without overlong prompts, and the team amplified it for visibility Guide PSA. See the full write‑up in the linked post prompting guide.
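
To make the "camera language" idea concrete, here is a small, hedged Python helper that assembles framing, lens, and movement into one compact prompt. The vocabulary and clause order are assumptions for illustration, not LTX-2's official schema, so adapt them to the linked guide.

```python
# Illustrative prompt builder: framing, lens, and movement as short, explicit
# clauses. The terms below are assumptions, not LTX-2's published vocabulary.
def camera_prompt(subject: str, framing: str, lens: str, movement: str,
                  look: str = "") -> str:
    parts = [
        subject,
        f"{framing} shot",          # e.g. "wide", "medium close-up"
        f"{lens} lens",             # e.g. "35mm", "85mm shallow depth of field"
        f"camera {movement}",       # e.g. "slow dolly-in", "handheld pan left"
    ]
    if look:
        parts.append(look)          # e.g. "volumetric dusk light, film grain"
    return ", ".join(parts)

print(camera_prompt(
    "a lighthouse keeper climbing a spiral staircase",
    framing="medium",
    lens="35mm",
    movement="slow dolly-in",
    look="warm lantern light, film grain",
))
```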

LTX‑2 Fast shows 20‑second continuous shot tests in the wild

Following up on 20s one‑takes from the official rollout, creators are now posting 20‑second continuous LTX‑2 Fast shots, indicating stable long‑take behavior beyond curated reels 20‑second test. The model account boosted the clip, suggesting it reflects current public settings rather than a private build Model boost.

Runway Workflows chains models and steps for finer shot control

Runway highlighted Workflows that let you combine multiple models, modalities, and steps in one place—useful for locking story beats, then routing to stylization, upscaling, or audio without leaving the pipeline Workflows episode. For small teams, this reduces hand‑offs and keeps shot intent consistent across edits and exports.


🧪 Realtime video and subject consistency (papers & demos)

Today’s research feed leans toward live gen and coherence: Adobe MotionStream runs near‑real‑time on H100, ByteDance’s BindWeave tackles subject consistency, plus PercHead 3D heads and StyleSculptor zero‑shot 3D styling.

ByteDance’s BindWeave proposes subject‑consistent multi‑entity video generation

A ByteDance research team introduced BindWeave, combining an MLLM with a diffusion transformer to reason about spatial/temporal relations and keep characters consistent across complex scenes, beating OpenS2V on naturalness, consistency, and text relevance paper summary. For filmmakers and animators, this points to fewer continuity breaks when scenes have multiple interacting subjects.

PercHead turns a single image into a 3D‑consistent, editable head

Niessner’s team unveiled PercHead: single‑image 3D head reconstruction with disentangled geometry vs style edits, built on DINOv2 and SAM 2.1 and trained with a new perceptual loss for higher fidelity model explainer. This helps character pipelines keep identity across angles while letting artists tweak shape and look independently.

Google quietly tests post‑generation camera moves on video in flow

Community demos show changing camera position and motion on an already generated video, mid‑flow—precisely the control many expected from Runway’s Aleph demo thread first attempts. If this ships broadly, editors gain a redo knob for camera language without regenerating entire shots.

StyleSculptor enables zero‑shot style‑controlled 3D assets with dual guidance

StyleSculptor proposes texture‑geometry dual guidance and a style‑disentangled attention module to inject style without leaking content, producing stylized 3D assets in a training‑free (zero‑shot) setup paper link Paper page. For game and XR art teams, it promises faster look development without retraining per style.

Qwen Image Edit adds full camera control via multi‑angle LoRA

Creators can now rotate the camera, tilt from bird’s‑eye to worm’s‑eye, and change lens width inside Qwen Image Edit using a specialized LoRA, improving angle‑accurate re‑frames and scene coherence in stills feature test Hugging Face space. This is useful for previsualization boards and consistent product angles before moving to video.


🤝 Assistant ecosystem power plays (Apple–Google, Snap, Amazon)

Business and governance that affect creative tools: Apple nears a ~$1B/yr deal to use Google’s 1.2T‑param Gemini for Siri on PCC; Perplexity to power Snapchat’s default AI; Amazon sends Perplexity a C&D over Comet agents.

Apple taps Google’s 1.2T‑param Gemini for new Siri, ~$1B/yr on PCC

Apple is nearing a deal to run a new Siri on Google’s 1.2T‑parameter Gemini, paying roughly $1B per year, with inference on Apple’s Private Cloud Compute and a planned rollout tied to iOS 26.4 next spring Bloomberg summary. The upgrade targets summarization and planning, while Apple readies its own ~1T‑parameter cloud model and keeps China on in‑house models with an Alibaba filter Bloomberg summary.

Bloomberg article screenshot

The partnership reportedly won’t be marketed publicly, and it remains separate from broader Gemini chatbot or Safari search negotiations—this looks like a pragmatic, behind‑the‑scenes swap to boost assistant quality without sending user data to Google Bloomberg report.

Perplexity to become Snapchat’s default in‑app AI from January 2026

Perplexity says it will power Snapchat’s default AI experience across the app starting January 2026, signaling a major distribution win and a direct line into creative, chat, and search flows inside Snap Announcement image. For creators, that means a built‑in assistant where audiences already are, not a separate destination.

Perplexity–Snap graphic

The move also raises competitive pressure on OpenAI and Google inside consumer messaging and UGC platforms—default placement tends to drive usage, prompt formats, and downstream tool adoption Announcement image.


🛰️ Model watch: Gemini 3 tiers, Kimi K2 tease

Model pipeline signals for planners: Vertex AI screens show Gemini 3 Pro Preview with 200k and 1M context tiers; a code diff hints at a ‘Kimi K2 Thinking’ reasoning parser arriving soon.

Gemini 3 Pro Preview shows 200k and 1M context tiers in Vertex AI

Vertex AI now lists Gemini 3 Pro Preview with two context tiers—tier‑200k (200,000 tokens) and tier‑1m (1,000,000 tokens)—plus per‑modality ratio settings, following Nov 18 window deprecation clues pointing to a near‑term Gemini 3 rollout Vertex AI screen. For creators, a 1M window means entire scripts, boards, references, and notes can live in one prompt for coherent revisions.

Gemini 3 tiers screenshot
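
For planning purposes, a rough back-of-envelope check of whether a project's text fits the 200k or 1M tier can use the common ~4 characters-per-token heuristic. This is an approximation, not Gemini's actual tokenizer, and the asset names below are placeholders.

```python
# Rough fit check against the 200k and 1M context tiers using the ~4
# characters-per-token rule of thumb (approximate, not the real tokenizer).
TIERS = {"tier-200k": 200_000, "tier-1m": 1_000_000}

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits(texts: dict[str, str]) -> None:
    total = sum(approx_tokens(t) for t in texts.values())
    for name, limit in TIERS.items():
        status = "fits" if total <= limit else "over"
        print(f"{name}: ~{total:,} tokens -> {status}")

if __name__ == "__main__":
    project = {
        "script.txt": "INT. KITCHEN - NIGHT ..." * 2_000,
        "boards_notes.md": "Shot 12: push-in on the kettle ..." * 1_000,
    }
    fits(project)
```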

Code diff hints 'Kimi K2 Thinking' reasoning parser replacing DeepSeek R1

A repository change swaps DeepSeekR1ReasoningParser for KimiK2ReasoningParser, signaling a pending “Kimi K2 Thinking” reasoning model integration. That implies a new candidate to test for long‑chain writing, edit plans, and multi‑step briefs in creative pipelines Parser code diff.

Kimi K2 parser diff


📚 Research copilots wired into your workspace

Practical research UX updates: Gemini Deep Research now pulls from Gmail, Drive, Search, and Chat with solid email threading; ChatGPT adds the ability to keep feeding context mid‑query.

Gemini Deep Research hooks into Gmail, Drive, Search and Chat

Google’s Gemini Deep Research now pulls from four workspace sources—Gmail, Drive, Search and Chat—to synthesize threads, extract the latest emails, and surface summarized findings in one place feature demo. This matters if you live in docs and inboxes: it auto-threads conversations, flags newest messages, and keeps you in a single research view instead of hopping across tabs.

Gemini research UI

For creative teams, this turns messy project mail into a structured brief you can act on faster. It also looks fast enough to be useful mid‑meeting, not just for end‑of‑day recaps.

ChatGPT now accepts added context mid‑query

OpenAI shipped a quality‑of‑life update for ChatGPT that lets you keep adding files or details to an ongoing query without starting over feature note. Small change, big friction cut: you can refine a research thread as stakeholders chime in, rather than spawning new chats and losing continuity.

For designers and producers, this makes iterative briefs, reference swaps, and scope clarifications feel closer to a live working doc—one thread, evolving context.


🍿 Creator premieres and platform collabs

Lighter but notable drops: Leonardo × pzf_ai announce a mini‑film series that evolves with each VEO release; Vidu‑powered motivational short shared. Excludes Coke ad (covered separately).

Leonardo × pzf_ai launch a Veo 3.1–made mini‑film series that evolves with each release

Leonardo and filmmaker @pzf_ai announced a collaborative mini‑film series that will add new chapters every time Veo ships an update, with the first installment created on Leonardo using Veo 3.1 Leonardo announcement and framed by the creator’s own rollout note Creator note. For working filmmakers, this is a useful cadence: align story beats to model leaps, keep style continuous, and grow an audience between drops.

Overhead laundry shot

The shared still underscores shot design and continuity—exactly what teams weigh when committing to AI‑assisted series work Frame still.

Vidu shares a new community‑made motivational short by KanaWorks

ViduAI highlighted a short film produced by @KanaWorks_AI (tagged for Vidu Q2), signaling ongoing platform‑creator collaboration and a steady cadence of community premieres to study for pacing, tone, and model strengths in shortform Vidu showcase.


🛠️ New edit controls: Qwen camera angles, Comfy TTS node

Editors get finer control: Qwen Image Edit adds camera rotation/tilt/lens changes via a multi‑angle LoRA; ComfyUI gains a Maya1 TTS node (3B) for expressive voices inside workflows.

Qwen Image Edit adds camera rotation, tilt and lens control via multi‑angle LoRA

Qwen Image Edit now exposes full camera control for stills: rotate the view, tilt between bird’s‑eye and worm’s‑eye, and vary lens width, all driven by a specialized multi‑angle LoRA. Early hands‑on shows coherent subject and scene geometry as you sweep angles, which is the hard part for layout consistency Feature brief. A working demo is live for quick trials Hugging Face space.

Why it matters: shot design gets faster. You can keep a composition and iterate on perspective without re‑prompting or rebuilding assets. This helps boards, product shots, and stylized portraits where layout must stay put while the camera moves.
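
A hedged sketch of how this might be scripted with diffusers is below. The multi-angle LoRA repo id and the prompt phrasing are placeholders, and it assumes the pipeline exposes the standard load_lora_weights() hook; treat the linked Hugging Face space as the authoritative setup.

```python
# Hedged sketch: load Qwen Image Edit in diffusers, attach a multi-angle
# camera LoRA, and request a new angle on an existing still.
# The LoRA repo id and prompt wording are placeholders, not confirmed values.
import torch
from diffusers import DiffusionPipeline
from PIL import Image

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("your-org/qwen-image-edit-multi-angle-lora")  # placeholder id

source = Image.open("product_hero.png")  # existing still to re-frame
result = pipe(
    image=source,
    prompt="rotate the camera 45 degrees to the left, low worm's-eye angle, wide lens",
    num_inference_steps=30,
).images[0]
result.save("product_hero_worms_eye.png")
```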

ComfyUI gets Maya1 TTS node: 3B expressive voice synthesis inside workflows

ComfyUI picked up an open‑source node for Maya1 TTS, a 3B‑parameter model aimed at emotion‑rich, controllable voices, so you can render narration and character lines directly in your Comfy graphs Release note. The node’s code and install steps are on GitHub under Apache‑2.0, making it easy to slot into existing pipelines GitHub repo.

Why it matters: animatics, trailers, and dialogue passes no longer need an external TTS round‑trip. You can version voice takes alongside shots, batch renders, and export clean audio in the same run.
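
As a sketch of what versioning voice takes in the same run could look like, here is a hedged Python loop that queues an exported ComfyUI workflow through the server's HTTP /prompt endpoint. The workflow file, node id, and input name are placeholders you would read off your own API-format export of a graph containing the Maya1 node.

```python
# Sketch: batch narration takes through a ComfyUI workflow via POST /prompt.
# Export your real graph (with the Maya1 TTS node) using "Save (API format)"
# and load it here; node ids, class names, and input names come from that
# export, so the "3" / "text" keys below are placeholders.
import copy
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"

with open("maya1_tts_workflow_api.json") as f:   # exported API-format graph
    base_graph = json.load(f)

lines = [
    "The harbor was quiet before the storm.",
    "By morning, every boat had a story.",
]

for i, line in enumerate(lines):
    graph = copy.deepcopy(base_graph)
    graph["3"]["inputs"]["text"] = line          # "3" = your TTS node's id in the export
    payload = json.dumps({"prompt": graph}).encode()
    req = urllib.request.Request(COMFY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        print(f"take {i}:", resp.read().decode())
```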


📣 Automating brand videos and fashion showcases

Tools aimed at marketers and creators: Pictory adds 150+ Instant Templates and a ‘Summarize Video’ feature; Apob AI blends Virtual Try‑On with AI influencer generation for real‑time fashion promos.

Pictory adds 150+ Instant Templates; highlights tool auto-cuts long videos

Pictory introduced Instant Templates with 150+ ready‑to‑use designs directly in the dashboard, aimed at speeding up on‑brand edits for marketers and social teams Launch post with setup available via the app Pictory app. The same thread also points to a Summarize Video feature that turns long webinars into short, scroll‑ready highlights in seconds Launch post.

Templates chooser UI

For brand and content teams producing daily socials, this cuts the layout grind and turns existing long‑form assets into snackable clips without a timeline pass.


🎨 Today’s style packs and srefs (MJ, Grok)

A steady stream of look recipes for artists: Grok Imagine excels at villainous ‘evil smiles’; MJ V7 token sets and srefs for 3D characters, kinetic portraits, and abstract sculpture looks.

MJ V7 kinetic light‑trail look: sref 1299354853 with exp 15, sw 500

A clean, kinetic light‑trail recipe is circulating for MJ V7 portraits and action frames: --ar 3:4 --exp 15 --sref 1299354853 --sw 500 --stylize 500 Prompt recipe. It translates across subjects—from a glowing horse profile to a wet‑skin boxer punch‑up Boxer example.

Light‑trail horse

Boxer action look

• Start with low chaos; nudge --stylize 400–700 to balance light streak detail vs subject fidelity.

MJ V7 srefs for cute 3D characters: ready-to-use set

Azed shared a plug‑and‑play sref pack for stylized “cutest 3D characters,” covering outfits, jackets, headphones, and props—ideal for toy lines, mascots, or channel art Style set.

3D character set

• Swap colors and jacket patches per render; keep the sref fixed to preserve geometry and material feel.

Abstract layered “paper sculpture” head: reusable token look in MJ

Azed posted a reusable token look for an abstract, layered “paper sculpture” bust—great for poster heads, gallery cards, and cover art Token look. Lock the token, then iterate hair flow and lighting direction to control the ripple contours.

Paper sculpture head

Grok Imagine nails villain “evil smiles” for character design

Creators are calling out Grok Imagine’s knack for sinister, readable “evil smiles” that sell a villain in close‑ups and key art Grok Imagine note. This follows sword duel prompt that showed Grok’s cinematic range; now you’ve got a reliable facial‑expression look to slot into character sheets and posters.


🗓️ Community highlights: contests, office hours, shows

A few community anchors today: Comfy Challenge #8 Halloween winners posted; Midjourney hosts dev‑led Office Hours; GLIF drops an episode on making viral SORA content. Excludes Higgsfield’s contest (feature).

Comfy Challenge #8 (Halloween) winners announced with Top 3 highlights

ComfyUI posted the Halloween edition results: the winner and Top 3 will receive Halloween candy and gifts, with spotlight threads linking to each pick Winners announced. The team also shared community highlights and individual Top 3 submissions so creators can study pacing, shot choices, and workflows Community highlights Top 3 highlight Submission link.

GLIF drops The AI Slop Review episode on going viral with SORA 1 & 2

GLIF released a new episode focused on making viral content using SORA 1 and 2, featuring creator Bennett Waisbren Episode link, following up on Episode time where the stream slot and guest were set. Expect concrete breakdowns of hook framing, clip length, and iteration loops that teams can test today.

Midjourney hosts dev-led Office Hours; questions link and room live

Midjourney is running a special Office Hours with developers leading the call while David is off, and they’re taking questions via a submission form Dev-led note Question form. The session room is open, so if you’ve got V7 issues, srefs, or workflow requests, join live and bring examples Office hours room.

On this page

Executive Summary
🆓 Higgsfield goes UNLIMITED + Reve’s “spatially smart” layouts
Higgsfield makes all image models unlimited for a year; Reve debuts
November unlock: Higgsfield video models go unlimited and $100K Teams challenge
Reve lands: layout‑aware image generation with shadows, reflections, depth
🎬 Coca‑Cola’s AI Christmas ad: scale, pipeline, tools
Coca‑Cola’s AI ad refined 70k clips in 30 days; open‑source pipeline hinted
Artist on the Coke spot shares tools thread; credits Secret Level and Jason Zada
Breakdown argues the Coke spot still needed story, design and real animation
🎥 Shot control upgrades: LTX‑2 prompts, in‑flow camera moves
Creators demo Google’s in‑flow camera moves on generated videos
LTX‑2 drops a concise prompt guide for cinematic shots
LTX‑2 Fast shows 20‑second continuous shot tests in the wild
Runway Workflows chains models and steps for finer shot control
🧪 Realtime video and subject consistency (papers & demos)
ByteDance’s BindWeave proposes subject‑consistent multi‑entity video generation
PercHead turns a single image into a 3D‑consistent, editable head
Google quietly tests post‑generation camera moves on video in flow
StyleSculptor enables zero‑shot style‑controlled 3D assets with dual guidance
Qwen Image Edit adds full camera control via multi‑angle LoRA
🤝 Assistant ecosystem power plays (Apple–Google, Snap, Amazon)
Apple taps Google’s 1.2T‑param Gemini for new Siri, ~$1B/yr on PCC
Perplexity to become Snapchat’s default in‑app AI from January 2026
🛰️ Model watch: Gemini 3 tiers, Kimi K2 tease
Gemini 3 Pro Preview shows 200k and 1M context tiers in Vertex AI
Code diff hints 'Kimi K2 Thinking' reasoning parser replacing DeepSeek R1
📚 Research copilots wired into your workspace
Gemini Deep Research hooks into Gmail, Drive, Search and Chat
ChatGPT now accepts added context mid‑query
🍿 Creator premieres and platform collabs
Leonardo × pzf_ai launch a Veo 3.1–made mini‑film series that evolves with each release
Vidu shares a new community‑made motivational short by KanaWorks
🛠️ New edit controls: Qwen camera angles, Comfy TTS node
Qwen Image Edit adds camera rotation, tilt and lens control via multi‑angle LoRA
ComfyUI gets Maya1 TTS node: 3B expressive voice synthesis inside workflows
📣 Automating brand videos and fashion showcases
Pictory adds 150+ Instant Templates; highlights tool auto-cuts long videos
🎨 Today’s style packs and srefs (MJ, Grok)
MJ V7 kinetic light‑trail look: sref 1299354853 with exp 15, sw 500
MJ V7 srefs for cute 3D characters: ready-to-use set
Abstract layered “paper sculpture” head: reusable token look in MJ
Grok Imagine nails villain “evil smiles” for character design
🗓️ Community highlights: contests, office hours, shows
Comfy Challenge #8 (Halloween) winners announced with Top 3 highlights
GLIF drops The AI Slop Review episode on going viral with SORA 1 & 2
Midjourney hosts dev-led Office Hours; questions link and room live