
EpochAI report shows 10× LLM inference price drop – GLM 4.7 ascends
Executive Summary
EpochAI’s year-end review quantifies the AI cost curve: LLM inference prices fell >10× between Apr 2023 and Mar 2025 at comparable performance; some tasks—like GPT‑4‑level GPQA—see ~900× annual price declines. Installed NVIDIA AI compute has doubled every ~10 months since 2020. Epoch estimates OpenAI spent ~$4.5B on experimental runs, ~$400M on GPT‑4.5 training, and ~$2B on inference, and pegs a GPT‑4o chat at ~0.34 Wh. DeepSeek v3 is flagged as reaching frontier‑class scores with ~10× less training compute than Llama 3, underscoring efficiency gains even as spending keeps rising.
• Open weights: Zhipu’s GLM 4.7 jumps 15 spots to #2 on Website Arena and top open‑weight; MiniMax‑M2.1 arrives on Hugging Face for community runs.
• Training recipes: GTR‑Turbo merges a VLM’s own checkpoints into a “free teacher” for RL; ThinkARM collapses math traces into four modes—Analysis, Explore, Verify, Reflect.
• Jobs and funding: Guardian-linked analysis warns AI could erase ~50% of entry‑level white‑collar roles and push US unemployment toward 10–20%; Databricks CEO Ali Ghodsi calls billion‑dollar, zero‑revenue AI startups a bubble.
Feature Spotlight
Kling 2.6 Motion Control: orientation modes go pro
Creators detail how Match VIDEO vs Match IMAGE changes motion vs framing fidelity in Kling 2.6, backed by new head‑to‑head tests and polished action workflows—turning Motion Control into a director’s tool, not a demo toy.
🎭 Kling 2.6 Motion Control: orientation modes go pro
Today shifts from flashy reels to craft: creators unpack Kling 2.6’s two Orientation Match modes and where each wins, with fresh fight/anime stress‑tests and a partner nod. Excludes Seedance AV, covered separately.
Kling’s Character Orientation Match VIDEO vs IMAGE gets a detailed field guide
Kling 2.6 Motion Control (Kling): Creator Ozan Sihay breaks down the two Character Orientation Match modes—VIDEO vs IMAGE—and shows where each one holds up or breaks, giving filmmakers and animators a more technical way to pick settings rather than guessing orientation explainer. Character Orientation Match VIDEO prioritizes the movement itself (dance, running, complex body mechanics), maintaining motion flow and physical plausibility from the reference clip, while Character Orientation Match IMAGE locks onto composition, camera angle, lens relationship, and perspective when that single frame matters more than the exact motion path orientation explainer. He also flags a limitation shared by both modes: Kling cannot preserve the full frame when parts of the body or scene are missing from the reference, so it hallucinates unseen areas and struggles to keep the original crop perfectly intact orientation explainer.

In short, Kling is asking whether to imitate motion or preserve framing, and this walkthrough makes that choice explicit for people building choreographed or camera‑driven shots.
Creator head-to-head tests say Kling 2.6 Motion Control beats rival video models
Kling 2.6 Motion Control (Kling): Creator Rixhabh runs Kling 2.6 Motion Control head to head against other video models using the same prompt and the same reference material, and the official Kling account amplifies his claim that Kling "outperformed" the alternatives on this setup model comparison note. The test focuses on Motion Control specifically—matching body movement and expressions from a reference onto a generated character—rather than generic text‑to‑video, highlighting that Kling’s strengths show up most when strict motion fidelity and identity transfer are required model comparison note.
The evidence is still anecdotal and lacks shared benchmark clips or metrics, but it adds to a pattern of practitioners positioning Kling 2.6 as the default for performance‑driven character animation workflows rather than wide‑open, single‑prompt clips.
Anime kraken fleet attack stresses Kling 2.6 with large-scale sea destruction
Kraken ocean attack (Artedeingenio + Kling): Artedeingenio uses Kling 2.6 to realize a long‑imagined anime set‑piece—a giant kraken attacking an entire fleet—pushing Motion Control into chaotic, large‑scale destruction rather than single‑character shots kraken prompt. The prompt calls for tentacles crushing wooden hulls, masts snapping, sails tearing, huge water explosions, and exaggerated camera shake that tracks the violence of each impact, and the resulting clip shows Kling coordinating all of that into a coherent, cinematic sequence rather than a static tableau kraken prompt.

For creators, this test suggests Kling 2.6 can handle multi‑object interactions, heavy FX motion, and aggressive virtual camera movement when the scene leans into anime exaggeration rather than strict realism.
Batman vs Venom workflow shows Kling 2.6 in a comic-style fight pipeline
Batman vs Venom pipeline (Artedeingenio + Kling): Artist Artedeingenio details a full comic‑style fight workflow where Batman and Venom are rendered separately in a Midjourney comic style, composited into one image using Nano Banana Pro, then animated into dynamic combat using Kling 2.6 before a final edit in CapCut batman workflow. Kling’s Motion Control handles the high‑action phase of the pipeline, taking a static Nano Banana composition and driving it with a "very dynamic combat" prompt to produce kick‑and‑lunge sequences that feel like superhero choreography rather than simple camera pans batman workflow.

The workflow shows how comics‑style illustration, image compositing, and Kling’s Motion Control can be chained together so each tool focuses on what it does best: look, layout, and then motion.
Freepik calls Kling 2.6 Motion Control a “partner in crime” for creators
Kling Motion Control distribution (Kling + Freepik): Freepik publicly refers to Kling as its "partners in crime" while boosting Kling’s announcement that the 2.6 Motion Control feature is live, supports complex moves with full‑body and expression mapping, and works from one character image plus one motion reference video for up to 30‑second generations on all plans partners comment. Kling’s own thread copy, quoted in the exchange, emphasizes tracking face expressions, body motion, and hand moves from the reference into the generated clip, positioning Motion Control as a precise animation layer rather than a generic effect partners comment.
The exchange signals that Kling’s Motion Control is not only an API or standalone app feature but is also woven into creative platforms like Freepik, making it easier for designers and video teams already in those ecosystems to tap performance‑driven character animation without leaving their usual tooling.
🎬 Seedance 1.5 Pro: native AV scenes and platform pickup
Continues the AV momentum with Freepik availability and creator clips showing multilingual lip‑sync and multi‑shot storytelling. Excludes Kling 2.6 Motion Control (see feature).
Dreamina showcases nine Seedance 1.5 Pro templates from war epics to horse ASMR
Seedance 1.5 Pro use cases (Dreamina): Dreamina_ai compiled nine short demos to show Seedance 1.5 Pro handling everything from tense couple arguments and romantic forest proposals to fantasy war trailers, surreal vegan comedy, horse ASMR, vintage interviews, dragon POV flights, and gothic vampire stories with native audio and lip-sync, building on the earlier emotional Harry Potter short in native audio test.

• Genre and tone range: The thread spans realistic two-person dialogue in a dimly lit kitchen Couple argument scene, a quiet forest proposal at night Forest proposal demo, an absurd vegan drama with food-inspired character designs Vegan drama clip, and a cinematic fantasy war trailer with soldiers, dragons, and war machines Fantasy war trailer.
• Audio as first-class output: Several clips foreground environmental and foley sound, such as brushing and snorts in the horse-grooming ASMR scene Horse ASMR sample and retro-mic vocal color in the vintage style "how did you imagine the future" interview Vintage interview video.
• POV and horror experiments: The set also includes a first-person dragon flight where wind and wing beats dominate the soundscape Dragon POV test and an AI-animated gothic horror short about a vampire duke and a woman in a dark manor, complete with moody score and dialogue Gothic horror clip.
Seedance 1.5 Pro lands on Freepik with native audio video generation
Seedance 1.5 Pro (Freepik/BytePlus): Freepik announced that the Seedance 1.5 Pro video model is now available on its platform, describing it as the first Seedance release that generates film-grade visuals and synchronized audio—sound effects, multilingual dialogue, and native lip-sync—in a single pass in the Freepik launch; this extends distribution beyond BytePlus’s own APIs, following up on ModelArk API where Seedance 1.5 Pro was exposed for text, image, and multi-shot video generation.

Freepik also pushed a "Start creating now" call-to-action that links directly into a Seedance creation flow, which positions the model as a turnkey storytelling engine for its existing creator base rather than a specialist tool that only engineers can reach by wiring APIs Freepik CTA.
Seedance 1.5 Pro on OpenArt powers anime-style sunset train vignette
Anime narrative clip (OpenArt/Seedance): Creator azed_ai showed Seedance 1.5 Pro running on OpenArt (spelled "seedande" in the post) for a short anime-style scene where a teenage girl waits on a quiet train platform at sunset, a train rushes past, and a close-up captures her lips moving in sync with a Japanese voiceover line about choosing different paths Anime station prompt.

The prompt specifies shot types—wide establishing shot, cut to the passing train, then a close-up for the voiced line—which highlights Seedance 1.5 Pro being used as a director-style multi-shot storyboard tool rather than a single static clip generator, extending the structured narrative prompting explored in director toolkit.
🖼️ Reusable looks: noir, editorial sketch, and glow‑line kids’ art
Heavy day for still styles and prompts—multiple Midjourney srefs, a Leonardo 3×3 emotions brief, and whimsical glow‑line children’s drawings for quick campaign aesthetics. Excludes Seedance AV and Kling tests.
Glow‑line children’s drawing prompt standardizes whimsical minimalist outlines
Glow‑line kids prompt (azed_ai): Azed_ai publishes a reusable prompt template for “minimalist children’s drawing” images built from thick white glowing lines over softly blurred environments, plus example images showing multiple subjects and settings in the prompt share.
The pattern keeps full‑body figures extremely simple—a dancing girl and cat with floating notes, a girl skipping rope in falling petals, a child with umbrella amid rain and fish, and a balloon‑holding kid framed by stars—while backgrounds stay photographic but out of focus prompt share. Floating elements like petals, stars, or musical notes add motion, so this prompt can be reused for quick social posts, kids‑themed brand moments, or lightweight motion graphics where glowing doodles sit over live‑action plates.
Leonardo 3×3 portrait grid brief isolates emotion with zero identity drift
3×3 emotion grid spec (Leonardo): Azed_ai shares a detailed prompt for generating a 3×3 cinematic portrait grid in Leonardo where the same woman appears in all nine frames with zero identity drift, each cell showing a different emotional state using only facial expression, as specified in the emotion grid spec.
• Identity and control: The brief demands identical facial structure, age, hairstyle, clothing, makeup, camera, lighting, and background across the grid—no pose or styling shifts allowed—so only micro‑expressions separate calm, happiness, determination, concern, anger, sadness, and strength emotion grid spec.
• Cinematic framing: It locks in an 85mm portrait lens, shallow depth of field, neutral blurred background, and consistent studio lighting with subtle film grain, framing this as an editorial or film casting tool rather than casual portraits emotion grid spec.
For character designers and directors, this kind of spec becomes a reusable template for testing emotional range on a single design before committing to storyboards or animation.
Fog‑teal Midjourney sref 5431156294 defines eerie horned‑mask world
Fog‑teal narrative style (Midjourney): Azed_ai debuts Midjourney style reference --sref 5431156294, a cohesive visual world built around dense teal fog, pale yellow garments, and quietly surreal elements like horned skull masks, deer companions, and lone houses glowing in the distance, as shown in the fog style ref.
Across examples—masked figure in tall grass, a girl beside a stag and car headlights, a child on a misty beach with birds, and a woman facing a lit house amid yellow flowers—the palette stays constrained to cyan haze, dark ground, and soft yellow accents, which helps sequences feel like a single film universe fog style ref. The retweeted recap reinforces that this sref is meant as a reusable mood kit for eerie rural stories, slow horror, or poetic brand films rather than one‑off hero images style recap.
Illustrative neo‑noir sref nails hard light and nervous lines
Neo‑noir cinematic style (Midjourney): A new Midjourney style reference --sref 2487794224 focuses on loose, expressive neo‑noir illustration with visible “nervous” lines and very strong chiaroscuro—hard side or back light cutting through darkness for crime, thriller, or vampiric themes as described in the noir style ref.
The sample images show women in red dresses, side‑lit faces, and an action pose firing a handgun, all rendered with sketchy strokes and heavy shadows that carve out faces and props while backgrounds almost dissolve noir style ref. This sref targets narrative portraits and tense interiors where light direction, not detail overload, carries mood, which suits concept frames, poster explorations, or animatic boards that need a gritty noir read at a glance.
Narrative sketch Midjourney sref brings editorial graphic‑novel feel
Narrative sketch style (Midjourney): Artedeingenio shares Midjourney style reference --sref 3200894182 for narrative sketch illustrations that sit between editorial art, graphic novels, and illustrated books, emphasizing that every frame should feel like part of a larger story in the style reference.
The examples show loose but confident line work, flat color blocking, and character‑centric compositions—ranging from a stylish woman in a fur‑collared jacket to a bearded sailor and a racing jockey—making this sref useful for character‑driven vignettes and light storyboards rather than hyper‑rendered key art style reference. For illustrators and directors, the look leans on expressive faces, simple backgrounds, and implied environments, so scenes read quickly even when used as thumbnails or sequence beats.
Nostalgic 1950s–70s kids’ book sref lands soft pencil melancholy
Nostalgic children’s style (Midjourney): Artedeingenio introduces Midjourney style reference --sref 1859327544 that mimics French, Scandinavian, and British children’s books from the 1950s–70s—soft colored pencil, everyday domestic scenes, and a slightly melancholic, intimate tone, as outlined in the kids style ref.
The examples cover a backpacked boy outside a distant house, kids huddled over a jar in a kitchen, a child surrounded by dogs, and a balloon‑chasing toddler under heavy blue pencil skies, all with visible stroke texture and limited palettes kids style ref. The style leans more toward quiet narrative moments than hyper‑cute mascots, which positions it well for picture‑book concepts, reflective campaign spots, or brand storytelling that wants a retro, slightly bittersweet feel.
🛠️ Board‑to‑screen pipelines: Firefly Boards, Story tabs, and code‑assist
Creators show end‑to‑end ideation inside Adobe Firefly Boards, organize edits in Pictory Story, and even prototype with Google Search AI Mode writing runnable code. Excludes Kling and Seedance coverage.
Firefly Boards links concept, animation and upscaling with free images to Jan 15
Firefly Boards (Adobe): Adobe Firefly Boards is being used as an end‑to‑end pipeline where creators move from rough concept to scene variations, then to animation and sound design without leaving the tool, according to the Firefly ambassador walkthrough in the Firefly Boards demo; image generation inside Boards remains free and unlimited until 15 January, while VEO 3.1 animation and Topaz upscaling run directly in the same workspace as detailed in the workflow breakdown.

• Concept to visual direction: The creator starts from a written concept, explores alternate directions with Firefly 4, and swaps objects and backgrounds inside Boards instead of bouncing between separate apps, as shown in the Firefly Boards demo.
• Integrated animation and upscaling: After selecting key frames, they animate them with VEO 3.1, then upscale the resulting clips using Topaz, all from within Firefly, which avoids export/import steps and keeps a single visual timeline, according to the workflow breakdown.
• Short‑term free tier: The thread notes that image generation is still free and unlimited in Firefly Boards until 15 January 2026, which materially lowers experimentation costs for storyboarding and look‑dev runs during that window in the workflow breakdown.
The setup positions Firefly Boards as more of a lightweight production environment than a static moodboard, particularly for small teams assembling short branded or narrative pieces.
Google Search AI Mode writes runnable generative‑art code from natural queries
Search AI Mode (Google): A creator demo shows Google’s Search AI Mode taking a plain‑language request like “explain how to build a generative art layout with p5.js” and responding with both a conceptual explanation and full runnable code, which is then copy‑pasted into a browser and executed to produce an abstract animated sketch, as seen in the Search AI demo.

The flow positions Search AI Mode as a lightweight code‑assist and tutorial environment for visual coders: it explains the idea using diagrams and text, outputs a structured p5.js sketch with setup and draw functions, and effectively bridges from search query to on‑screen motion graphics without needing a separate IDE, a pattern reinforced by the author’s retweet commentary in the Search AI retweet.
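The post doesn’t reproduce the generated p5.js code itself, but the underlying idea, scattering simple shapes across a canvas from a few random parameters, is easy to sketch. A minimal, hypothetical equivalent in Python (writing a static SVG rather than an animated p5.js canvas; the palette, grid size, and filename are illustrative assumptions):

```python
import random

# Illustrative parameters (not from the demo): canvas size, grid cell, palette.
WIDTH, HEIGHT, CELL = 600, 600, 60
PALETTE = ["#0f4c5c", "#e36414", "#fb8b24", "#9a031e", "#5f0f40"]

def generate_layout(path="generative_layout.svg", seed=42):
    """Write a simple generative layout: one jittered circle per grid cell."""
    random.seed(seed)
    shapes = []
    for y in range(0, HEIGHT, CELL):
        for x in range(0, WIDTH, CELL):
            cx = x + CELL / 2 + random.uniform(-10, 10)   # jittered centre
            cy = y + CELL / 2 + random.uniform(-10, 10)
            r = random.uniform(5, CELL / 2)               # random radius per cell
            color = random.choice(PALETTE)
            shapes.append(
                f'<circle cx="{cx:.1f}" cy="{cy:.1f}" r="{r:.1f}" '
                f'fill="{color}" fill-opacity="0.8"/>'
            )
    svg = (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{WIDTH}" height="{HEIGHT}">'
        + "".join(shapes)
        + "</svg>"
    )
    with open(path, "w") as f:
        f.write(svg)

if __name__ == "__main__":
    generate_layout()
```

The point of the demo is that Search AI Mode hands back something at roughly this level of completeness, ready to paste and run, rather than a fragment that still needs an IDE session to finish.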
Pictory’s Story tab promotes scene‑by‑scene control for AI‑cut videos
Story tab (Pictory): Pictory is highlighting its Story tab as the place where creators can rearrange scenes, adjust pacing, and update on‑screen elements like logos and lower‑thirds in AI‑generated videos, framing it as the control room for turning rough cuts into structured narratives in the Story tab post.
The promo graphic shows a scene timeline with stacked thumbnails and an editing canvas where a presenter’s name and title card, as well as a logo, can be repositioned and resized, supporting workflows where brands need consistent identity across many auto‑cut clips while still being able to re‑order or drop scenes late in the process.
🎚️ After the shot: identity modify, mocap to MetaHuman, clean plates
Finishing stack gets practical with Ray3 Modify identity swaps in Dream Machine, Flow Studio’s MetaHuman capture/export, and instant background removal. Excludes motion control features.
Flow Studio details capture-to-MetaHuman pipeline for gameplay-ready animation
MetaHuman capture (Autodesk Flow Studio): Autodesk’s Flow Studio showcases a full workflow for capturing a real actor’s performance and exporting it as MetaHuman-ready motion, with MetaHuman Animation Support available on Standard, Pro, and Enterprise tiers according to the MetaHuman overview.

The short vertical demo cuts between an actor in a mocap suit and a digital character in-engine faithfully mirroring body and facial movement, while UI shots of Flow Studio’s timeline and export tools hint at a pipeline that takes capture data directly into Unreal’s MetaHuman rig MetaHuman overview. The linked walkthrough on YouTube is positioned as a step-by-step guide for this process, suggesting that teams working on games or real-time cinematics can keep a single stack for both live-action-driven performances and final MetaHuman animation, with no extra middleman tools required for retargeting YouTube workflow.
Ray3 Modify now applies custom character identities inside Dream Machine
Ray3 Modify (LumaLabsAI): Luma highlights that Ray3 Modify can now apply a custom character identity to a Dream Machine clip, extending earlier scene and season transforms into full-on identity replacement for looping videos, as teased in the Ray3 Modify post.

The demo shows a neon city loop reprojected onto a muscular cybernetic character with glowing eyes, indicating that Ray3’s loop feature is not just for camera or environmental changes but can also preserve motion while swapping who appears on screen Ray3 Modify post. For creatives building recurring characters—like mascots, VTubers, or narrative leads—this points to a workflow where one good base loop can be repurposed multiple times by changing identity rather than rerunning full generations, building on the earlier season and transition experiments covered in the season swap demo.
WaveSpeedAI ships instant video background remover with no green screen
Video Background Remover (WaveSpeedAI): WaveSpeedAI announces a Video Background Remover that promises instant background cuts with clean edges and transparent outputs, explicitly removing the need for a green screen setup as described in the background remover post.
The tool is framed as a one-click way to isolate a subject from any environment, advertising fast processing and edge fidelity suitable for overlays, compositing, or swapping virtual sets without traditional keying work background remover post. For solo creators and small studios, this positions WaveSpeedAI as a practical “clean plate” step after generation or filming, slotting between raw footage from AI video models or cameras and the final edit where backgrounds, branding, or motion graphics are added.
🧩 Identity that holds: 3×3 grids and one‑click upscales
Higgsfield Cinema Studio examples focus on consistent characters and locations from a single reference with quick upscaling of favorite frames. Excludes Kling Motion Control identity transfer.
Higgsfield Cinema Studio turns 3×3 grids into upscaled, consistent shots
Cinema Studio grids (Higgsfield): Higgsfield’s Cinema Studio is shown generating multiple 3×3 cinematic grids from a single reference image, holding both character identity and location steady across all nine frames in each grid, according to the Cinema Studio overview. This gives artists a fast way to explore coverage and pick hero frames without losing continuity.
• Single-ref consistency: The shared example includes four distinct scenarios—an ominous sky over a city, a neon alley, a lakeside caravan at night, and a rain‑soaked balcony—each produced from one reference while preserving the same subject and environment across all nine tiles.
• One-click upscaling: Favorite tiles can be upscaled to higher resolution with a single button press inside Cinema Studio, turning exploratory grids into production‑ready stills for storyboards or keyframes, as highlighted in the creator’s note in the Cinema Studio overview.
⚖️ Face likeness risk and collapsing trust without disclosure
Today’s ethics pulse centers on unintentional likeness reproduction risks in close‑ups and fresh survey data showing rising consumer concern over undisclosed AI content. Excludes platform reach complaints.
Surveys show rising belief GenAI harms creators and fear of undisclosed AI
Creator trust and disclosure (eMarketer, Censuswide): New survey figures shared by Eugenio Fierro indicate that the share of consumers who see generative AI as harmful to the creator economy nearly doubled from 18% in November 2023 to 32% in July 2025, based on research by Billion Dollar Boy and Censuswide, as summarized in the ai trust thread. The same analysis reports that 52% of people worry brands are publishing AI-generated content without disclosure, and that enthusiasm for AI-made creator content has fallen while scepticism about its role in the creative economy has grown.
• Trust hinges on transparency: Fierro argues that audiences are not opposing AI as a tool in itself but reacting when it is used as an undisclosed substitute for human creators in influencer work, advertising, and branded stories, which they perceive as inauthentic ai trust thread.
• AI exposes weak strategies: He frames AI not as destroying the creator economy but as exposing shallow creative strategies—when brands lean on undisclosed automation rather than clear human-led ideas, audience trust and long‑term brand value erode quickly ai trust thread.
AI close-ups resolving into real actors raise hidden likeness liability
Face likeness risk (Google Nano Banana Pro): A creator describes how a supposedly generic female character in a kissing scene—prompted without any facial description or image reference—resolved into the clearly recognizable likeness of a real, professionally represented actress when they asked Nano Banana Pro for close‑ups, as detailed in the likeness risk post. That means a workflow that followed standard "safe" practices for original characters can still output an unrequested real person, turning a fully AI-generated shot for film, ads, or branded content into a potential rights‑of‑publicity and defamation problem once the work is widely distributed.
The thread lays out a realistic path where a national AI-made commercial runs successfully until an agent contacts the brand claiming their client’s image appears without consent, shifting the conversation from creative experimentation to legal exposure for agencies, platforms, and clients who believed they were working with anonymous faces likeness risk post.
📈 AI economics and reasoning snapshots for builders
Fewer papers, more macro: inference costs keep falling, open models narrow the gap, and new VLM/LLM methods and checkpoints surface. Useful for scoping budgets and method picks. Excludes creative UI tips.
EpochAI shows 10× drop in LLM inference prices since 2023
Inference prices and compute (EpochAIResearch): EpochAIResearch’s year review reports that LLM inference prices have fallen over 10× between April 2023 and March 2025 at equivalent performance levels, while price declines vary sharply by task—from roughly 9× per year on MMLU at GPT‑3.5 level to around 900× per year on GPQA at GPT‑4 level, according to the recap shared by koltregaskes Epoch summary. The same analysis notes that installed NVIDIA AI compute has doubled about every 10 months since 2020, that OpenAI spent an estimated $4.5B of compute on experiments versus about $400M on GPT‑4.5 training and $2B on inference, and that top open‑source models running on consumer GPUs now trail current frontier models by less than a year on benchmarks like GPQA and MMLU.
The report also estimates an average GPT‑4o chat query uses around 0.34 Wh (similar to a small light bulb for a few minutes), highlights DeepSeek v3 reaching frontier‑level performance with about 10× less compute than Llama 3, and argues that reinforcement learning for reasoning is driving big math and coding gains but may soon hit infrastructure bottlenecks if compute scaling slows Epoch summary. Overall, the numbers describe a world where inference keeps getting cheaper and more capable, but where energy use, hardware supply, and budget for massive training runs become the main constraints rather than raw model quality.
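For budget scoping, the headline figures translate into simple back-of-envelope numbers. The script below uses only the values quoted in the recap plus one illustrative assumption (a retail electricity price), so treat the outputs as rough orders of magnitude rather than Epoch’s own calculations:

```python
import math

# Figures quoted from the Epoch recap.
PRICE_DECLINE_PER_YEAR = 900     # GPQA at GPT-4 level: ~900x cheaper per year
ENERGY_PER_QUERY_WH = 0.34       # estimated energy for an average GPT-4o chat query
COMPUTE_DOUBLING_MONTHS = 10     # installed NVIDIA AI compute doubling time

# Illustrative assumption (not from the report): retail electricity price.
ELECTRICITY_USD_PER_KWH = 0.15

# A 900x annual decline means prices halve roughly every ~37 days.
halving_days = 365 * math.log(2) / math.log(PRICE_DECLINE_PER_YEAR)

# 0.34 Wh/query -> queries per kWh and electricity cost per query.
queries_per_kwh = 1000 / ENERGY_PER_QUERY_WH
usd_per_query = ELECTRICITY_USD_PER_KWH / queries_per_kwh

# Doubling every 10 months compounds to ~64x over five years.
five_year_growth = 2 ** (60 / COMPUTE_DOUBLING_MONTHS)

print(f"Price halving time at 900x/yr: ~{halving_days:.0f} days")
print(f"Queries per kWh at 0.34 Wh each: ~{queries_per_kwh:,.0f}")
print(f"Electricity cost per query (assumed $0.15/kWh): ~${usd_per_query:.6f}")
print(f"Installed compute growth over 5 years: ~{five_year_growth:.0f}x")
```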
Guardian analysis warns AI may cut half of entry-level white-collar jobs
AI job displacement forecasts (Guardian): A Guardian opinion piece summarized by koltregaskes argues that AI could eliminate around half of entry‑level white‑collar jobs within 1–5 years, which could push US unemployment into the 10–20% range if new roles do not appear quickly enough Guardian recap. Labor journalist Steven Greenhouse frames this as a distributional crisis rather than a purely tech story, warning that AI‑driven inequality could create a new underclass unless policy changes catch up.
The recap cites Anthropic CEO Dario Amodei’s warnings of rapid losses in entry‑level roles, a Bernie Sanders report estimating 97 million US jobs at risk over the next decade, and MIT economist Daron Acemoglu’s distinction between “anti‑worker” AI (focused on automation and surveillance) and “pro‑worker” AI (augmenting skills and productivity) Guardian recap. It also notes that limited Biden‑era moves against AI surveillance were reversed under Trump via an executive order pre‑empting state regulations, and surveys proposed responses such as retraining, stronger unemployment insurance, universal healthcare, shorter work weeks, and worker input into AI deployment.
Databricks CEO calls billion‑dollar, zero‑revenue AI startups a bubble
AI funding bubble signals (Databricks): Databricks CEO Ali Ghodsi says parts of the AI startup market are in a clear bubble, calling companies valued at billions of dollars with zero revenue “insane” in comments highlighted by ai_for_success from a Fortune article Databricks bubble. He argues that the situation will likely worsen over the coming year before a correction, contrasting richly funded, revenue‑less model companies with Databricks’ own position as a $134B software firm with substantial data and analytics business.
Ghodsi’s remarks land against a backdrop of heavy VC and corporate spending on foundation models and agent platforms, and they underline a split between infrastructure providers with established cash flows and speculative model or app players that depend on continued capital access more than customer demand Databricks bubble. For teams building on AI, this points to a funding environment where core infra may remain stable while some high‑valuation vendors could face sharp revaluations if growth or monetization lags.
GLM 4.7 hits #2 overall and top open-weight on Website Arena
GLM 4.7 rankings (Zhipu / GMI Cloud): GMI Cloud announces that GLM 4.7 is now live on its platform and has climbed to #2 on Website Arena overall while taking the #1 slot among open‑weight models, a 15‑place jump over GLM 4.6 on the same benchmark GLM 4.7 stats. The tweet frames GLM 4.7 as the strongest open model in that leaderboard snapshot, positioned just behind one proprietary frontier model.
Although detailed metrics and prompts are not included in the post, the ranking suggests that GLM 4.7 has materially narrowed the gap between open and closed systems on this particular evaluation, and it reinforces the pattern from EpochAI’s analysis that top open‑source checkpoints now trail state‑of‑the‑art by months rather than years GLM 4.7 stats. For teams standardizing on open weights for cost or control reasons, this makes GLM 4.7 another candidate to test alongside Llama 3, DeepSeek v3, and similar high‑end releases.
GTR-Turbo merges checkpoints into a free teacher for VLM training
GTR‑Turbo RL method (HuggingPapers): HuggingPapers highlights GTR‑Turbo, a reinforcement learning approach where a vision‑language model’s own training checkpoints are merged to act as a “secretly free teacher,” instead of relying on an external, more powerful teacher model GTR-Turbo summary. The method is pitched as a way to boost reasoning and robustness in VLMs while avoiding the extra inference and licensing costs of a separate teacher during RL fine‑tuning.
The post frames GTR‑Turbo as a novel RL recipe rather than a single model release, suggesting that the technique could be reused across different VLM architectures if it proves stable and effective GTR-Turbo summary. Practically, this adds another option to the toolbox for teams experimenting with low‑cost RL for multimodal models, especially when access to large proprietary teachers is constrained by budget or policy.
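The tweet doesn’t detail the merging recipe, but the core move, averaging a model’s own intermediate checkpoints to obtain a stronger reference model, can be sketched in a few lines. A minimal, framework-agnostic illustration (uniform averaging over plain parameter dicts; the actual method may weight or select checkpoints differently):

```python
from typing import Dict, List

Checkpoint = Dict[str, List[float]]  # parameter name -> flat weights

def merge_checkpoints(checkpoints: List[Checkpoint]) -> Checkpoint:
    """Uniformly average matching parameters across training checkpoints.

    The merged weights act as the "teacher": its outputs can serve as the
    reference or distillation target during RL fine-tuning, instead of
    querying a separate, larger teacher model.
    """
    assert checkpoints, "need at least one checkpoint"
    merged: Checkpoint = {}
    for name in checkpoints[0]:
        columns = [ckpt[name] for ckpt in checkpoints]
        merged[name] = [sum(vals) / len(vals) for vals in zip(*columns)]
    return merged

# Toy usage: three snapshots of a two-parameter "model".
ckpts = [
    {"w": [0.10, 0.20], "b": [0.0]},
    {"w": [0.12, 0.18], "b": [0.1]},
    {"w": [0.14, 0.22], "b": [0.2]},
]
teacher = merge_checkpoints(ckpts)
print(teacher)  # roughly {'w': [0.12, 0.2], 'b': [0.1]}
```

Because the checkpoints already exist from normal training, the merged teacher adds no extra training compute, which is the “secretly free” part of the pitch.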
MiniMax-M2.1 lands on Hugging Face for community LLM runs
MiniMax‑M2.1 (MiniMax): _akhaliq notes that MiniMax‑M2.1 is now available on Hugging Face, opening the latest MiniMax checkpoint to community inference, fine‑tuning, and benchmarking workflows MiniMax release. The tweet links directly to the model page, signalling that weights are accessible rather than gated behind a proprietary API.
For builders, this places another strong Chinese‑origin model alongside Llama, Qwen, and GLM in the open‑weight ecosystem, giving small studios and independent researchers a new option to test on their own hardware or preferred cloud rather than relying solely on paid endpoints MiniMax release. Precise benchmark numbers and training details are not included in the tweet, so performance comparisons to peers will depend on third‑party evaluations of the Hugging Face release.
ThinkARM restructures LLM math traces into four reasoning steps
ThinkARM reasoning framework (HuggingPapers): A HuggingPapers thread describes ThinkARM as a way to abstract LLM mathematical reasoning traces into a small set of functional steps—Analysis, Explore, Verify, Reflect—rather than long, unstructured chains of thought ThinkARM summary. The idea is to turn messy reasoning transcripts into composable operations that can be inspected, scored, or trained more systematically.
By reframing math problem solving as transitions between these four modes, ThinkARM aims to make it easier to study where models go wrong (for example, failing in Verify vs. mis‑framing the Analysis) and to design training signals that reward the right kinds of intermediate behavior ThinkARM summary. For researchers working on reasoning‑heavy agents or tutoring systems, this offers a conceptual scaffold for both dataset design and evaluation even though the tweet does not expose concrete benchmark gains yet.
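No code accompanies the thread, but the abstraction is straightforward to picture: a reasoning trace becomes a sequence of segments labeled with one of the four modes, which can then be counted, scored, or turned into a training signal. A hypothetical sketch of that structure (the labeling itself would come from a classifier or annotation step not shown here):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List

class Mode(Enum):
    ANALYSIS = "analysis"   # restate the problem, identify knowns and unknowns
    EXPLORE = "explore"     # try candidate approaches or partial derivations
    VERIFY = "verify"       # check intermediate results against constraints
    REFLECT = "reflect"     # revise the plan after a failed check

@dataclass
class Segment:
    mode: Mode
    text: str

# A toy trace for a simple math problem, already labeled by mode.
trace: List[Segment] = [
    Segment(Mode.ANALYSIS, "We need x such that 3x + 5 = 20."),
    Segment(Mode.EXPLORE, "Subtract 5 from both sides: 3x = 15, so x = 5."),
    Segment(Mode.VERIFY, "Check: 3 * 5 + 5 = 20. Correct."),
]

def mode_histogram(segments: List[Segment]) -> Dict[Mode, int]:
    """Count how often each functional step appears in a trace,
    e.g. to flag traces that never enter VERIFY."""
    counts = {mode: 0 for mode in Mode}
    for seg in segments:
        counts[seg.mode] += 1
    return counts

print(mode_histogram(trace))
```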
📉 Creators vs the feed: engagement farming and shrinking payouts
Multiple posts vent about X’s reduced reach, revenue share compression, and sponsored saturation—community sentiment that shapes where creatives publish. Excludes ethics/IP risk items.
Creators push X to act on engagement farming as organic posts stall
Engagement farming on X (AI creatives): AI-focused creators complain that low-effort engagement-farming accounts dominate the feed while original work struggles to get seen, with calls for X to intervene before more people leave the platform, as voiced in the initial complaint by @techhalla and the follow-on replies from peers Engagement farming rant, Creator reply, Exit warning, Platform switch plan, YouTube alternative.

• Frustration with the feed: One creator describes "garbage accounts" getting views from obvious engagement-bait while serious posts with original AI content get buried, asking X to "do something about this" to protect people investing time and effort in real work Engagement farming rant.
• Talk of migration: Replies show others who "hate this" dynamic and ask when it will be fixed, with explicit plans to focus on other platforms from 2026, and YouTube mentioned as a more viable home for AI content Creator reply, Exit warning, Platform switch plan, YouTube alternative.
For AI artists, filmmakers, and tool educators who rely on X for discovery, this thread captures a growing sense that the feed’s incentives favor low-quality virality over craft and may start actively pushing serious creators elsewhere.
AI illustrator reports falling X reach and shrinking Creator Revenue Sharing
X creator monetization (Artedeingenio): AI illustrator and style-reference creator @Artedeingenio says the X algorithm is "reducing post reach more and more" even with almost 50,000 followers, while Creator Revenue Sharing payouts "keep shrinking," concluding that these are "not good times" to be a content creator on the platform Reach and payout gripe.
• Reach vs follower count: The post highlights a widening gap between follower numbers and actual impressions or engagement, suggesting algorithm changes are throttling visibility for long-time accounts focused on original AI art and workflows Reach and payout gripe.
• Revenue share pressure: Even as more creators lean into X’s revenue programs, this account reports that the creator share is trending downward, weakening the financial case for investing serious time in platform-native AI tutorials, srefs, and narrative threads Reach and payout gripe.
For AI creatives who treated X as both a portfolio and a modest income stream, this adds to evidence that financial upside is shrinking and emotional motivation is becoming the main reason to stay active.
Higgsfield-sponsored posts saturating X feeds raise independence questions for AI creators
Sponsored saturation on X (Higgsfield promos): @Artedeingenio remarks that their For You tab "sometimes" shows nothing but Higgsfield-sponsored posts, saying it feels like the company has "money to buy everyone out" and wondering if any AI content creator is not "on the payroll" besides them Sponsored feed complaint.
• Sponsor dominance in feeds: The complaint comes directly under a Midjourney style-creator thread that showcases a new cinematic sketch illustration look, implicitly contrasting organic style research work with the volume of sponsored Higgsfield placements that appear to crowd it out Sketch style thread, Sponsored feed complaint.
• Perceived loss of independence: The post frames the issue less as “ads are annoying” and more as concern that most visible voices are now paid or sponsored, which can blur the line between honest tool evaluation and marketing when AI artists, animators, and educators decide which models or platforms to explore Sponsored feed complaint.
For AI creatives relying on X to follow authentic peers and discover new tools, this points to rising unease about feed integrity when one vendor’s paid presence begins to feel inescapable.
🎁 Deals and contests worth a quick grab
Promos and showcases that can stretch budgets or boost visibility: discounts, code drops, and creator tourneys. Excludes product availability notes already covered elsewhere.
Freepik users get unlimited Banana Pro generations on Pro plans until Feb 2
Banana Pro unlimited (CharaspowerAI/Freepik): CharaspowerAI announced that Nano Banana Pro has gone unlimited on Freepik for all Pro subscribers until February 2, effectively removing per‑generation caps for that window according to the holiday message in the Banana Pro unlimited. The promo is framed as an early Christmas gift for video and image makers already paying for Freepik’s Pro tier, turning Banana Pro into an all‑you‑can‑use engine for thumbnails, posters, and AI‑assisted storyboards during the campaign period.
The unlimited allowance shifts Banana Pro from a credit‑constrained experiment into a primary production tool for many Freepik users, at least through early February.
Pollo AI launches GPT Image 1.5 with 50% off and 115-credit code blitz
GPT Image 1.5 (Pollo AI): Pollo AI rolled out GPT Image 1.5 access with 50% off for all users this week plus a limited 12‑hour promo where following, retweeting, and replying “GPT Image 1.5” yields a 115‑credit code, as described in the launch pitch in the GPT Image launch. The model is positioned as OpenAI’s new “standard for image perfection,” with Pollo emphasizing uses like commercial posters and consistent character design in follow‑up messaging in the use case teaser.

The bundle of a platform‑wide discount and time‑boxed credit drop makes it cheaper than usual for visual creators to trial GPT‑powered art pipelines or run high‑volume experiments while credits last.
Hedra runs 30% off Creator and Pro plans for holiday video makers
Hedra Creator & Pro (Hedra Labs): Hedra is offering 30% off its Creator and Pro subscriptions, framed as a way to keep all AI video tools in one place while the discount lasts, according to the holiday promo in the Hedra holiday sale. The deal is claimable by commenting “HEDRA HOLIDAYS”, after which the team DMs a unique code, and the ad highlights streamlined editing, asset management, and multi-scene workflows aimed at solo creators and small studios.

The offer targets people already experimenting with AI-generated edits and thumbnails who want to consolidate into a single paid workspace instead of juggling separate tools.
NoSpoon and Infinite Films push creator contest and Tournament of Champions
Tournament of Champions (NoSpoon Studios & partners): NoSpoon Studios and Infinite Films are actively promoting a creator contest that culminates in a Tournament of Champions reel, with GreenFrog Labs showcasing a fast‑cut montage of AI‑driven fight scenes and character work in the tournament overview. GreenFrog Labs notes that its own Lab is entering, per the Lab competition post, while NoSpoon highlights standout submissions like Kallenia’s and Barb’s agent‑driven films and stresses that only two days remain to submit entries in the Kallenia spotlight and Barb entry call.

The event functions as both a visibility boost and a peer benchmark for AI filmmakers working with toolchains like NoSpoon’s own agents, with finalists showcased in shared reels that circulate beyond the individual creators’ feeds.
ImagineArt runs last-call Christmas bundle at 81% off
Christmas deal (ImagineArt_X): ImagineArt_X is running an 81% OFF Christmas promotion on its AI offerings, framed as a “last call” with only a few hours left before the discount disappears according to the reminder amplified in the Christmas sale alert. The tweet pitches it as Santa being “done giving you what you asked for,” signaling that this is an expiring seasonal bundle rather than an ongoing price cut.
The steep discount and tight time window make it a one‑off chance for AI image creators invested in ImagineArt’s ecosystem to lock in access at a fraction of usual pricing before the sale shuts down.
New free portfolio tool offered to AI designers and artists
Free portfolio platform (Azed AI): Azed AI flagged a totally free portfolio solution aimed at designers, AI artists, and anyone needing a place to showcase work, encouraging creatives to “check this” in the brief promo in the free portfolio mention. While the tweet does not name the product or impose a time limit, it positions the tool as free to use rather than a trial or discount, targeting people who want a dedicated home for AI‑assisted visuals and projects without adding another subscription.
For freelancers and small studios building around generative tools, the offer lowers the barrier to standing up a public portfolio separate from social platforms.
🤖 Practical agent stacks for creative workspaces
Light but actionable: a rapid Ai‑Agent‑Skills update and an Obsidian setup mixing Claude, Gemini, and Codex CLIs for collaborative file‑based workflows. Excludes cinematic toolchains.
Ai-Agent-Skills v1.6.0 adds multi-agent install and GitHub repo skills
Ai-Agent-Skills v1.6.0 (SkillCreatorAI): The Ai-Agent-Skills project ships v1.6.0 with a focus on practical orchestration, adding multi-agent install plus the ability to install any GitHub repo as a skill, after "1 week, 15 releases, 285 stars" as stated in the Ai-Agent-Skills update. A separate endorsement from a creator-focused account calls the AI agent skills on their GitHub "seriously solid and practical," signaling traction among people building real creative workflows skills endorsement.
• Multi-agent install: The new installer can now set up multiple agents in one go, which aligns with how creative stacks often mix separate roles for writing, research, storyboard planning, and asset generation Ai-Agent-Skills update.
• GitHub-as-skill: Treating any GitHub repo as an installable skill lowers the friction to plug niche tools (e.g. custom prompt packs, asset taggers, export scripts) directly into an agent stack without bespoke wiring Ai-Agent-Skills update.
• Adoption signal: The 285‑star figure in a week plus explicit praise from AI art educators suggests this is evolving into a shared skill layer, not a single closed agent product skills endorsement.
Obsidian workspace chains Claude, Gemini, and Codex CLIs as file-based agents
Multi-LLM Obsidian stack (koltregaskes): One practitioner describes an Obsidian setup where Claude Code, Gemini CLI, and Codex CLI all run inside the same workspace, each acting as a specialized agent that reads and writes markdown files in its own folder Obsidian setup. The user assigns roles like "Nano Banana for Gemini," "deep research for Code," and "manager for Claude," and notes they "wish I could get them to talk directly," highlighting both the power and current friction of file-based coordination Obsidian setup.
• File-mediated collaboration: Instead of direct agent‑to‑agent messaging, each CLI agent exchanges state via .md files, which is easy to audit and version but slower than a shared memory bus Obsidian setup.
• Role specialization: Splitting responsibilities—research vs. coding vs. orchestration—mirrors how creative teams divide work across concepting, scripting, and production tools, but here it is all routed through one note-taking app Obsidian setup.
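The file-mediated pattern described above reduces to a small polling loop: each agent watches a folder, writes its output into another agent’s folder, and archives what it has handled. A hypothetical sketch (the vault layout, folder names, and handle() stub are illustrative, not the author’s actual setup):

```python
import time
from pathlib import Path

# Hypothetical vault layout: each agent owns an inbox/ and a done/ folder.
INBOX = Path("vault/research-agent/inbox")
OUTBOX = Path("vault/manager-agent/inbox")   # this agent's output feeds the manager
ARCHIVE = Path("vault/research-agent/done")

def handle(note: str) -> str:
    """Placeholder for the real work (e.g. shelling out to a CLI agent on the note)."""
    return f"## Summary\n\n{note[:200]}\n"

def poll_once() -> None:
    """Process every pending markdown note exactly once."""
    for src in sorted(INBOX.glob("*.md")):
        reply = handle(src.read_text(encoding="utf-8"))
        (OUTBOX / src.name).write_text(reply, encoding="utf-8")
        src.rename(ARCHIVE / src.name)       # move so it is not reprocessed

if __name__ == "__main__":
    for d in (INBOX, OUTBOX, ARCHIVE):
        d.mkdir(parents=True, exist_ok=True)
    while True:
        poll_once()
        time.sleep(5)                        # simple polling; no shared memory bus
```

Swapping the handle() stub for a call out to one of the CLIs would reproduce the manager/researcher split the author describes, with all coordination still visible and versionable as markdown files rather than direct agent-to-agent messaging.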