OpenAI is shutting down the Sora app, and creators and Hacker News users are debating whether its novelty ever turned into durable usage. Save your projects now and plan to test ChatGPT-integrated or rival video tools next.

Posted by mikeocool
The post from @soraofficialapp announces: "We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team." The post confirms the discontinuation of OpenAI's Sora AI video generation app and related services.
OpenAI’s message is short but consequential: the company is “saying goodbye” to the Sora app and says more details are coming on app and API timelines, along with project-preservation steps. That means creators who used Sora as a dedicated destination for text-to-video or video experimentation now have an immediate continuity problem, even before the migration details arrive.
A creator post translating the news into workflow terms says the standalone app is being folded into a broader ChatGPT experience, citing The Hollywood Reporter for that claim. Until OpenAI publishes those specifics itself, the only confirmed facts are the shutdown and the promise of later guidance on preserving work.
For creatives, the discussion suggests that AI video tools still struggle to move beyond novelty into something people keep using. Commenters describe Sora as fun but disposable, and several argue that the biggest visible outputs of video gen so far have been more hype, deepfake risk, or weird viral clips than dependable creative workflows.
The most concrete critique is about repeat usage, not output quality alone. In the thread summary, commenters say Sora had strong novelty value but weak staying power; one argues that usage fell off within weeks, a fatal pattern for a compute-heavy consumer product without an obvious path to sustained revenue.
Today’s discussion adds a sharper economic read: multiple commenters argue Sora was a flash-in-the-pan consumer product whose usage quickly fell off after the novelty wore off, making it hard to justify the compute cost without a clear monetization path. Others contrast it with more durable AI businesses, especially coding tools, and suggest OpenAI is reallocating scarce GPU capacity toward uses that are easier to monetize or that fit a more sustainable product strategy. There’s also fresh thread-level skepticism about the original hype. Commenters say the app never lived up to its “reality simulator” aura once it was broadly available, and some frame the shutdown as evidence that generative video for consumers is still more novelty than habit-forming utility. A few comments pivot to the broader social downsides of video generation (deepfakes, disinfo, and low-quality viral content), though that is more a reaction than a new factual development.
That view hardened as the thread grew. The fresh discussion frames the shutdown less as a verdict on all AI video and more as a resource-allocation decision: if GPU capacity is limited, coding and enterprise tools look easier to justify than a creator app people sample a few times and stop opening.
Thread discussion highlights:
- eigenvalue on compute economics: Sora had fun novelty value, but usage collapsed to zero after a couple of weeks; the commenter argues that is devastating for a product with huge compute costs and no short-term monetization path.
- bbayer on coding tools as a durable AI use case: the move is logical because OpenAI should assign GPU time to more profitable businesses, contrasting Sora with text/code generation targeted at developers and enterprises.
- christianqchung on consumer video apps: the original Sora launch was boring to them as a non-creative user, and disinfo/deepfake examples have soured expectations for genuinely positive creative uses of video gen.
For creative workflows, the immediate issue is not a new feature set but a missing home base. OpenAI has acknowledged the need for preservation details, which implies projects, exports, or API-dependent pipelines may need attention once those instructions land.
The reaction from creators is blunt. One user replied that they used Sora only about three times and did not know anyone relying on it for practical work. That anecdotal view matches the larger thread: the shutdown is being read as evidence that consumer AI video still struggles to become a habit-forming production tool, even as rival tools remain available and the broader category keeps moving.
A Freepik Spaces walkthrough shows how creators are combining camera-shot footage, Nano Banana 2 images and Kling Motion Control in one music-video pipeline. Use it when you want stylized performance pieces without juggling as many separate tools.
Update: CapCut is expanding Dreamina Seedance 2.0 while Topview restored access within 24 hours, and creators are stress-testing it for vertical repurposing, long prompts and stylized start frames. Try it for fast video conversions, but budget cleanup passes for continuity and transitions.
Prompt: Creators are turning Nano Banana 2 templates into reusable prompt systems for merch shots, sports ads, editorial portraits and modular scene builds. Keep the scaffold fixed and swap only brand, lens, action or environment variables to iterate fast.
Workflow: Riverside's Co-Creator reads transcripts automatically and turns chat-style requests into cuts, captions, thumbnails and social copy from one workspace. Use it when you need fast repurposing without timeline scrubbing, then polish the output by hand.
Release: Smallest says Lightning V3.1 can clone a voice from about 10 seconds of audio with 44.1kHz output, sub-100ms latency and 50-plus languages on Waves. Test it for multilingual narration and dubbing, but get explicit permission before cloning any voice.