Smallest says Lightning V3.1 can clone a voice from about 10 seconds of audio with 44.1kHz output, sub-100ms latency and 50-plus languages on Waves. Test it for multilingual narration and dubbing, but get explicit permission before cloning any voice.

The release centers on a narrow but useful promise: dramatically less source audio. According to the speed demo, Lightning V3.1 needs about 10 seconds to build a clone, versus the much longer samples many voice tools have historically required. The same launch thread claims 44.1kHz output and sub-100ms latency, which points to cleaner export quality and faster preview loops for creators working on voice-led content.
Smallest's tech breakdown says the model runs on Waves and supports 50-plus languages. The Waves entry point is already live, and the company says the tool is free to test.
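Claims like "44.1kHz output" are easy to sanity-check once you have a clip back from the tool. The sketch below, a minimal check using only Python's stdlib `wave` module, builds a tiny stand-in clip so it runs without the API; the file name and the assumption that Waves exports WAV are hypothetical.

```python
# Hypothetical check: confirm a returned clip really is 44.1 kHz.
# "clone_sample.wav" and the WAV export format are assumptions; only
# the stdlib `wave` inspection itself is guaranteed to work.
import struct
import wave

def audio_properties(path):
    """Return (sample_rate_hz, channels, sample_width_bytes) of a WAV file."""
    with wave.open(path, "rb") as wf:
        return wf.getframerate(), wf.getnchannels(), wf.getsampwidth()

# Build a 0.1 s stand-in clip (44.1 kHz, mono, 16-bit silence) so the
# check is runnable without calling any API.
with wave.open("clone_sample.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(44100)
    wf.writeframes(struct.pack("<h", 0) * 4410)

rate, channels, width = audio_properties("clone_sample.wav")
assert rate == 44100  # matches the claimed studio-quality output
```

If a tool's export is secretly upsampled from a lower rate this check won't catch it, but it will catch a mislabeled 22.05 kHz or 24 kHz file.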
The clearest creator angle is voice reuse without another recording session. In the use-case demo, the examples focus on narrating reels and tutorials in your own voice, generating podcast intros and ad reads, and using one English sample to speak other languages including Spanish, Hindi, French, and Japanese. That makes the release more interesting for dubbing and rapid social edits than for one-off novelty clones.
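The reuse workflow above is essentially one reference clip in, one narration file per language out. A minimal sketch of that loop follows; `synthesize` is a stand-in for whatever call Waves actually exposes, and its name, arguments, and return value are all assumptions, not the real API.

```python
# Sketch of the multilingual reuse workflow: one English reference
# sample, many target languages. `synthesize` is a placeholder, NOT
# the real Waves API.
LANGUAGES = ["es", "hi", "fr", "ja"]  # Spanish, Hindi, French, Japanese

def synthesize(reference_wav, text, language):
    """Placeholder for a cloning call; returns a per-language output name."""
    return f"narration_{language}.wav"

def dub_all(reference_wav, text, languages=LANGUAGES):
    """One reference clip in, one narration file per language out."""
    return {lang: synthesize(reference_wav, text, lang) for lang in languages}

outputs = dub_all("my_voice_10s.wav", "Welcome back to the channel.")
```

The point of the dict-keyed-by-language shape is that dubbing and social edits usually need all variants at once, not one ad-hoc clone at a time.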
Quality is the open question in every cloning launch, and the side-by-side demo is the main evidence here. The post claims most listeners cannot easily tell the real and cloned voices apart, though that is still the company's own showcase rather than an independent test. Even so, the combination of short input, multilingual output, and fast generation is the real production shift.
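One way to move past a vendor showcase is a simple ABX listening trial: the listener hears the real voice (A), the clone (B), then X, which is randomly one of the two, and guesses which X was. A score near 50% over many trials means the clone is genuinely hard to distinguish. This is a generic sketch of that protocol, not anything Smallest ships.

```python
# Minimal ABX trial bookkeeping: randomize which source X is drawn
# from, then score a listener's guesses. Playback itself is out of
# scope; this only handles trial generation and scoring.
import random

def make_abx_trials(n_trials, seed=0):
    """Each trial records whether X was drawn from 'real' or 'clone'."""
    rng = random.Random(seed)
    return [rng.choice(["real", "clone"]) for _ in range(n_trials)]

def score(trials, answers):
    """Fraction of trials the listener identified correctly.
    Near 0.5 over many trials = clone is hard to tell apart."""
    correct = sum(t == a for t, a in zip(trials, answers))
    return correct / len(trials)

trials = make_abx_trials(20)
```

Fixing the seed makes a session reproducible, so the same trial order can be replayed across listeners.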
KittenML's latest open-source TTS release spans models from 15M to 80M parameters, with the smallest coming in under 25MB on disk and the largest reportedly running faster than realtime on CPU. Audio creators should test pronunciation and install overhead before betting on it for edge or local voice tools.
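"Faster than realtime on CPU" has a concrete meaning worth measuring yourself: the model writes audio quicker than the audio takes to play. One common way to express this is a real-time factor (RTF), synthesis time divided by audio duration, where below 1.0 is faster than realtime. The timings below are illustrative, not KittenML measurements.

```python
# Real-time factor as used here: synthesis wall-clock time divided by
# the duration of the audio produced. RTF < 1.0 means synthesis
# outpaces playback. Example numbers are made up for illustration.
def real_time_factor(synthesis_seconds, audio_seconds):
    """RTF < 1.0 = faster than realtime."""
    return synthesis_seconds / audio_seconds

# e.g. 2.0 s of compute for a 10.0 s clip:
rtf = real_time_factor(2.0, 10.0)
assert rtf < 1.0  # faster than realtime
```

In practice, wrap the model's generate call in `time.perf_counter()` and read the audio duration from the output file to get both inputs.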
update: OpenAI has removed the Sora app as creators and Hacker News users debate whether its novelty ever turned into durable usage. Save projects now and plan to test ChatGPT-integrated or rival video tools next.
update: CapCut is expanding Dreamina Seedance 2.0 while Topview restored access within 24 hours, and creators are stress-testing it for vertical repurposing, long prompts and stylized start frames. Try it for fast video conversions, but budget cleanup passes for continuity and transitions.
prompt: Creators are turning Nano Banana 2 templates into reusable prompt systems for merch shots, sports ads, editorial portraits and modular scene builds. Keep the scaffold fixed and swap only brand, lens, action or environment variables to iterate fast.
workflow: Riverside's Co-Creator reads transcripts automatically and turns chat-style requests into cuts, captions, thumbnails and social copy from one workspace. Use it when you need fast repurposing without timeline scrubbing, then polish the output by hand.
You can now clone your voice in 10 seconds. Lightning V3.1 just dropped on Waves and it sounds indistinguishable from the real thing. 44.1kHz studio quality. Under 100ms latency. 50+ languages. Here's how with real examples:
The quality comparison is wild. Real voice vs. cloned voice side-by-side. Most people can't tell the difference. 44.1kHz output means it sounds better than most microphone setups creators are actually using.
Here's what you can actually do with it:
→ Narrate your reels and tutorials in your own voice without recording
→ Clone your English voice and speak fluent Spanish, Hindi, French, Japanese
→ Generate podcast intros, outros, and ad reads in seconds
→ Create accessibility …