Lightricks, the Israeli AI startup best known for viral mobile apps like Facetune and Videoleap, is pushing deeper into professional production territory with a technical milestone that sets it apart from its peers in generative video. With the release of its new autoregressive video model, LTXV, the company claims it can now generate clips over 60 seconds long, eight times the current standard length for AI video. That puts it ahead of OpenAI’s Sora, Google’s Veo, and Runway’s Gen-4, none of which yet supports real-time rendering at this scale.
According to CEO and co-founder Zeev Farbman, this breakthrough “unlocks a new era for generative media,” not just because of length, but because of what extended sequences enable: narrative. “It’s the difference between a visual stunt and a scene,” Farbman told me in a recent interview. “AI video becomes a medium for storytelling, not just a demo.”
LTXV’s new architecture streams video in real time, returning the first second almost instantly and building the rest on the fly. The system uses small chunks of overlapping frames to condition what comes next, allowing continuity of motion, character, and action throughout the sequence. It’s the same autoregressive approach that powers large language models like ChatGPT, applied to visual storytelling frame by frame.
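The chunked autoregressive loop described above can be sketched in a few lines of Python. This is purely illustrative: the chunk and overlap sizes, the `generate_chunk` function, and the placeholder model call are all assumptions for the sketch, not LTXV's actual API or internals. The key idea it shows is that each new chunk is conditioned on the tail of what has already been emitted, so frames can stream out as soon as they are ready while motion stays continuous.

```python
from collections import deque

CHUNK = 8    # frames produced per generation step (illustrative)
OVERLAP = 2  # trailing frames carried over as conditioning (illustrative)

def generate_chunk(model, prompt, context):
    """Placeholder for the model call: returns CHUNK new frames
    conditioned on the prompt and the overlapping context frames.
    A real system would run a diffusion/transformer step here."""
    return [f"frame(prompt={prompt!r}, ctx={len(context)})" for _ in range(CHUNK)]

def stream_video(model, prompt, total_frames):
    """Autoregressive streaming: keep only the last OVERLAP frames as
    context for the next chunk, and yield frames as they are produced."""
    context = deque(maxlen=OVERLAP)
    emitted = 0
    while emitted < total_frames:
        chunk = generate_chunk(model, prompt, list(context))
        for frame in chunk:
            yield frame            # the viewer sees this frame immediately
            context.append(frame)  # retain the tail for conditioning
            emitted += 1
            if emitted >= total_frames:
                return

# Usage: the first frames are available after a single chunk,
# long before the full sequence is finished.
frames = list(stream_video(model=None, prompt="woman cooking", total_frames=24))
```

Because the generator yields inside the loop, a player can start displaying output after the first chunk rather than waiting for the whole clip, which is what makes near-instant first-second delivery possible.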
I saw a working demo on a Zoom call last week. Most systems, including top models like Veo 3, Runway Gen-4, and Kling, make you wait minutes for generations. LTX is much faster. The system rendered a continuous 60-second scene of a woman cooking as a gorilla entered the kitchen and hugged her. The video streamed as it was generated, with very few pauses. Another scene showed a car passing under a bridge, emerging on the other side, and continuing its journey, all without jarring cuts or jumps in logic.
Particularly notable is that LTXV is open source, not locked behind a proprietary API. The model will be made available as open weights on GitHub and Hugging Face. It’s free to use for individuals and small teams generating less than $10 million in revenue. Farbman says this aligns with Lightricks’ strategy of “open development for real-world application,” empowering both indie creators and developers to build on the core engine.
From a technical perspective, the new model is fast and light. It runs on a single Nvidia H100, or even on high-end consumer GPUs. By contrast, Farbman points out, public benchmarks for other models often require multiple H100s just to produce five seconds of high-resolution video.
The implications go far beyond YouTube clips. Lightricks envisions uses in advertising, real-time game cutscenes, adaptive educational content, and augmented reality performances. Imagine an AR character performing onstage with a musician, rendered live and reacting in real time. “We’ve reached the point where AI video isn’t just prompted, but truly directed,” added Yaron Inger, co-founder and CTO. “This leap turns AI video into a longform storytelling platform, and not just a visual trick.”
This is part of a broader roadmap for LTX Studio, the company’s browser-based production platform that offers script-to-scene authoring, character tracking, and style consistency. Multimodal support, including motion capture and audio-based conditioning, will be released soon. Next up: 4K video output and seamless frame interpolation for smoother motion.
Farbman was quick to acknowledge that there’s still work to be done. “Prompt adherence in longform content is the next big frontier,” he said. “We’re seeing dramatic improvements, but scenes with complex interpersonal action are still hard.” Still, what I saw was far beyond what most AI video tools can manage today.
As for monetization, Farbman says Lightricks is in talks with larger studios and platforms about commercial licensing and revenue share deals, while keeping development open for the broader creative community. “We believe AI filmmaking shouldn’t just be for engineers,” he said. “It should be for storytellers.”
Source: https://www.forbes.com/sites/charliefink/2025/07/16/ltx-video-breaks-the-60-second-barrier-redefining-ai-video-as-a-longform-medium/