If it seems like conversations about artificial intelligence and cloud computing are everywhere these days, not just in rarefied tech and investor circles, that’s because they are. And nowhere is that more true than in the entertainment business, where the creative and business classes are both concerned about the possible implications for the crafts they’ve honed for years, and excited about the technologies’ democratizing power.
At the recent Sundance Film Festival, I talked with numerous directors, editors and others using the new AI and camera-to-cloud capabilities transforming the film and TV production process, and with executives at Adobe, which makes the widely used editing software Premiere Pro and other creative tools.
It is a remarkable, fast-evolving time, with breathless predictions both positive and negative, but also some great opportunities to remake the way film and TV (let’s just call them video-based media) have been made for more than a century.
The industry’s embrace has only just begun.
Stars and their representatives are noticing too, as AI technologies pop up in even specialized corners of the industry.
For instance, talent agency CAA signed a strategic partnership with Metaphysic, which is providing “de-aging” tools in Here, a Miramax feature based on a graphic novel. Stars Tom Hanks and Robin Wright will play characters in a single room as they age over many years, from their long-ago youth onward.
For an agency, the appeal of such technology is obvious: its biggest stars/clients are suddenly viable in more roles, for decades longer.
The Bracing Possibilities of the New Technology
But day to day, the possibilities are bracing for editors, directors and others involved in the complex task of creating a video-based project. In conversations with the budding auteurs whose indie and documentary projects fill the Sundance schedule, it’s clear they see Premiere Pro as a high-speed collaborative platform enabling an increasingly wide array of possible creative uses, by an increasingly wide array of possible creators.
The question, as Adobe’s Keane frames it, is how AI can augment human creativity, rather than replace it, as some fear. AI-fueled tools could empower creators with modest or no technical skills, resources, or connections to fashion compelling projects in video and other media. Even now, for professional creators, AI reduces the grind of repetitive tasks like wading through footage for a specific scene, allowing editors and other creators to focus on the (literally) bigger picture.
But it’s bigger than that, said Michael Cioni, Adobe’s senior director of global innovation, and a long-time champion of camera-to-cloud technologies with Adobe-owned Frame.io.
Every Digital Asset A Cloud Asset Too
He puts it simply: “By the year 2031, every electronic asset that is generated in media and entertainment will be generated in the cloud, by the cloud. That’s the truth. I call that a technological certainty, that every electronic asset that gets generated by a device will be generated in the cloud” instead of on, say, a local hard drive, celluloid, videotape, or memory card.
In a familiar cycle of media transformation, text documents were the first to head to the cloud (and to AI). Now they’re routinely created, shared and reshaped on cloud-based services such as Google Docs, Apple Pages, Microsoft Word and Adobe’s Acrobat. Hundreds of millions of people listen to (and create, collaborate, and sell) music on the cloud, thanks to Spotify, Apple Music, Amazon Music, Tidal and others. Now, it’s changing the most complex media we create.
“Let’s say you’re doing a movie scene,” Cioni said. “You’ll be able to actually type in language, after shooting a scene: ‘Make it rain.’ And water will appear in the shot, it’ll start raining. ‘Make it snow,’ and it’ll start appearing in the shot. And you won’t need to be an engineer to do that. That really seems like science fiction, but it will become totally doable. We’re seeing it happen in the very early stages of still (photos), and we know it’ll cascade into video. And that’s a really important attribute of where we have to be thinking about how this affects creative people.”
That has a ton of implications for the future of making moving images. As Leonard Cohen put it a long time ago, as he moved from poet to novelist and singer/songwriter in his 30s, “the borders between a lot of endeavors have faded.”
Many younger filmmakers routinely cross those borders, doing a little bit of everything in making their projects, and collaborating routinely on digital platforms. With camera-to-cloud delivery and AI smarts, they can shoot a scene, then quickly generate a rough “assembly” as a starting place for an edit, while determining whether there’s enough for the scene to “work,” or more filming is needed.
They also can rough in basic visual effects, color correction, and even sound design before turning it over to specialists for fine-tuning. As such, Premiere has become more than a traditional non-linear video editing program for many Sundance creators.
Building A New Relationship Between Humans And Images
“This is one of our biggest passions: the relationship between humans and cameras, and how it impacts society,” said Maximilien Van Aertryck, co-director (and co-editor) of the provocative documentary Fantastic Machine, which won a Sundance special jury award for creative vision. “We’re like anthropologists who like to have fun.”
Their documentary gently calls for a considered approach to the world we’re creating with these new tools. As the old political joke went, “Who you going to believe: me or your lying eyes?” Increasingly, we need to understand when our eyes are lying and when the moving images in front of them should be believed, said Van Aertryck’s co-director and co-editor, Axel Danielson.
“The film doesn’t have many answers,” Danielson said. “It has questions. The only answer we know is we as societies collectively need to take media literacy very seriously.”
The project collates more than a century of moving images, going back to pioneers such as Georges Méliès (whose royal subject would exclaim what became the film’s name). Along the way, it stops at an unnervingly avid Leni Riefenstahl, doing play-by-play of her favorite shots from a Nazi rally at Nuremberg, and an ISIS fighter repeatedly botching his lines in a blooper reel from a terrorist recruitment video. It features far gentler bits too, but the point is, creators great and awful are going to be harnessing these new tools in a lot of ways.
On Fantastic Machine, Van Aertryck and Danielson spent years collecting distinctive clips about the complicated relationship between humans/cameras and the images they create. Once it came time to assemble the film, they worked remotely with editor Mikel Cee Karlsson and executive producer Ruben Östlund, the two-time Oscar-nominated writer-director (Triangle of Sadness) whose Plattform Produktion produced the film.
“We’re not trained editors; we do that because we have to do that,” Van Aertryck said. “Adobe becomes the perfect tool for us, because we can throw everything in there and share it back and forth.”
The four used Premiere’s cloud-based tools to trade clips and sequences as if they were sitting around a table, Van Aertryck said, even though they were spread across Western Europe from the Baltic (Gothenburg, Sweden) to the Balearics (Mallorca). That’s a particularly far-flung example of the freedom the technology gives creators to come from just about anywhere, and work just about anywhere, if they need or want to.
“Being able to share online makes a lot more possible,” said Crystal Kayiza, writer-director of Rest Stop, which won a Sundance jury prize for U.S. short film fiction. “If I want to go back to Oklahoma, I can do that if I’m in on (creating) a short.”
Last year, Adobe announced new direct-to-cloud capabilities in some popular cameras and other production devices that simultaneously transmit video as it is captured. That lets editors, producers, executives and other post-production specialists begin work on footage as soon as a few minutes after it has been shot.
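Conceptually, the pattern is straightforward, even if production-grade versions are not: watch the camera’s storage for each finished take and push it to shared cloud storage the moment it closes. Here is a minimal sketch of that idea in Python; the folder path, file pattern and upload stub are hypothetical stand-ins, not Frame.io’s actual mechanism:

```python
# Minimal sketch of the camera-to-cloud pattern. The paths and the
# upload() stub are hypothetical, not Frame.io's actual API.
import time
from pathlib import Path

RECORD_DIR = Path("/media/camera_card")  # hypothetical mount point for camera media


def upload(clip: Path) -> None:
    # Stand-in for a real cloud upload (e.g., a vendor SDK or multipart HTTP).
    print(f"uploading {clip.name} for remote editors...")


seen: set[Path] = set()
while True:
    for clip in RECORD_DIR.glob("*.mp4"):
        # A clip that has stopped growing for a few seconds is assumed
        # to be a finished take, ready to send to remote collaborators.
        if clip not in seen and time.time() - clip.stat().st_mtime > 5:
            upload(clip)
            seen.add(clip)
    time.sleep(2)
```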
Even small productions are using the technologies. Cloud connections were essential, especially early on, in making Going Varsity in Mariachi, said Daniela T. Quiroz, who won a Sundance editing award.
Directors Alejandra Vasquez and Sam Osborn were still filming the documentary about high school mariachi competitions in Texas’ Rio Grande Valley when Brooklyn-based Quiroz began piecing together the project. Her early, and fresh, view of already-captured material helped shape subsequent filming and areas of focus.
“As editors, we have this unique privilege of seeing these people on camera for the first time,” Quiroz said. “You really get to see who you’re falling in love with and what music you’re falling in love with.”
Doing without those immediately accessible digital dailies, as in the “old days” of just a few years ago, was a necessity in shooting Sometimes I Think About Dying on location in a remote part of Oregon’s coast. Instead, each day’s footage was loaded onto hard drives, then shipped overnight to New Orleans for ingest into digital intermediates. It was a reminder of the improvements technology has brought, said editor Ryan Kendrick.
A New Kind of Collaboration
Even that overnight-shipping workflow allowed Nashville-based Kendrick to begin editing, collaborating with director Rachel Lambert and others across multiple states.
“How cool is it that I can work with who I want to work with, and we’re not tied to the same place,” Kendrick said. It also changes the dynamic between directors, cinematographers, editors and the rest of the production.
“You always felt isolated from the sets” in the past, Kendrick said. “Now, you feel more connected to it. You don’t feel so separated from the tone they’re making. The editing room is a really great place to make tons of mistakes, and do the absolute opposite of what you want to do. I always say, ‘I don’t know. Let’s figure it out.’”
On Sometimes I Think About Dying, Kendrick, Lambert and director of photography Dustin Lane began talking weekly long before shooting actually commenced, to head off potential problems early.
“We would talk about the editing problems that might come up,” Kendrick said. “When you get the scripts, the hardest thing to think about is how the film is going to transition to the next part. Specifically, in the film, (the protagonists) go to the movies. We talked a lot about how we were going to get the images we needed to keep the emotional weight.”
The quick turnarounds possible with Camera-to-Cloud are particularly useful for short projects such as commercials and music videos, Kendrick said.
“Some of the turnarounds in commercials are wild,” Kendrick said. Being able to pass projects back and forth in Premiere Pro, Frame.io and After Effects really was only doable with a cloud-based architecture.
“It’s the biggest (delay in production workflows) that’s been eliminated in the last five years,” said Kendrick.
The Guiding Principles of Building Better Technology
Adobe’s approach to its technologies is built around five “guiding principles”: workflow, media, collaboration, productivity and security, Keane said.
“We really bucket AI into ‘productivity,’” Keane said. “It’s not about replacing the creatives. It’s about replacing the mundane.”
Hollywood’s labor unions are watching the fast-changing AI space closely. Puck reported that the board of Hollywood’s biggest union, SAG-AFTRA, plans to include the issue of likeness rights – using AI to recreate the look, voice or other aspects of its members – in upcoming contract bargaining.
For now, lawyers for talent have been advised that any language in a contract that purports to control the right to simulate an actor’s performance is “void and unenforceable until the terms have been negotiated with the union,” Puck reported.
The complex new issue could make for some heavy-duty contract negotiations this spring. Younger and middle-aged performers wanting to extend their career opportunities may embrace some uses of the technology, but without strong contract protections, some outlets might use the tech as a way to avoid paying for digital performances that aren’t technically the actors’ own.
One crude example of what eventually may come could be found for weeks on Twitch, where “watchmeforever” streamed an endless, AI-generated “episode” of Seinfeld. The “show” featured poorly inflected, computer-generated voices, simple graphics, and jokes that weren’t particularly funny. But it also suggests what may eventually be possible at a much higher quality level. It’s worth noting the endless loop was interrupted with a shutdown notice when the “Jerry” character’s stand-up routine veered into what was deemed anti-trans material.
Opening Up the World to Creative Pursuits
On a far more positive note, Adobe’s Keane points to the far broader access to creative tools that AI and the cloud will power. The cloud eliminates the need for big render farms to process images and sound, generate effects and the rest, while still providing the compute power and security that any moderately ambitious video project needs.
“If you can centralize the cloud, you can open up the possibility of having stories coming from all over the world,” Keane said. “Collaboration needs to be at every stage. Shared perspective either reinforces your ideas or challenges you further.”
As an example of the importance of security for filmmakers, Keane pointed to Plan C, a Sundance documentary about abortion. Its creators needed to do multiple blurs of the faces and voices in some on-camera interviews, and needed ironclad security for the unblurred original media as well.
The explosion in generative AI, deepfakes and related issues also means that authenticity is more important than ever, Keane said. Adobe is a member of the Content Authenticity Initiative, alongside other media and tech companies such as the New York Times.
Keanu Reeves, star of the iconic sci-fi franchise The Matrix, is less sanguine. He recently told Wired that one malicious use of AI-assisted video, creating deepfakes, is “scary.” Reeves’ contracts routinely include a clause forbidding studios from digitally manipulating his performance, a provision dating back decades to a production that inserted a virtual tear on his face.
“What’s frustrating about that is you lose your agency,” Reeves said of invasive practices. “When you give a performance in a film, you know you’re going to be edited, but you’re participating in that. If you go into deepfake land, it has none of your points of view. That’s scary. It’s going to be interesting to see how humans deal with these technologies.”
The Next Game-Changing Technology
The next edge-pushing technology, already available in beta in Premiere Pro, is so-called text-based editing, which lets a person edit video by editing the text of dialogue that AI has extracted and transcribed from the footage. The AI also highlights important sound bites, can build an initial assembly of the project, and enables keyword searches to quickly find specific clips or conversations. The data built for the project becomes a powerful new way to manage all that footage efficiently.
“You’re hydrating the media with all this rich data,” Keane said.
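The underlying idea is that a transcript with word-level timestamps doubles as an edit interface: delete words from the text, and the corresponding stretches of video fall out of the cut. Here is a toy sketch of that mapping in Python; the Word structure and cuts_for_kept_text helper are illustrative inventions, not Adobe’s implementation:

```python
# Toy model of text-based editing: a transcript with word-level timestamps
# maps edits in the text onto (start, end) ranges of the source video.
from dataclasses import dataclass


@dataclass
class Word:
    text: str
    start: float  # seconds into the source clip
    end: float


def cuts_for_kept_text(transcript: list[Word], kept: set[int]) -> list[tuple[float, float]]:
    """Return the (start, end) ranges to keep, merging adjacent kept words."""
    ranges: list[tuple[float, float]] = []
    for i, word in enumerate(transcript):
        if i not in kept:
            continue  # the editor deleted this word from the text
        if ranges and abs(ranges[-1][1] - word.start) < 0.05:
            ranges[-1] = (ranges[-1][0], word.end)  # extend the open range
        else:
            ranges.append((word.start, word.end))
    return ranges


# Example: drop the filler word ("um") and keep the rest.
transcript = [Word("So", 0.0, 0.2), Word("um", 0.2, 0.5),
              Word("let's", 0.5, 0.8), Word("begin", 0.8, 1.2)]
print(cuts_for_kept_text(transcript, kept={0, 2, 3}))  # [(0.0, 0.2), (0.5, 1.2)]
```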
Some smaller companies have already launched their takes on similar technologies. Descript and Podcastle can create a digital version of your voice, then have it read text-based scripts, with no further recording needed. Descript also recently added text-based editing for video, whether for podcasts or other content.
Kendrick said he’s already used Adobe’s text-based editing tools to help a friend quickly turn around a mini-documentary shot in Alaska. Text-based editing also is likely to be immensely useful for so-called “preditors,” producer/editors who typically must wade through days of footage to quickly assemble unscripted shows on tight timelines, Kendrick said.
Audio Books, Scripts, Social Media Getting AI Too
AI is edging into lots of other corners of entertainment too. Apple, for one, recently launched AI-narrated audiobooks, using synthetic “digital narration” voices for work long done by human readers.
There’s certainly reason to treasure the best work of those human narrators, and it may be a long time before an AI tool can come close to the compelling voices and performances of the best human readers on the best books.
But the cost of using that talent is far beyond the finances of most of the tens of thousands of books released every year. That’s where Apple’s tool suite could provide access for those small-time books, authors and publishers to a previously out-of-reach market already worth an estimated $1.5 billion worldwide.
Startup Filmustage uses AI to break down scripts into their component elements, including the cast, props, costumes, vehicles, sounds, locations and more.
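As a toy illustration of what such a breakdown tool automates (not Filmustage’s actual method, which presumably relies on trained models rather than keyword lists), a sketch can parse standard scene headings and flag likely production elements:

```python
# Toy script breakdown: parse scene headings (slug lines) and flag a few
# likely props. Real tools use trained models, not keyword lists.
import re

SLUG = re.compile(r"^(INT|EXT)\.?\s+(?P<location>.+?)\s*-\s*(?P<time>DAY|NIGHT|DAWN|DUSK)\s*$")
PROP_WORDS = {"gun", "phone", "letter", "car"}  # illustrative only


def breakdown(script: str) -> dict[str, set[str]]:
    elements: dict[str, set[str]] = {"locations": set(), "times": set(), "props": set()}
    for line in script.splitlines():
        m = SLUG.match(line.strip())
        if m:
            elements["locations"].add(m.group("location"))
            elements["times"].add(m.group("time"))
        else:
            for word in re.findall(r"[a-z']+", line.lower()):
                if word in PROP_WORDS:
                    elements["props"].add(word)
    return elements


script = """EXT. DESERT HIGHWAY - DAY
A battered car idles. JO checks her phone, then reaches for the letter."""
print(breakdown(script))
# locations: {'DESERT HIGHWAY'}, times: {'DAY'}, props: {'car', 'phone', 'letter'}
```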
And on the short, short video side of the business, TikTok, Meta’s Reels and Alphabet’s YouTube Shorts all depend on machine learning and AI tools to serve up the next “recommended” video they think you’ll want to watch, and to connect ads to that experience in a way that’s more seamless and engaging.
Analyst Rich Greenfield of LightShed Partners called the Facebook shift to improved AI-driven content recommendations extremely expensive, but “ultimately it could prove (Meta CEO Mark) Zuckerberg’s second-most impressive pivot (after mobile) on a platform that has constantly evolved since its inception.”
Source: https://www.forbes.com/sites/dbloom/2023/02/24/how-ai-and-the-cloud-are-erasing-the-borders-in-making-movies-and-tv-shows/