The AI Future Is Bright But You Gotta Look At Shades Of Perspective

The rise of AI and machine learning is nothing new in our world, or in our industry. With AI-generated artwork and stories gaining online attention, it has become a key point of conversation among entertainment professionals and audiences alike. Most recently, the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) is on the precipice of a strike with AI as one of the central sticking points, and the Writers Guild of America is on strike with AI as a material agenda item. So, what does AI mean for the humans behind the content we create and consume? How does it alter the future of the entertainment landscape as we know it? And could it inadvertently lead to our own Terminator-esque “Judgment Day”?

All these questions and more made it a timely topic to tackle at the virtual artificial intelligence panel at the 47th Annual UCLA Entertainment Symposium. Each year, this event brings leading lawyers, executives, agents, managers, and producers together with students to discuss the latest trends and insights in entertainment. One of the panels in the 2023 lineup delved into the ever-growing impacts of AI and machine learning in the industry. It left the audience feeling as if it had just watched a modern-day installment of The Terminator, except it’s actually real life and all happening very fast.

As Co-Chair of the annual entertainment law symposium, I found this particular panel to be exactly like watching a movie you cannot get out of your head. Each panelist represented a unique and valid perspective on AI, and I think each point of view should be contemplated as we bring legislation, ethics, labor regulation, litigation, and capitalism to bear on this rapidly advancing technology.

“Profit motive is the most dangerous factor,” said panelist Ted Schilowitz, Futurist-In-Residence at Paramount (yes, that is an actual job title and the coolest one I have seen at a studio in a long time).

Schilowitz’s view is that of a true visionary. He pointed out that AI stands to disrupt all forms of creative pursuits, particularly those inspired and influenced by past creations (from entertainment to law to any other endeavor where creative human thinking can be enhanced or perhaps replaced by AI). He believes that using blockchain technology to link a body of work to a unique name and likeness would allow the owner of that image to track each use and decide, in every instance, whether the use is actionable, should be compensated, or is okay to let go without consideration.

What Schilowitz is presenting as the future is one where AI creates the content and blockchain then monitors its use – now we are starting to feel a little RoboCop. But his point is valid. The sheer inability of humans (even with a deep bench of reps) to track how AI may or may not use their image or copyrightable materials is an inevitable issue, and building an accountable solution now is essential to ensuring that the profit motive is honest. Schilowitz also foretold that while the 2020s are about bandwidth and data, in the not-too-distant future (maybe even as soon as the 2040s) we will reach the age where technology and AI create the actual entertainment itself. Therefore, establishing these measurable practices now is essential to ensure that those who profit are the ones who actually “earned” it.
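To make the idea a bit more concrete, here is a minimal, purely illustrative sketch of the kind of likeness registry Schilowitz describes. Every name, class, and field below is hypothetical and invented for illustration; a real system would presumably live on a blockchain or other tamper-evident ledger rather than in memory. The core pattern, though, is the one he outlines: each use of a registered likeness is recorded against a fingerprint of the work, and the owner reviews and decides on each instance individually.

```python
# Hypothetical sketch of a likeness-usage registry (not a real product or API).
# Each recorded use points at a hash of the work, and the owner decides per
# instance whether to allow it, be compensated, or treat it as actionable.

from dataclasses import dataclass, field
from enum import Enum
from typing import List
import hashlib
import time


class Decision(Enum):
    PENDING = "pending"
    ALLOWED = "allowed"          # let go without consideration
    COMPENSATED = "compensated"  # approved in exchange for payment
    CONTESTED = "contested"      # owner treats the use as actionable


@dataclass
class UsageRecord:
    work_hash: str   # fingerprint of the AI output that used the likeness
    user: str        # who generated or distributed the work
    timestamp: float
    decision: Decision = Decision.PENDING


@dataclass
class LikenessEntry:
    owner: str                                      # person whose likeness is registered
    uses: List[UsageRecord] = field(default_factory=list)


class LikenessRegistry:
    """Append-only log of likeness uses, reviewed case by case by the owner."""

    def __init__(self) -> None:
        self._entries = {}

    def register(self, owner: str) -> None:
        self._entries[owner] = LikenessEntry(owner=owner)

    def record_use(self, owner: str, work_bytes: bytes, user: str) -> UsageRecord:
        # Hash the work so the record points at a specific, verifiable artifact.
        record = UsageRecord(
            work_hash=hashlib.sha256(work_bytes).hexdigest(),
            user=user,
            timestamp=time.time(),
        )
        self._entries[owner].uses.append(record)
        return record

    def decide(self, owner: str, work_hash: str, decision: Decision) -> None:
        # The owner reviews a specific use and records a per-instance decision.
        for record in self._entries[owner].uses:
            if record.work_hash == work_hash:
                record.decision = decision


if __name__ == "__main__":
    registry = LikenessRegistry()
    registry.register("Performer A")
    use = registry.record_use("Performer A", b"ai-generated trailer", "Studio X")
    registry.decide("Performer A", use.work_hash, Decision.COMPENSATED)
    print(use)
```

The design choice worth noticing is the per-instance decision: rather than a blanket license, every use gets its own record and its own ruling, which is exactly the accountability Schilowitz argues has to be built before the profit motive runs ahead of it.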

For a successful and prolific talent lawyer like panelist P.J. Shapiro, founding partner at Johnson Shapiro Slewett & Kole LLP, the key for now is to ensure that his high-level talent clients are preserving their “private cause of action.”

When asked what a talent attorney would do when AI amalgamates thousands of performers to create the ultimate bad guy, Shapiro said he is rightfully in a “wait and see” period, at least while legislation, litigation, and perhaps even emerging class action lawsuits play out, along with resolutions from the talent unions.

Another panelist, Travis Cloyd, CEO of Worldwide XR and Global Futurist at Thunderbird School of Global Management, took an optimistic approach but emphasized the need for those on the forefront of AI to focus on its ethical application. His company focuses on creating images and using AI for the estates he manages, including efforts to “deepfake” a performer’s work. For Cloyd, one has to follow ethical guardrails to ensure that the technology does not violate our principles around how performances and images are used. Businesses established and run by people like Cloyd allow our society to start forming foundational principles about what constitutes culturally acceptable, ethical practice around AI. This is important, as it will influence how legislation adopts these ethical practices and how the inevitable litigation in this space will unfold.

With comprehensive questions from moderator Nathaniel L. Bach, partner at Manatt, Phelps & Phillips, LLP, this panel went from deepfakes to pondering whether you would implant a device in your body for entertainment experiences (just as one implants a hearing aid or heart monitor). Most of the panel apprehensively accepted that the question was not if, but when. Bach’s legal point of view sets a framework for our immediate approach to AI: we need legislation to vet these emerging use cases, and when we use AI to explore our creative curiosity, we need a “baseline to determine where inspiration is crossing a legal line.”

For now, let’s pre-order our entertainment implants and use our profiteering AI endeavors as ethically as possible until we are regulated and monitored by blockchain.

Source: https://www.forbes.com/sites/elsaramo/2023/06/22/the-ai-future-is-bright-but-you-gotta-look-at-shades-of-perspective/