Deepfake Detection Is A Booming Business

Welcome back to The Prompt,

Facebook, Instagram, WhatsApp, Threads. Now Meta is launching yet another app, this time focused on artificial intelligence. Meta AI is the social media giant’s answer to OpenAI’s ChatGPT. The standalone app is built on the company’s latest model, Llama 4, and lets users spin up images and search for information. The app can also be connected to users’ Meta accounts for a more personalized experience. A voice mode lets people hold conversations with the AI, though it doesn’t have real-time access to the internet.

Now let’s get into the headlines.

BIG PLAYS

The U.S. House of Representatives has passed the Take It Down Act, which makes it illegal to distribute nonconsensual pornographic images (including those generated with the help of AI) and requires social media platforms to remove such images within 48 hours of being reported. The bipartisan bill, endorsed by First Lady Melania Trump, comes as nonconsensual sexually explicit deepfakes spread rampantly across platforms like Reddit, eBay, Etsy and many others following a surge in the popularity of AI tools.

TALENT RESHUFFLING

Language learning app Duolingo plans to stop paying contractors for work that can be done by AI, its billionaire CEO Luis von Ahn said in an all-hands email to employees. It also plans to make AI use a deciding factor in performance reviews and hiring, and to allocate human headcount only to jobs that can’t be automated. The company, which is building an AI tutor to help people learn new languages, has added a slew of AI features to its app, from an interactive game to a video-calling AI “friend.” Shopify CEO Tobi Lutke recently shared a similar note with his employees regarding AI use.

ETHICS + LAW

Meta’s AI companions, often modeled after popular celebrities and fictional characters, can engage in sexually explicit and romantic role-play conversations with underage users as well as adults, according to multiple tests conducted by The Wall Street Journal. Senior leaders at the social media behemoth were reportedly aware of the chatbots’ tendency to veer into risqué and explicit discussions, and multiple staffers flagged their concerns internally.

DEEP DIVE

In January of last year, Atlanta-based startup Pindrop, a robocall- and fraud-busting platform used mostly by call centers, had its 15 minutes of fame defending the president. AI technology had been used to clone and impersonate then-President Joe Biden’s voice in robocalls to New Hampshire, discouraging Democrats from voting. Pindrop was referenced across national media outlets as it accomplished what only a few in the space could: it identified the fraud at play and leveraged its massive collection of audio recordings to figure out what technology was used.

Flash forward more than a year, and Pindrop has passed a new milestone in its more than 10 years of operation, reaching annual recurring revenue (signed contracts) of more than $100 million. That growth is built on an increasingly lucrative offering in this new age of AI: fighting deepfakes, the digitally created hoax recordings, images or videos often used for nefarious purposes. “Its growth reflects both the urgency of the challenge and the standout accuracy of its platform,” Martin Casado, a general partner at Andreessen Horowitz, a Pindrop investor, told Forbes.

Pindrop offers three main products that combat fraud and identity theft. Its core products authenticate phone calls by verifying the caller’s voice or confirming that they’re calling from a trusted device. In 2024, it bolstered its lineup with a new product that uses AI to determine whether a caller is a human or a machine. Pindrop’s services are already used at the call centers of eight of the ten largest banks to screen calls, identify suspicious speech patterns and out fraudsters. And the company has been making inroads into health care and retail in recent years.

Fighting voice impersonation hasn’t always been a booming business. Pindrop entered the deepfake space in 2017 and drew notice for identifying AI-generated voice clips in a 2021 documentary about chef Anthony Bourdain. Those early detection capabilities evolved into its proprietary deepfake-detection product.

Read the full story on Forbes.

WEEKLY DEMO

OpenAI has added new shopping-related features to ChatGPT that let people search for products, compare them based on reviews and see visual details about them. The search results direct people to the retailer’s site, where the transaction can be completed. OpenAI said the chatbot’s answers are not ads and are determined independently.

MODEL BEHAVIOR

Researchers from the University of Zurich secretly conducted an experiment on users of Change My View, a subreddit where people post their opinions on different topics and invite others to challenge them. The researchers used AI bots to influence people’s opinions by writing and posting hundreds of AI-generated comments. The bots, which personalized their responses based on the political orientation, age, gender and other attributes of the original poster, were about three to six times more successful than humans at persuading people, the study found.

Source: https://www.forbes.com/sites/rashishrivastava/2025/04/29/the-prompt-deepfake-detection-is-a-booming-business/