Welcome back to The Prompt,
Executives at OpenAI are increasingly convinced that some of its loudest critics may be backed by its billionaire rivals, such as xAI CEO Elon Musk, Meta’s Mark Zuckerberg and Facebook cofounder Dustin Moskovitz. In recent months, the company has subpoenaed AI safety and governance-focused nonprofits like Encode and the Coalition for AI Nonprofit Integrity (CANI), which have opposed OpenAI’s conversion into a for-profit company, demanding documents related to any involvement by Musk or Zuckerberg in their founding, The San Francisco Standard reported.
Founders at these advocacy nonprofits say they have not received any money from billionaires like Musk or Zuckerberg, insisting that their mission is to ensure AI technology isn’t misused. Still, there are some connections between Encode and Musk: Encode filed an amicus brief in Musk’s lawsuit against OpenAI, and it has received money from the Future of Life Institute, where Musk is an advisor. These ties underscore the intertwined relationships between AI safety nonprofits, frontier AI labs and some of the most powerful entities in AI.
Now let’s get into the headlines.
BIG PLAYS
OpenAI announced today that it plans to add safety controls to ChatGPT that would allow parents to control how ChatGPT responds to their teenagers and to receive notifications if there are signs the teen is in “acute distress.” The company said it is also partnering with mental health experts as well as a network of 250 physicians (including psychiatrists and general practitioners) across 60 countries to help train models so that they are safer for teens.
These moves come on the heels of multiple incidents involving the mental health of ChatGPT users, including a wrongful death lawsuit filed against OpenAI by the family of a California teenager, which alleges ChatGPT encouraged and assisted in his suicide. In another case, a mentally distressed man’s conversations with ChatGPT fueled his paranoia that he was being surveilled, and in early August he killed his mother and himself in their Connecticut home, the Wall Street Journal reported.
“These are not tricky situations in need of a product tweak—they are a fundamental problem with ChatGPT,” said Jay Edelson, lead counsel on the wrongful death lawsuit.
TALENT RESHUFFLE
Apple’s lead AI researcher for robotics, Jian Zhang, left the company to join Meta’s robotics studio, Bloomberg reported. Three more AI researchers left the iPhone maker’s internal AI team, with two going to OpenAI and one leaving for Anthropic. Meanwhile, Meta is already losing some of the newest additions to its AI lab, TBD (short for “To Be Determined”). Avi Verma, a researcher Meta poached from OpenAI who had gone through the onboarding process, reportedly never showed up for his first day.
AI DEAL OF THE WEEK
Claude creator Anthropic raised $13 billion in Series F funding at a $183 billion valuation, making it one of the most valuable startups in the world. The investment was co-led by ICONIQ, Fidelity Management and Research Company and Lightspeed Venture Partners. Anthropic’s products have seen an explosion in demand over the past few months: the company’s annualized run-rate revenue increased from $1 billion at the start of the year to $5 billion as of August 2025, and Claude Code, Anthropic’s programming assistant, has generated $500 million in annualized revenue by itself. Founded in 2021 by siblings Dario and Daniela Amodei along with a group of ex-OpenAI employees, Anthropic has differentiated itself both by focusing on safety and reliability and by targeting enterprise users. It counts some 300,000 businesses as customers, a number that has grown sevenfold in the past year.
The startup also recently published a report on how cybercriminals have used its powerful AI systems to target, exploit and extort enterprises and individuals. In one recent example, a cybercriminal used Claude Code to “vibe hack” 17 different organizations, threatening to expose their private data unless they paid a ransom. North Korean IT workers have also used its technology to spin up fake profiles, complete coding tests and land remote tech jobs at U.S. companies.
DEEP DIVE
On a rainy July morning in a plush Amsterdam suburb, Nathan Xu has camped out at an Italian coffee shop for a full slate of meetings. Smiling, he asks if he can record our conversation and clips a slim memory stick-sized device to his shirt.
With just a click, the pill-shaped gadget starts recording, transcribing and summarizing everything he says, and everything anyone around him says, too. The device, made by Xu’s San Francisco and Shenzhen, China-based startup Plaud, can capture 20 hours of recordings on a single charge, turning them into searchable transcripts by pairing its microphones with Plaud’s own software and a bundle of AI tools like ChatGPT.
Dubbed the NotePin, the gadget has found a fast-growing audience. Since its launch in 2023, Xu has sold over 1 million of the devices to doctors, lawyers and other overworked people with long days and short memories. Unlike many AI companies, Plaud not only makes money, it’s profitable. Between sales of the $159 NotePin and revenue from annual transcription plans starting at $99, the company is on track to bring in $250 million in annualized revenue this year, and Xu brags about margins on par with the 25% Apple makes on every iPhone sold.
That makes Plaud an early front-runner in the race to move artificial intelligence tools off your phone or laptop and onto your body. Xu’s team has already lapped some early American competition, like Rabbit and the now-defunct Humane, which promised AI-powered helpers but delivered costly duds. Investors have plowed close to $350 million into the space, with a new crop of startups like Omi and Limitless releasing wearables, while Amazon just snapped up Bee, a tiny note-taking device startup, for an undisclosed amount. In May, OpenAI spent a stunning $6.4 billion to bring iPhone designer Jony Ive’s future AI device in-house.
And unlike its rivals, Plaud has done it without handouts from venture capitalists. Xu, 34, bootstrapped the company by pooling his savings with his older cofounder Charles Liu, a Shenzhen factory owner, and launching a $1 million crowdfunding campaign.
Read the full story on Forbes.
MODEL BEHAVIOR
An AI-driven education service called Alpha School requires students to study for only two hours a day via personalized lesson plans. Students are encouraged to spend the rest of their time learning real-world skills like financial literacy or bicycle riding. The school, which is coming to Northern Virginia this fall, plans to enroll 25 students in grades K-3 at a price of $65,000 per student per year. At Alpha School, an “AI tutor” guides students through lessons instead of a teacher.
Source: https://www.forbes.com/sites/rashishrivastava/2025/09/02/the-prompt-why-openai-is-subpoenaing-ai-nonprofits/