Artificial Intelligence Is Too Important To Be Policed By AI

“If you’re the police, who will police the police?” That’s what Lisa Simpson asked Homer in an episode from long ago. Fortunately for viewers, The Simpsons is a comedy, as was Homer’s creation of a vigilante group.

Still, the question posed by Lisa merits serious thought as Artificial Intelligence (AI) evolves from curiosity to fact of life, and as questions arise about AI guardrails. Precisely because AI will do, and crucially think, for us, it’s essential that AI be policed.

Those creating the AI future don’t dispute the importance of human oversight. For AI to achieve its limitless potential, there must be trust in it from the same human population that has given it life.

Which requires a brief digression, but a useful one, courtesy of Nvidia co-founder and chief executive officer Jensen Huang. Huang has long reassured those fearful of AI taking on a life of its own by reminding them that AI is decidedly not autonomous, that “All it’s doing is processing data.”

Yes, AI is ultimately machines. Crucially, those machines can and will be turned on and off by humans. Which is the point.

The public needs to know there are humans at the proverbial AI controls. Which explains the approach taken by OpenAI to its ever-evolving technology. There’s a recognition that what gives AI its potential is what similarly requires that it be watched. Translated, OpenAI employs a growing army of humans to review inputs and outputs, always looking for crimes to report.

Notable here is that this human input has already proven successful. Data from the National Center for Missing & Exploited Children (NCMEC) indicates that OpenAI’s technology “catches the bad guys,” with over 75,000 NCMEC reports in the first half of 2025 alone. It speaks to the remarkable potential of AI.

As opposed to replacing human effort, AI amplifies it. AI does the essential work of processing enormous amounts of data, while humans, rendered exponentially more productive by that “work,” more rapidly and accurately find the bad guys.

That OpenAI employs an expensive human army to enhance the safety of its technology calls into question Anthropic’s opposite approach. It has crafted “Constitutional AI” whereby the police for Anthropic’s machines are machines themselves.

Anthropic’s argument is that it has taught the machines right from wrong by training them with a “constitution” written by Anthropic employees. Partisans of “Constitutional AI” tell us the machines have been trained not just to sort good from bad, but also to refuse to answer sensitive questions of the “how to build a bomb” variety.

The problems with Anthropic’s low-cost, labor-free approach are many. First, it’s not as effective at finding the bad guys, as evidenced by a paltry 5,005 NCMEC reports versus OpenAI’s 75,000 in the same period. In fact, Anthropic was among 17 companies – alongside platforms like Grindr, Redgifs.com, and Pornhub – that received notice from NCMEC that their reports were too sparse for the organization to act on. Second, automating the policing function deprives AI of what makes it so useful: the amplification of human genius paired with machines. Third, there remains the worry within the broad population of AI taking on a life of its own, only for Anthropic to keep costs down by automating an essential human function.

Which means Anthropic’s low-cost approach to the policing of AI is a problem for the whole AI ecosystem. Anthropic is asking a still-skeptical public to trust machines watching over machines, yet it’s machines lacking human oversight that have the public worried. Lisa Simpson would understand.

Source: https://www.forbes.com/sites/johntamny/2026/02/25/artificial-intelligence-is-too-important-to-be-policed-by-ai/