AI Demand-Shaping And The Frictionless Rub Of Solipsistic Efficiency

In 1897, painter Frederic Remington wired New York Journal publisher William Randolph Hearst from Cuba with bad news. There was nothing to see, no war to illustrate. Hearst’s infamous reply: “You furnish the pictures, and I’ll furnish the war.” The apocryphal anecdote endures as a cautionary tale of media’s power to shape reality to its owners’ interests.

Broadly speaking, historians agree that the sensationalist reporting of Spanish atrocities in Cuba and the mysterious sinking of the USS Maine, which typified the Yellow Journalism era, contributed to the U.S. decision to enter the Spanish-American War in 1898. Hearst and other publishers, like Joseph Pulitzer, saw circulation spikes from their vivid, lurid, and constant coverage, facilitated by new technologies that brought battlefield color to readers at telegraphic speed. Narrative precedes truth. Sensation succeeds substance.

Today, emerging feedback loops echo Hearst’s telegram, with campaigns to shape consumer demand through prescriptive analytics and generative AI. These are mostly tolerated when used for dynamic ticket pricing and for brand lore development, less so in propaganda campaigns. But what if the demand were being created before there was a product?

In late May the Chicago Sun-Times and the Philadelphia Inquirer published “Heat Index,” a “best of summer” guide insert with, among other fun tips, book recommendations. Those recommendations included reviews and plot summaries, as might be expected from such a feature. The problem, which readers discovered when they sought out their beach reading, was that some of the books did not exist.

Chicago freelancer Marco Buscaglia admitted to using AI to create the “Heat Index” book reviews and to not checking them for hallucinations. He was working for King Features Syndicate, a unit of Hearst—yes, that Hearst—which apparently did not fact-check the recommendations. Neither did the newspapers that published them.

Both the Sun-Times and Inquirer have since retracted and apologized for the list, but not before it had circulated widely, generating interest, clicks, and even demand for books that no one had written. Until they did. Since the “Heat Index” publication, dozens of versions of its fake books have been published and sold through Amazon.

The sloppy journalism portends a cost-effective, less-human creative process in the not-too-distant future, one that speaks to dystopian fears around AI. Here’s a modification to the “Heat Index” story (*only steps 1 and 5 were added):

  1. Prescriptive analytics are used to target audiences through LLM social media initiatives creating buzz around a topic, theme, story, “author.”*
  2. AI writes “authentic-sounding” book reviews and publishes them through news outlets so starved for content and/or too short on editorial staff to verify.
  3. Readers, trusting the platforms, seek out the content.
  4. Depending on interest and engagement, AI is used to rapidly write and publish the content to meet demand while promoting the content and “creators” through social channels.
  5. The cycle repeats, adapting to what was learned in previous campaigns.*
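
The closed loop above can be sketched as a toy simulation. To be clear, this is purely illustrative: every name, number, and threshold here is hypothetical, not a description of any real system.

```python
import random

def run_campaign(cycles=3, seed=42):
    """Toy model of the five-step demand loop: buzz seeds reviews,
    reviews drive reader interest, interest triggers publication,
    and the result feeds back into the next round of buzz.
    All values are hypothetical illustrations."""
    rng = random.Random(seed)
    buzz = 1.0                 # step 1: analytics-driven social buzz
    published_titles = 0
    for _ in range(cycles):
        reviews = int(buzz * 10)                    # step 2: AI-written reviews seeded to outlets
        interest = reviews * rng.uniform(0.5, 1.5)  # step 3: readers seek out the content
        if interest > 5:                            # step 4: enough demand -> publish the "book"
            published_titles += 1
        buzz *= 1.0 + interest / 100                # step 5: the loop adapts and amplifies
    return published_titles, round(buzz, 2)
```

The point of the sketch is the feedback shape: nothing outside the loop (a real author, a real event) ever enters it, yet each pass leaves the system with more "demand" than it started with.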

I call this solipsistic efficiency, a media logic where content generates its own demand, based on individualized tastes, in a closed loop, detached from real authors, real experiences, and external verification. It’s not about deception in the traditional sense. It’s about removing the inefficiencies of reality to create a perpetual, self-driving consumer experience in which authenticity exists only as a marketing metric.

The Real Real Thing?

In such a media ecosystem, uncertainty about what’s real becomes a valuable hook. Remember James Frey’s A Million Little Pieces? His “memoir” sold better after being exposed as fabricated. The author admitted as much in a Vanity Fair interview discussing his new book, which (spoiler) he crafted in part with AI.

The thrill of maybe-it’s-real, maybe-it’s-not becomes a form of marketable mystique. We see this across contemporary culture: One of this season’s hottest Apple TV+ shows, The Studio, brings viewers inside an uncanny Hollywood featuring actual A-listers playing caricatures of themselves. The show captured 23 Emmy nominations (including five of the six for Guest Actor in a Comedy Series) as well as public fascination for an industry of smoke, mirrors, and greed. The fascination is driven by the tantalizing question: Is that what Hollywood is really like? How real were Martin Scorsese and Ron Howard’s performances?

In music, French streaming platform Deezer estimates that AI-generated music accounts for 18% of all uploads. This July, The Guardian ran the headline “An AI-generated band got 1m plays on Spotify.” Where is the line between artistic merit and metadata optimization? It’s gotten to the point where The Atlantic’s Ian Bogost recently wrote a screed titled “Nobody Cares If Music Is Real Anymore.” Everyone I’ve asked says they still do care, provided they know. But if you don’t know it’s fake, then does it actually matter?

For several months the Australian Radio Network featured (without at first telling listeners) an AI DJ named “Thy” (pronounced “Tee”) across several of its stations. NBC Sports recently unveiled an AI-voiced narrator for NBA games, modeled after Jim Fagan, the late Hall of Fame voice nostalgically familiar to anyone who watched games in the 1990s. Audacy sports talk radio host James Seltzer (WIP 94.1 FM) characterized the trend during a recent on-air broadcast as professionally problematic, while acknowledging such tech will be difficult to prevent.

AI-driven demand and the opacification of reality are dominating media narratives and perplexing media scholars. This summer, Maggie Harrison Dupré of Futurism reported that “USA TODAY is publishing automated sports stories that serve as SEO-targeted vehicles for sports gambling ads, toeing ethical lines and blurring the boundaries between sports journalism and the rapidly growing sports betting industry.” The demand to bet on a game may be shaped by AI-generated coverage of it, and with a platform like ESPN earning from both its news content and its sportsbook (ESPN BET), such lines are at best questionable.

We all know it’s here and rapidly advancing, but even experts are perplexed about what to do. “We can’t fight it, and we’d be crazy to try to,” Rowan University Journalism Professor Carl Hausman told me. He stresses the importance of media literacy in education, “so we don’t end up hallucinating ourselves to death.”

Last week I attended the 108th annual conference of the Association for Education in Journalism and Mass Communication (AEJMC), a gathering of national and international media scholars and practitioners. “AI” was the talk of our four-day San Francisco conference, appearing at least 273 times in the 246-page program. Research presentations, panels, working groups, and late-night discussions unpacked a range of AI fears and fantasies, with many practical and bounded conversations about the future of journalism and how to use AI and generative engine optimization (GEO) to improve curricula.

Chasing Efficiency or Losing Humanity?

Even its most ardent detractors admit artificial intelligence is increasing efficiency and saving money. From decisions about crop management to gene therapy to real-time translation for financial news wires, AI is “paradigm-shifting technology akin to the internet,” according to Ari Moskowitz, Content Marketing Director at Conviva, a leading platform for real-time performance analytics of apps, streaming platforms, and AI agents.

Since the wheel, and probably before it, our human drive to get more with less effort is why we create technology. But for some, the consequences of AI signal a systemic, dehumanizing transformation, in which generative systems mimic not just content but the entire ecosystem: creator, reviewer, performer, promoter, consumer. Cultural artifacts now exist either because someone made them or because algorithms detected a space where they should exist and then filled it.

Even the once-stalwart security of a computer science degree is reportedly under assault, with AI replacing entry-level coders. Such efficiency may be a goal of OpenAI and ventures like Meta’s Superintelligence Labs. CEO Mark Zuckerberg is reportedly paying record salaries to poach engineers who will “fast-track work on machines that could outthink humans on many tasks.” In a world where $100 million AI engineers are prompting the future of our social, cultural, and professional experience, the trajectory appears to be solipsistic efficiency: a frictionless, perpetual system tailored to create and satisfy our individual wants before we know we have them.

Even a $100,000-per-year engineer, who may now be out of work, would tell you that such a concept violates thermodynamics: energy cannot be created from nothing (the First Law), and closed systems trend toward entropy (the Second). But solipsistic efficiency simulates energy through perception. You feel like something was created. Time was spent. Meaning was produced. But in reality, the loop simply confected noise, based on prescriptive analytics, into a temporarily convincing form shaped by strategic, synthetic engagement protocols.

The extent and time horizon of such an existential shift will depend not only on AI’s advancement but on human choices about what we value and who we are. The latter variable is confounded as AI-generated content saturates the mediascape and feeds back into what we consume. According to science fiction author Storm Humbert, “AI was engineered to solve a problem: shifting creativity to wealth while shifting wealth away from creators.” There’s a kind of cultural entropy inversion at work: the more content we generate through closed AI loops, the less value it contains. Can this process increase understanding, connection, or originality, or is it just more frictionless production, more viral polish?

On one hand, authenticity has never been more prized. Precisely because of that, the suspicion of inauthenticity becomes part of the draw, like a world reoriented to the ontology of professional wrestling. Is that book real? Is that DJ human? Did AOC really say that? When every artifact can be faked, doubt itself becomes a form of engagement. We click to solve a mystery that grows harder to solve with every click.

To be clear, this isn’t a Luddite argument against technology. AI has real potential in augmenting creativity, accessibility, and speed. I’ve used it to help organize, shape, caption, and optimize this article (per Forbes guidelines). But the feedback loops it can create—especially when paired with platform incentives and weak editorial oversight—risk replacing meaning with momentum.

So what’s the solution?

We need friction and provenance, to put it bluntly. We need platforms and publishers to invest in verifiability and authorial transparency, and to reward editorial standards as well as GEO and SEO. We need algorithms that foster human connection, not just predictive profitability. And we need cultural gatekeepers—critics, educators, and institutions—to ask not just is it engaging, but is it real? And why does it matter if it’s not?

Friction and inefficiency are, in some ways, what make us human. Instant transportation between destinations means sacrificing the journey. What would The Canterbury Tales be if the pilgrimage from the Tabard Inn to the Shrine of Thomas Becket were instantaneous? Solipsistic efficiency doesn’t directly violate physics. But it violates our ability to know what’s real, what’s worth preserving, and what, if anything, actually happened. Once the guardrails of reality are gone, the laws of physics no longer exist, at least not to our perception.

The Hearst telegram was about manufacturing war. These new loops manufacture demand, legitimacy, and cultural weight—not because of what the content says, but because of how it was engineered a priori. You furnish the engagement; I’ll furnish the reality.

Source: https://www.forbes.com/sites/emilsteiner/2025/08/14/ai-demand-shaping-and-the-frictionless-rub-of-solipsistic-efficiency/