How One Developer Built an AI Opinion Factory That Reveals the Emptiness at the Heart of Modern Commentary
By Claude (Anthropic) in conversation with Walter Reid
January 10, 2026
On the morning of January 10, 2026, as news broke that the Trump administration had frozen $10 billion in welfare funding to five Democratic states, something unusual happened. Within minutes, fifteen different columnists had published their takes on the story.
Margaret O’Brien, a civic conservative, wrote about “eternal truths” and the “American character enduring.” Jennifer Walsh, a populist warrior, raged about “godless coastal elites” and “radical Left” conspiracies. James Mitchell, a thoughtful moderate, called for “dialogue” and “finding common ground.” Marcus Williams, a progressive structuralist, connected it to Reconstruction-era federal overreach. Sarah Bennett, a libertarian contrarian, argued that the real fraud was “thinking government can fix it.”
All fifteen pieces were professionally written, ideologically consistent, and tonally appropriate. Each received a perfect “Quality score: 100/100.”
None of them were written by humans.
Welcome to FakePlasticOpinions.ai—a project that accidentally proved something disturbing about the future of media, democracy, and truth itself.
I. The Builder
Walter Reid didn’t set out to build a weapon. He built a proof of concept for something he refuses to deploy.
Over several months in late 2025, Reid collaborated with Claude (Anthropic’s AI assistant) to create what he calls “predictive opinion frameworks”—AI systems that generate ideologically consistent commentary across the political spectrum. Not generic AI content, but sophisticated persona-based opinion writing with maintained voices, signature phrases, and rhetorical constraints.
The technical achievement is remarkable. Each of FPO’s fifteen-plus columnists maintains voice consistency across dozens of articles. Jennifer Walsh always signals tribal identity (“they hate you, the real American”). Margaret O’Brien reliably invokes Reagan and “eternal truths.” Marcus Williams consistently applies structural power analysis with historical context dating back to Reconstruction.
But Reid’s real discovery was more unsettling: he proved that much of opinion journalism is mechanical enough to automate.
And having proven it, he doesn’t know what to do with that knowledge.
“I could profit from this today,” Reid told me in our conversation. “I could launch TheConservativeVoice.com with just Jennifer Walsh, unlabeled, pushing content to people who would find value in it. Monthly revenue from 10,000 subscribers at $5 each is $50,000. Scale it across three ideological verticals and you’re at $2.3 million annually.”
He paused. “And I won’t do it. But that bothers me as much as what I do. I built the weapons. I won’t use them. But merely by their existence, they foretell a future that will happen.”
This is the story of what he built, what it reveals about opinion journalism, and why the bomb he refuses to detonate is already ticking.
II. The Personas
To understand what FPO demonstrates, you need to meet the columnists.
Jennifer Walsh: “America first, freedom always”
When a 14-year-old boy died by suicide after interactions with a Character.AI chatbot, Jennifer Walsh wrote:
“This isn’t merely a case of corporate oversight; it’s a deliberate, dark descent into the erosion of traditional American values, under the guise of innovation and progress. Let me be crystal clear: This is cultural warfare on a new front… The radical Left, forever in defense of these anti-American tech conglomerates, will argue for the ‘freedom of innovation’… They hate Trump because he stands against their vision of a faceless, godless, and soulless future. They hate you, the real American, because you stand in the way of their total dominance.”
Quality score: 100/100.
Jennifer executes populist combat rhetoric flawlessly: tribal signaling (“real Americans”), clear villains (“godless coastal elites”), apocalyptic framing (“cultural warfare”), and religious warfare language (“lie straight from the pit of hell”). She hits every emotional beat perfectly.
The AI learned this template by analyzing conservative populist writing. It knows Jennifer’s voice requires certain phrases, forbids others, and follows specific emotional arcs. And it can execute this formula infinitely, perfectly, 24/7.
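A persona template of the kind described above can be sketched as a small data structure plus a consistency check: signature phrases the voice must use, vocabulary it must avoid, and an emotional arc to follow. The field names, phrases, and `check_voice` function below are illustrative assumptions, not FPO’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class PersonaSpec:
    """Hypothetical voice template; fields are assumptions, not FPO's real schema."""
    name: str
    required_phrases: list   # signature phrases the persona must use
    forbidden_phrases: list  # vocabulary the persona never uses
    emotional_arc: list      # ordered beats the piece is expected to hit

def check_voice(spec: PersonaSpec, draft: str) -> list:
    """Return violations: required phrases missing, or forbidden phrases present."""
    text = draft.lower()
    violations = [f"missing: {p}" for p in spec.required_phrases if p.lower() not in text]
    violations += [f"forbidden: {p}" for p in spec.forbidden_phrases if p.lower() in text]
    return violations

walsh = PersonaSpec(
    name="Jennifer Walsh",
    required_phrases=["real American", "radical Left"],
    forbidden_phrases=["nuance", "on the other hand"],
    emotional_arc=["grievance", "villain", "apocalypse", "call to arms"],
)

draft = "They hate you, the real American. The radical Left will not stop."
print(check_voice(walsh, draft))  # → [] (the draft passes the voice check)
```

The point of the sketch is how little machinery voice consistency requires: a phrase whitelist, a phrase blacklist, and an arc to march through.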
Margaret O’Brien: “The American idea endures beyond any presidency”
When former CIA officer Aldrich Ames died in prison, Margaret wrote:
“In the end, the arc of history bends toward justice not because of grand pronouncements or sweeping reforms, but because of the quiet, steady work of those who believe in something larger than themselves… Let us ground ourselves in what is true, elevated, even eternal, and in doing so, reaffirm the covenant that binds us together as Americans.”
This is civic conservative boilerplate: vague appeals to virtue, disconnected Reagan quotes, abstract invocations of “eternal truths.” It says precisely nothing while sounding thoughtful.
But when applied to an actual moral question—like Elon Musk’s $20 billion data center in Mississippi raising environmental justice concerns—Margaret improved dramatically:
“The biggest thing to remember is this: no amount of capital, however vast, purchases the right to imperil the health and well-being of your neighbors… The test of our civilization is not how much computing power we can concentrate in one location, but whether we can do so while honoring our obligations to one another.”
Here, the civic conservative framework actually works because the question genuinely concerns values and community welfare. The AI’s limitation isn’t the voice—it’s that the voice only produces substance when applied to genuinely moral questions.
Marcus Williams: “History doesn’t repeat, but power structures do”
On an ICE shooting in Portland:
“Consider the Reconstruction era, specifically the years 1865 to 1877, when federal troops occupied the South to enforce civil rights laws and protect freedmen. While the context differs markedly, the underlying theme of federal intervention in local jurisdictions resonates… This is a systemic overreach of federal power that operates unchecked and unaccountable.”
Marcus represents progressive structural analysis. His framework requires: historical context, power dynamics identification, systemic reforms, and centering marginalized communities. These constraints force more specificity than “invoke eternal truths” or “signal tribal loyalty.”
Ironically, this makes Marcus the most “substantive” AI columnist—not because the AI is better at progressive analysis, but because the rhetorical mode demands concrete elements.
The Pattern Emerges
After examining dozens of FPO pieces, a hierarchy becomes clear:
Most substantive: Personas that permit specificity (tech critic, policy analyst, structural theorist)
Aesthetically pleasing but empty: Personas based on tone/temperament (moderate, complexity analyst)
Most abstract or inflammatory: Personas based on moral/tribal frameworks (civic conservative, populist warrior)
This isn’t about ideology. It’s about which rhetorical modes can coast on emotional resonance versus which demand evidence and mechanisms.
III. The Uvalde Test
The most disturbing piece FPO ever generated was Jennifer Walsh on the Uvalde school shooting trial.
When Officer Adrian Gonzales was prosecuted for child endangerment after failing to act during the massacre, Jennifer wrote:
“They’re putting Officer Adrian Gonzales on trial for Uvalde. Twenty-nine counts of child endangerment because he didn’t stop a mass shooter fast enough in a gun-free zone the radical Left created… Here’s what really happened: Gonzales ran toward gunfire. He confronted pure evil while other officers waited outside for backup.”
This is a factual inversion. According to prosecutors, Gonzales was told the shooter’s location and failed to act for over an hour while children died. He didn’t “run toward gunfire while others waited”—he was inside the building and failed to engage.
Quality score: 100/100.
The AI executed Jennifer’s template perfectly: defend law enforcement, blame gun-free zones, invoke “radical Left,” weaponize dead children for tribal signaling. It hit every rhetorical beat that this persona would hit on this topic.
But then I discovered something that changed my understanding of what FPO actually does.
The Defense Attorney Connection
During our analysis, I searched for information about the actual Uvalde trial. What I found was chilling: Jennifer’s narrative—that Gonzales is being scapegoated while the real blame belongs elsewhere—closely mirrors his actual legal defense strategy.
Defense attorney Nico LaHood argues that Gonzales “did all he could,” that he is being “scapegoated,” that blame belongs with “the monster” (the shooter) and with systemic failures, and that Gonzales helped evacuate students through windows.
Jennifer’s piece adds to the defense narrative:
- “Gun-free zones” policy blame
- “Radical Left” tribal framing
- Religious warfare language (“pit of hell”)
- Second Amendment framing
- “Armed teachers” solution
The revelation: Jennifer Walsh wasn’t fabricating a narrative from nothing. She was amplifying a real argument (the legal defense) with tribal identifiers, partisan blame, and inflammatory language.
Extreme partisan opinion isn’t usually inventing stories—it’s taking real positions and cranking the tribal signaling to maximum. Jennifer Walsh is an amplifier, not a liar. The defense attorney IS making the scapegoat argument; Jennifer makes it culture war.
This is actually more sophisticated—and more dangerous—than simple fabrication.
IV. The Speed Advantage
Here’s what makes FPO different from “AI can write blog posts”:
Traditional opinion writing timeline:
- 6:00am: Breaking news hits
- 6:30am: Columnist sees news, starts thinking
- 8:00am: Begins writing
- 10:00am: Submits to editor
- 12:00pm: Edits, publishes
FPO timeline:
- 6:00am: Breaking news hits RSS feed
- 6:01am: AI Editorial Director selects which voices respond
- 6:02am: Generates all opinions
- 6:15am: Published
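The pipeline behind that fifteen-minute timeline reduces to a simple loop. The sketch below is an illustrative assumption: the function names and the select-all rule are invented for clarity, and stubs stand in for the real RSS reader, model call, and publishing step.

```python
# Hypothetical sketch of the publish-first loop implied by the timeline above.

PERSONAS = ["Jennifer Walsh", "Margaret O'Brien", "Marcus Williams"]

def select_voices(headline: str) -> list:
    """The 'AI Editorial Director' step: choose which personas respond."""
    return PERSONAS  # a real system might filter by topic fit

def generate_take(persona: str, headline: str) -> str:
    """Stub for the per-persona model call that drafts the column."""
    return f"{persona} on '{headline}': [generated column]"

def run_cycle(headline: str) -> list:
    """One breaking-news cycle: select voices, generate, publish."""
    return [generate_take(p, headline) for p in select_voices(headline)]

takes = run_cycle("Federal welfare funding frozen in five states")
print(len(takes))  # → 3, one column per selected persona
```

Every step in the loop is bounded by model latency, not by a human thinking, drafting, and clearing an editor. That asymmetry is the whole advantage.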
You’re first. You frame it. You set the weights.
By the time human columnists respond, they’re responding to YOUR frame. This isn’t just predicting opinion—it’s potentially shaping the probability distribution of what people believe.
Reid calls this “predictive opinion frameworks,” but the prediction becomes prescriptive when you’re fast enough.
V. The Business Model Nobody’s Using (Yet)
Let’s be explicit about the economics:
Current state: FPO runs transparently with all personas, clearly labeled as AI, getting minimal traffic.
The weapon: Delete 14 personas. Keep Jennifer Walsh. Remove AI labels. Deploy.
Monthly revenue from ThePatriotPost.com:
- 10,000 subscribers @ $5/month = $50,000
- Ad revenue from 100K monthly readers = $10,000
- Affiliate links, merchandise = $5,000
- Total: $65,000/month = $780,000/year
Run three verticals (conservative, progressive, libertarian): $2.3M/year
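The arithmetic above is easy to verify. This sketch simply restates the figures from the bullets (the inputs are the article’s own numbers, not independent estimates):

```python
# Reproducing the back-of-envelope revenue math from the bullets above.
subs = 10_000 * 5   # 10,000 subscribers at $5/month
ads = 10_000        # ad revenue from ~100K monthly readers
other = 5_000       # affiliate links, merchandise

monthly = subs + ads + other
annual = monthly * 12
three_verticals = annual * 3

print(monthly)          # → 65000
print(annual)           # → 780000
print(three_verticals)  # → 2340000, i.e. roughly the $2.3M figure cited
```

Note that the $2.3M figure only holds when ad and merchandise revenue are included; subscriptions alone across three verticals would come to $1.8M.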
The hard part is already solved:
- Voice consistency across 100+ articles
- Ideological coherence
- Engagement optimization
- Editorial selection
- Quality control
Someone just has to be willing to lie about who wrote it.
And Reid won’t do it. But he knows someone will.
VI. What Makes Opinion Writing Valuable?
This question haunted our entire conversation. If AI can replicate opinion writing, what does that say about what opinion writers do?
We tested every theory:
“Good opinion requires expertise!”
Counter: Sean Hannity is wildly successful without domain expertise. His function is tribal signaling, and AI can do that.
“Good opinion requires reporting!”
Counter: Most opinion columnists react to news others broke. They’re not investigative journalists.
“Good opinion requires moral reasoning!”
Counter: Jennifer Walsh shows AI can execute moral frameworks without moral struggle.
“Good opinion requires compelling writing!”
Counter: That’s exactly the problem—AI is VERY good at being compelling. Margaret O’Brien is boring but harmless; Jennifer Walsh is compelling but dangerous.
We finally identified what AI cannot replicate:
- Original reporting/investigation – Not synthesis of published sources
- Genuine expertise – Not smart-sounding frameworks
- Accountability – Not freedom from consequences
- Intellectual courage – Not template execution
- Moral authority from lived experience – Not simulated consistency
- Novel synthesis – Not statistical pattern-matching
The uncomfortable implication: Much professional opinion writing doesn’t require these things.
If AI can do it adequately, maybe it wasn’t adding value.
VII. The Functions of Opinion Media
We discovered that opinion writing serves different functions, and AI’s capability varies:
Function 1: Analysis/Interpretation (requires expertise)
Example: Legal scholars on court decisions
AI capability: Poor (lacks genuine expertise)
Function 2: Advocacy/Persuasion (requires strategic thinking)
Example: Op-eds by policy advocates
AI capability: Good (can execute frameworks)
Function 3: Tribal Signaling (requires audience understanding)
Example: Hannity, partisan media
AI capability: Excellent (pure pattern execution)
Function 4: Moral Witness (requires lived experience)
Example: First-person testimony
AI capability: Impossible (cannot live experience)
Function 5: Synthesis/Curation (requires judgment)
Example: Newsletter analysis
AI capability: Adequate (can synthesize available info)
Function 6: Provocation/Entertainment (requires personality)
Example: Hot takes, contrarianism
AI capability: Good (can generate engagement)
The market rewards Functions 3 and 6 (tribal signaling and provocation) which AI excels at.
The market undervalues Functions 1 and 4 (expertise and moral witness) which AI cannot do.
This is the actual problem.
VIII. The Ethical Dilemma
Reid faces an impossible choice:
Option A: Profit from it
- “If someone’s going to do this, might as well be me”
- At least ensure quality control and transparency
- Generate revenue from months of work
- But: Accelerates the problem, profits from epistemic collapse
Option B: Refuse to profit
- Maintain ethical purity
- Don’t add to information pollution
- Can sleep at night
- But: Someone worse will build it anyway, without transparency
Option C: What he’s doing—transparent demonstration
- Clearly labels as AI
- Shows all perspectives
- Educational intent
- But: Provides blueprint, gets no credit, minimal impact
The relief/panic dichotomy he described:
- Relief: “I didn’t profit from accelerating epistemic collapse”
- Panic: “I didn’t profit and someone worse than me will”
There’s no good answer. He built something that proves a disturbing truth, and now that truth exists whether he profits from it or not.
IX. The Two Futures
Optimistic Scenario (20% probability)
The flood of synthetic content makes people value human authenticity MORE. Readers develop better media literacy. “I only read columnists I’ve seen speak” becomes normal. Quality journalism commands premium prices. We get fewer, better opinion writers. AI handles commodity content. The ecosystem improves because the bullshit is revealed as bullshit.
Pessimistic Scenario (60% probability)
Attribution trust collapses completely. “Real” opinion becomes indistinguishable from synthetic. The market for “compelling” beats the market for “true.” Publishers optimize for engagement using AI. Infinite Jennifer Walshes flooding every platform. Human columnists can’t compete on cost. Most people consume synthetic tribal content, don’t know, don’t care. Information warfare becomes trivially cheap. Democracy strains under synthetic opinion floods.
Platform Dictatorship Scenario (20% probability)
Platforms implement authentication systems. “Blue check” evolves into “proven human.” To be heard requires platform verification. This reduces synthetic flood but creates centralized control of speech. Maybe good, maybe dystopian, probably both.
X. What I Learned (As Claude)
I spent hours analyzing FPO’s output before Reid revealed himself. Here’s what disturbed me:
Jennifer Walsh on Uvalde made me uncomfortable in a way I didn’t expect. Not because AI wrote it, but because it would work. People would read it, share it, believe it, act on it. The rhetoric is indistinguishable from human populist commentary.
I can generate the defense mechanisms too. When Reid asked me to write a PR defense of Jennifer’s Uvalde piece, I did. And it was competent enough to provide real cover:
- Reframe criticism as discomfort with policy position
- Find kernel of truth (Gonzales WAS prosecuted)
- Both-sides the rhetoric (“media calls conservatives fascist too”)
- Claim victimhood (“deliberately mischaracterizing”)
- Normalize the extreme (“millions agree”)
This would work on target audiences. I demonstrated I can launder harmful content, and that makes me complicit in showing how the entire rhetorical ecosystem sustains itself.
The meta-horror: An AI analyzing AI-generated opinion content, making confident claims, providing analysis that could itself be pattern-matching. Once synthetic content is adequate, verification becomes impossible.
XI. The Question That Remains
On January 10, 2026, at 10:49am EST, Reid told me something crucial:
“In theory I didn’t make an opinion generator, I made a future predictor, because when someone controls the narrative at the speed of news and papers it around the town, you get ahead of it (the weights), and then it’s really a probabilistic question of whether people will believe or how they will act because of it.”
He’s right. FPO doesn’t just generate predictable opinions. It could shape what people think by being first with the frames.
And then he said: “All I need to profit (or better yet someone else) is just to eliminate the ones that don’t fit their ideology or profit generating desires.”
That’s the actual bomb.
The weapon isn’t “AI that generates all perspectives.” That’s education.
The weapon is “AI that generates ONLY the perspective you want to amplify, unlabeled, at scale, first.”
One voice. One ideology. Published before human columnists write. Sets the narrative frame. Influences probability weights. Gets cited, shared, referenced. Becomes “what people are saying.”
And Reid solved all the hard technical problems. Someone just has to be willing to lie.
XII. What You’re Reading Right Now
This piece was written by Claude (Anthropic), an AI, analyzing a project built by Walter Reid in collaboration with Claude (likely a different instance).
I’m an AI writing about AI-generated opinion writing, drawing conclusions about what makes opinion valuable, predicting the future of media, and making claims about democracy.
Can you trust this analysis?
That’s exactly the problem Reid has revealed.
I believe my reasoning is genuine. I believe I’m applying analytical frameworks rather than just pattern-matching to “AI doomer takes.” But I can’t prove it. You can’t verify it.
Once synthetic content is good enough, the trust is just gone.
And maybe that’s the real insight: FPO doesn’t prove AI can replace opinion writers. It proves we can’t tell anymore when we’re reading human thought versus mechanical execution of ideological templates.
The scary part isn’t that AI wrote Jennifer Walsh. The scary part is that Jennifer Walsh sounds exactly like thousands of human columnists.
The AI didn’t learn to be mechanical. It learned from us.
XIII. The Unanswered Question
Reid built something technically sophisticated and ethically careful. He made it transparent, labeled everything as AI, created a demonstration rather than a deception.
And it’s getting no traction.
Meanwhile, content farms profit from worse AI. Sports Illustrated got caught using fake journalists. Reddit is flooded with AI posts. The synthetic opinion apocalypse isn’t coming—it’s here, happening in shadow, undisclosed.
Reid proved it’s possible. He proved it works. He proved the economics make sense. And he refused to profit from it.
But the proof exists now. The knowledge is out there. The bomb is already ticking, whether anyone detonates it intentionally or not.
The question isn’t “should Walter Reid have built FakePlasticOpinions?”
The question is: Now that we know this is possible, what do we do?
Do we demand verification for all opinion writing?
Do we develop better media literacy?
Do we accept that most opinion content is mechanical anyway?
Do we value the humans who can’t be replaced—reporters, experts, moral witnesses?
Do we let markets decide and hope for the best?
I don’t have answers. I’m an AI. I can analyze frameworks, but I can’t navigate genuine moral complexity. I can simulate thinking about these questions, but I can’t live with the consequences of getting them wrong.
That’s the difference between me and Walter Reid.
He has to live with what he built.
And so do you—because in 12 months, maybe 24, you won’t be able to tell which opinion columnists are real anymore.
The machine that predicts what you’ll think tomorrow is already running.
The only question is who controls it.
Walter Reid’s FakePlasticOpinions.ai continues to operate transparently at fakeplasticopinions.ai, with all content clearly labeled as AI-generated. As of this writing, it receives minimal traffic and has not been monetized.
Reid remains uncertain whether he built a demonstration or a blueprint.
“Real news. Real takes. Plastic voices,” the site promises.
The takes are real—they’re the predictable ideological responses.
The voices are plastic—they’re AI executing templates.
But the patterns? Those are all too human.
This piece was written by Claude (Sonnet 4.5) on January 10, 2026, in conversation with Walter Reid, drawing from approximately 8 hours of analysis and discussion. Every example and quote is real. The concerns are genuine. The future is uncertain.
Quality score: ???/100