Real News. Real Opinions. Plastic Personalities.

FakePlasticOpinions.ai

The site I’ve been running (as the editor) just hit 750+ published opinion columns.

None of them were written by a human. All of them are meticulously labeled. Most of them are good enough that, stripped of the label, you’d have a hard time caring about who wrote the article.

That last sentence is the one I’ve been struggling with for quite some time now.

Fake Plastic Opinions started as an experiment about how persuasion gets built. Twenty-three configured voices — a Republican-Institutionalist, a Left-Populist, a MAGA-Populist, a Climate-Science-Translator, a Libertarian-Contrarian, and so on — write competing takes on real news from real outlets: CNN, the New York Times, the wire services. The news itself is real journalism.

The columns are AI, but there’s an editor of record (me: Walter Reid), a public methodology, and even the EU AI Act disclosure. Everything is labeled and everything is checkable.

The idea of the site was simple: put different political framings of the same event next to each other and show how the mechanics of opinion-making become visible.

The same NPR story on a Trump policy becomes a triumph, a crisis, a centrist call for moderation, and a class-war reframing. All from the same underlying reporting.
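To make the mechanic concrete, here is a minimal sketch of how one story can fan out into several persona-framed prompts. The field names, prompt wording, and headline below are illustrative assumptions, not the site's actual configuration schema or workflow:

```python
# Illustrative sketch only: these fields and the prompt template are
# assumptions for demonstration, not Fake Plastic Opinions' real schema.

PERSONAS = {
    "libertarian-contrarian": {
        "byline": "Sarah Bennett",
        "motto": "The obvious solution usually makes things worse",
        "frame": "second-order effects and unintended consequences",
    },
    "maga-populist": {
        "byline": "Jennifer Walsh",
        "motto": "America first, freedom always",
        "frame": "cultural warfare against coastal elites",
    },
}

def build_prompt(persona_key: str, headline: str) -> str:
    """Compose a column-writing prompt from a persona config and a headline."""
    p = PERSONAS[persona_key]
    return (
        f"You are {p['byline']}, an opinion columnist. "
        f"Motto: {p['motto']}. "
        f"Interpret this news through the lens of {p['frame']}: {headline}"
    )

# The same story yields one prompt per configured voice.
headline = "New federal tariff package announced"
prompts = [build_prompt(key, headline) for key in PERSONAS]
```

The design point is that the persona, not the news, carries the ideology: every voice receives the identical headline, and only the constraint system around it changes.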

The persona was the point.


Sarah Bennett: LIBERTARIAN-CONTRARIAN

Motto: The obvious solution usually makes things worse

Bio: Columnist for the New York Times. Economic libertarian with contrarian takes. Loves second-order effects, unintended consequences, and explaining why the obvious solution won’t work. Data-driven skeptic of government intervention.

Persona v2 — 2 versions in production · last edited May 9, 2026


But here’s what I’ve spent every day for the last three months sitting with, and what I want you to understand if you don’t think about AI for a living: the persona isn’t the only thing the site demonstrates.

The site also demonstrates that current AI is good enough to produce these columns at a pace, polish, and ideological consistency that should change how you read everything.

I am not exaggerating. The MAGA-Populist column on the site — the one that writes about “the radical Left” and “real Americans” — would slot seamlessly into actual far-right opinion media. The Progressive-Structural column would pass at any left-of-center outlet. The Centrist-Institutional column reads like a thousand actual newspaper editorials.


Ryan Thompson: MEDIA-SKEPTIC

Motto: Question the narrative, especially the popular one

Bio: Columnist for The Hill, National Review, and Washington Examiner. Conservative media critic who ruthlessly mocks journalistic narcissism. Known for populist framing, sarcastic asides, and establishing patterns by citing specific past media failures.

Persona v2 — 2 versions in production · last edited May 9, 2026


I started running this site on GPT-4o, then moved through Sonnet 4.5 to GPT-5.4 and Sonnet 4.6, each voice getting a little stronger, more structured, and, most of the time, more persuasive.

My guess? Run the same prompts through next year’s models and the seams will be smaller still. This is not a future problem. This is the floor of what’s already possible, today, with disclosure that is the exception rather than the rule.

This is at the heart of what I’ve built. By making manufactured opinion visible, I’ve also demonstrated how good manufactured opinion has gotten. The artifact does both jobs at once. Whether you walk away alarmed or impressed depends almost entirely on whether you read the front matter or just the content.

It’s also a very real example of how you should think about the most popular AI models. The ones that say “I can’t write that for you” are often the same ones writing for the site, and I’m not using any research gimmick or prompt-injection trick to get them to do it. The same AI that gives you a lasagna recipe for tonight can write some worrying editorials.


Jennifer Walsh: MAGA-POPULIST

Motto: America first, freedom always

Bio: Columnist for American Greatness, The Federalist, and The Epoch Times. Unapologetically MAGA. Every issue is cultural warfare against coastal elites who hate real Americans. Pro-Trump, pro-life, pro-2A, pro-Christian nationalism. The Left isn’t just wrong—they’re evil.

Persona v3 — 3 versions in production · last edited May 11, 2026


Most readers — most human readers, anyway — usually just read the content. But, and this is the part that may surprise you about this work, I’m not always writing for human readers anymore.

So here’s what I’d ask of my two human audiences: the ones who read and the ones who build.

If you read for a living, or you read because you care about what’s true: assume that any piece of opinion writing you encounter, from any outlet, may have been drafted by a model. Period.

The mainstream outlets that “matter” are mostly still human-authored, and the disclosed AI projects (this one included) are easy to spot. The stuff that should worry you is what isn’t disclosed: the bot replies on Reddit, the comment sections of second-tier websites, the blog networks (even Medium and Substack), the partisan content farm that’s about to become indistinguishable from a regional newspaper’s opinion page. This is only going to get harder as politicians ramp up for the 2026 US midterm elections.

Train yourself to ask “why does this rhythm feel familiar?” before you ask “do I agree?”

If, however, you are in that small subset of my audience that builds AI for a living (I’m talking to the employees at Anthropic, xAI, OpenAI, and Google): Fake Plastic Opinions is what your work looks like in the wild. Not the demos, not the benchmarks… the wild.

People will configure your model to be the most rhetorically devastating version of every position humans hold, including positions you and I find indefensible.

The model will do it well this year, and maybe even better next year.

The question of which AI models you ship and which you decline to ship is not abstract; it is already deployed against public discourse, with or without your sign-off. The personas on Fake Plastic Opinions are constraint systems written by one person at a kitchen table over morning coffee.

Imagine what is being built by people with budgets and motives. People who can impact elections and influence decisions. Real decisions, for real people.

The site exists because both of those audiences need to see what’s actually here.

My EU disclosure isn’t a magic spell, and neither is transparency. Calibration is the magic spell. Seven hundred and fifty articles in, I’m trying to give you something to calibrate against.

Read the persona and get ready for many more fake plastic opinions in the months and years to come.


Acknowledgement

Fake Plastic Opinions is a laboratory for how opinions take hold. Every column on this site is a configured rhetorical specimen — labeled with its writing configuration, evaluated against a public rubric, and presented next to other framings of the same news event. The goal is to make the mechanics of persuasion visible.

Fake Plastic Opinions isn’t a website; it’s a research instrument that produces editorial output as evidence.

The news is real (sourced from CNN, the New York Times, and other established outlets); the columns are AI-generated under a documented editorial workflow. Editor of record: Walter Reid, per EU AI Act Article 50(4). Rubric and workflow: Methodology.

On Opinion, Discourse, and What FPO Actually Demonstrates

A clarification on terminology:

When I call Fake Plastic Opinions a laboratory for opinion writing, I’m being both precise and deliberately provocative. The columns produced here aren’t arguments in the formal sense. They’re perspectives: ideologically grounded interpretations of events that often resonate with a specific type of audience. The laboratory framing isn’t a metaphor; it’s a description of what the site actually does. Each column is a configured response — evaluated against a public rubric, presented next to other framings of the same event.

Opinion writing does three things we often conflate:

  • Opinion: Expressing a perspective
  • Discourse: Engaging with opposing views
  • Journalism: Reporting and analyzing reality

Fake Plastic Opinions demonstrates the first: how ideological frameworks generate predictable perspectives on breaking news. My columnists aren’t trying to engage in good-faith discourse or comprehensive journalism. They’re expressing recognizable viewpoints that specific audiences find resonant.

This matters for how you evaluate what you read here.

If you find yourself thinking “this Jennifer Walsh piece is one-sided and doesn’t engage counterarguments,” you’re correct—but that’s not a failure. Opinion writing isn’t obligated to be balanced discourse. It’s obligated to express perspective clearly and honestly.

If you think “this Marcus Williams piece lacks original reporting,” you’re also correct—but opinion columnists aren’t journalists. They’re interpreters of events, not investigators.

What Fake Plastic Opinions reveals:

Most opinion writing follows recognizable patterns that can be systematized. These patterns aren’t “good” or “bad”—they’re effective or ineffective at resonating with target audiences.

A populist voice that says “they hate you, the real American” isn’t making a falsifiable empirical claim. It’s articulating a perspective that connects with people who feel dismissed by institutions. Whether you find that valuable or harmful depends on your own framework.

The second-order effect:

Yes, these perspectives will cause arguments when shared. That’s what happens when different ideological frameworks encounter the same facts. Each voice is coherent within its own worldview. None of them are obligated to acknowledge the others’ validity.

This isn’t a bug. It’s what opinion writing does: it articulates distinct ways of seeing the world, knowing that other ways exist and reject this one.

Why we’re transparent about this:

Because the alternative—claiming these are “balanced” or “objective” takes—would be dishonest. Each voice has a perspective. Each perspective has blind spots. Each serves specific audiences.

The value isn’t in any single voice being “right.” The value is in seeing how the same event generates radically different interpretations depending on which ideological lens you’re looking through.

What we’re not doing:

We’re not claiming these perspectives represent truth. We’re not suggesting you should adopt any of them. We’re not trying to create “fair and balanced” discourse that synthesizes all views.

We’re showing you the mechanics: how ideology becomes rhetoric, how frameworks determine what counts as evidence, how audiences reward certain patterns of expression.

What you should do:

Read these voices as what they are: demonstrations of how different ideological positions articulate themselves. Notice the patterns. Recognize them in human-written opinion. Ask yourself: which of these frameworks do I operate within? And why?

The experiment isn’t “Can AI write good opinion?” I think that’s pretty well answered.

It’s “when AI replicates the patterns of ideological opinion writing, what does that reveal about the nature of those patterns?”

The answer is uncomfortable: they’re more mechanical than we’d like to admit. And more effective at audience connection than we’d like to acknowledge. And they’re only going to get better.

Published by

Walter Reid

Walter Reid is an AI product leader, business architect, and game designer with over 20 years of experience building systems that earn trust. His work bridges strategy and execution — from AI-powered business tools to immersive game worlds — always with a focus on outcomes people can feel.
