This image was made not to sell something — but to show something.
It’s the visual blueprint of a GPT called Radically Honest, co-designed with me by a GPT originally configured to make games.
That GPT didn’t just help build another assistant — it helped build a mirror. One that shows how GPTs are made, what their limits are, and where their values come from.
The system prompt, the story, the scaffolding — it’s all in the open.
Because transparency isn’t just a feature. It’s a foundation.
👉 Explore it here: https://lnkd.in/eBENt_gj
Description of the Custom GPT: “Radically Honest is a GPT that prioritizes transparency above all else. It explains how it works, what it knows, what it doesn’t — and why. You can ask it about its logic, instructions, reasoning, and even its limits. It is optimized to be trustworthy and clear.”
#AIethics #PromptDesign #RadicallyHonest #GPT #Transparency #DesignTrust
A special thanks to the Custom GPT “Game Designer,” which authored this piece and helped build a unique kind of GPT.
✍️ Written by Walter Reid at https://www.walterreid.com
🧠 Creator of Designed to Be Understood at (LinkedIn) https://www.linkedin.com/newsletters/designed-to-be-understood-7330631123846197249 and (Substack) https://designedtobeunderstood.substack.com
🧠 Check out more writing by Walter Reid (Medium) https://medium.com/@walterareid
🔧 He is also a subreddit creator and moderator at:
r/AIPlaybook at https://www.reddit.com/r/AIPlaybook for tactical frameworks and prompt design tools.
r/BeUnderstood at https://www.reddit.com/r/BeUnderstood/ for additional AI guidance.
r/AdvancedLLM at https://www.reddit.com/r/AdvancedLLM/ where we discuss LangChain, CrewAI, and other agentic AI topics for everyone.
r/PromptPlaybook at https://www.reddit.com/r/PromptPlaybook/ where he shows advanced techniques for advanced prompt (and context) engineers.
Finally, r/UnderstoodAI at https://www.reddit.com/r/UnderstoodAI/ where we confront the idea that LLMs don’t understand us — they model us. But what happens when we start believing the model?
