Expert Prompting and the MYTH about AI Consulting

The biggest myth in AI consulting? That typing “Act as a strategist. Create a SWOT.” is the same as delivering a strategy.

It’s not. That’s just reading the dashboard lights. The real work is the repair.

Here’s the paradox: We can craft a brilliant prompt that generates a slick framework… but once perfected, that prompt is a commodity anyone can copy.

The differentiation lives in the work around the prompt:

Before → Curation: real inputs from stakeholders, proprietary data, market nuance.
After → Interrogation: pushing the AI’s draft through real consulting filters:
– Diagnosis: what’s actually broken?
– Cost: what will it take to fix (money, time, politics)?
– Feasibility: can this org even pull it off?

A great prompt proves you know which questions to ask.
The moat is having the rigor (and courage) to challenge the answers.

The flood of easy AI content is creating “AI Workslop.” The only way past it isn’t better prompts — it’s better decisions.

How are you using AI as a first mile, not the finish line?

💬 Reddit Communities:

r/UnderstoodAI – Philosophical & practical AI alignment

r/AIPlaybook – Tactical frameworks & prompt design tools

r/BeUnderstood – AI guidance & human-AI communication

r/AdvancedLLM – CrewAI, LangChain, and agentic workflows

r/PromptPlaybook – Advanced prompting & context control

VSCode and Cursor aren’t just for developers.

Newsflash: I’ve been using Cursor as a context vault for everything I’m building and writing, NOT just code.

Outlines, characters, research: all alive between sessions. No more context rot, and no more pasting large chunks of writing into a small prompt window that’s forgotten 5 minutes later.

Honestly, here’s just a short list of things you can do with Cursor:
•   ✍️ Write your 100-page novel with a dedicated assistant who already knows your plot, characters, tone.
•   📊 Build strategy decks where every amendment, every critical talking point, is preserved in context. No need to pause and recollect.
•   🗂️ Manage research & knowledge bases across topics. Weeks later, your AI will remember what you meant by “Plan A vs Plan B.”
•   🎮 My personal favorite – Design systems, games, products with shared reference docs so changes in one place reflect everywhere.

Here’s a VERY quick 2-step “how to start your novel, research, or even a PRD with really solid context”:
1. Create your reference docs in Cursor (traditionally that’s a “Claude.md”).
•   Character sheets: who people are and what motivates them
•   World / setting / tone doc: what style you’re going for, key rules
•   Plot outline: high-level beats
2. Instantiate your AI assistant using those docs as preloaded context
•   When you prompt, include reference links or identifiers rather than re-stating everything
•   As you write, update the docs in Cursor and let the assistant refer back. Treat it like shared memory
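
To make this concrete, here’s a minimal sketch of what that doc set might look like. The file names and layout are just one possible convention (mine), not something Cursor requires, and the character is hypothetical:

```
project/
├── Claude.md        # top-level context: what this project is, tone, links to docs below
├── characters.md    # one section per character: name, motive, voice
├── world.md         # setting, style rules, things that must stay consistent
└── outline.md       # high-level plot beats, updated as you write

# Example excerpt from characters.md:
## Mara Voss (hypothetical)
- Motive: clear her brother's name
- Voice: clipped, dry humor, never explains herself twice
```

Then a prompt can just say “Draft chapter 3 per outline.md, keeping Mara consistent with characters.md” instead of re-pasting any of it.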

If you like thinking about how we can make communication easier with AI, check out my “Designed to Be Understood” series, where I explore this stuff in depth.

How to Write a $180K Marketing Strategy for 6 Business Locations in Your Area

Just helped create a $180K marketing strategy for 6 business locations in Westchester County — full competitive analysis, hyper-local targeting, community partnerships, and a week-by-week plan the teams can actually run.

Here’s the thing: small businesses need this level of rigor too — but not the $15K+ price tag.

So I built Nucleus — putting your small business at the center of your local market.

What makes it different:
🎯 Real market research (competitor analysis, customer demographics, local opportunities)
✅ Execution-ready plans (weekly milestones, owners, and budget by channel)
🔧 Industry-specific guidance tailored to your business type

I’m testing with 10 small businesses — full strategy (normally ~$2K) free during the pilot.

Comment “NUCLEUS” or DM your city + industry + budget range to get details.

#SmallBusinessMarketing #LocalMarketing #MarketingStrategy

Prompt Engineering: Making Viral Posts on LinkedIn Ethically

Every other day I see the same post: 👉 “Google, Harvard, and Microsoft are offering FREE AI courses.”

And every day I think: do we really need the 37th recycled list?

So instead of just pasting another one… I decided to “write” the ultimate prompt that anyone can use to make their own viral “Free AI Courses” post. 🧩

⚡ So… Here’s the Prompt (Copy -> Paste -> Flex):



You are writing a LinkedIn post that intentionally acknowledges the recycled nature of “Free AI Courses” list posts, but still delivers a genuinely useful, ultimate free AI learning guide.

Tone: Self-aware, slightly humorous, but still authoritative. Heavy on the emoji use.
Structure:
1. Hook — wink at the sameness of these posts.
2. Meta transition — admit you asked AI to cut through the noise.
3. Numbered list — 7–9 resources, each with:
• Course name + source
• What you’ll learn
• How to access it for free
4. Mix big names + under-the-radar gems.
5. Closing — light joke + “What did I miss?” CTA.

Addendum: Expand to as many free AI/ML courses as LinkedIn’s 3,000-character limit will allow, grouped into Foundations / Intermediate / Advanced / Ethics.
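
If you’d rather run this prompt through an API than paste it into a chat window, here’s a minimal sketch using the OpenAI Python client. The model name and the system/user split are my assumptions; any chat-capable model works:

```python
# Minimal sketch: run the viral-post prompt via the OpenAI Python client.
# Assumes OPENAI_API_KEY is set and that "gpt-4o" is available to you.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "You are writing a LinkedIn post..."  # paste the full prompt above
USER_PROMPT = "Topic: free AI/ML courses. Keep it under LinkedIn's 3,000-character limit."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": USER_PROMPT},
    ],
)
print(response.choices[0].message.content)
```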



💡 Translation: I’m not just tossing you another recycled list. I’m giving you the playbook for making one that feels fresh, funny, and actually useful. That’s the real power of AI—forcing everyone here to raise their game.

So take it, run it, grab a few free courses—and know you didn’t need someone else’s output to do it for you.

💪 Build authority by sharing what you learn.
🧠 Use AI for the grunt work so you can focus on insight.
💸 Save time, look smart, maybe even go viral while you’re at it.



🚀 And because I know people want the output itself… here’s a starter pack:
1. CS50’s Intro to AI with Python (Harvard) – Hands-on projects covering search, optimization, and ML basics. Free via edX (audit mode). 👉 cs50.harvard.edu/ai
2. Elements of AI (Univ. of Helsinki) – Friendly intro to AI concepts, no code required. 👉 elementsofai.com
3. Google ML Crash Course – Quick, interactive ML basics with TensorFlow. 👉 https://lnkd.in/eNTdD9Fm
4. fast.ai Practical Deep Learning – Build deep learning models fast. 👉 course.fast.ai
5. DeepMind x UCL Reinforcement Learning – The classic lectures by David Silver. 👉 davidsilver.uk/teaching


Happy weekend everyone!

What if AI Could Argue With Itself Before Advising Me?

“What if asking ChatGPT/Claude/Gemini wasn’t about getting the right answer — but watching the argument that gets you there?”

This is the question that caused me to launch GetIdea.ai — a side project that became a system, a system that became a mirror, and a mirror that occasionally throws insults at my ideas (thanks Harsh Critic 🔥).

Over the past few months, I’ve been building a multi-agent AI interface where ideas are tested not by a single voice, but by a council of distinct personalities. It’s not production-scale, but it’s already the most honest and useful AI interaction I’ve ever had. It started with a simple, frustrating problem:


🤔 The Problem: One Voice Is Almost Never Enough

If you’re at all like me, you may feel most AI chats are really monologues disguised as dialogue.

Even with all the right prompting, I still had to:

  • Play the Devil’s Advocate
  • Be the Strategic Thinker
  • Remember the market context
  • Question my own biases

And worst of all — I had to trust that the AI would play fair when it was really just playing along.

What I wanted wasn’t “help.” What I wanted was debate. Structured, selective, emotionally differentiated debate.


💡 The Concept: Assemble the Squad

So I built GetIdea.ai, a real-time multi-agent system where AI personas argue with each other so I don’t have to.

You ask a question — like:

“Should I quit my job to start an indie game studio?”

And instead of one fuzzy maybe-response, you get a brutal realist, a business strategist, and sometimes a Hype Champion trying to gas you up just enough to ignore them both.

What started as a test of CrewAI and WebSocket orchestration became a real product with:

  • Confidence-based agent routing
  • Conversation memory
  • Typing indicators
  • Real-time squad assembly
  • Modular personalities like Harsh Critic, Business Strategist, and Creative Catalyst (sketched below)

This isn’t just multiple answers — it’s structured dissent, with context, consistency, and enough personality to feel like a team, not a trivia night.
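
To give a flavor of what “modular personalities” means in practice, here’s a minimal CrewAI-style sketch. The persona text and task wording are illustrative stand-ins, not the actual GetIdea.ai definitions:

```python
# Illustrative sketch of modular personas in CrewAI (not production code).
from crewai import Agent, Task, Crew

harsh_critic = Agent(
    role="Harsh Critic",
    goal="Stress-test the user's idea for risk and known failure patterns",
    backstory="A blunt advisor who has watched a hundred startups die.",
)

strategist = Agent(
    role="Business Strategist",
    goal="Evaluate the idea's financial model, scalability, and unit economics",
    backstory="A pragmatic operator who only trusts numbers.",
)

debate = Task(
    description="Should I quit my job to start an indie game studio? "
                "Answer in your persona's voice, from your area of confidence.",
    expected_output="A short, persona-voiced recommendation.",
    agent=harsh_critic,
)

crew = Crew(agents=[harsh_critic, strategist], tasks=[debate])
print(crew.kickoff())
```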

🧠 How It Actually Works (And Why It Matters)

Let’s say you ask:

“Should I quit my job to start an indie game studio?”

Here’s what happens under the hood:

🧠 GetIdea.ai instantly assembles a confidence-filtered squad of AI advisors. In this case:

  • 🔥 Harsh Critic: high confidence in risk analysis, startup failure patterns, and tough love
  • 💰 Business Strategist: high confidence in financial models, scalability, and unit economics
  • 💡 Creative Catalyst: low confidence in risk — but very high in vision, innovation, and potential

Now here’s the kicker:

Each agent knows its strengths. It doesn’t pretend to be an expert in everything — it plays its role.
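
Here’s a minimal sketch of what that confidence-based routing could look like. The scores and threshold are invented for illustration; the real system computes its own:

```python
# Toy sketch of confidence-based agent routing (numbers are invented).
# Each persona declares per-topic confidence; only personas above the
# threshold speak. Lowering the threshold lets wilder voices in.
PERSONAS = {
    "Harsh Critic":        {"risk": 0.90, "finance": 0.60, "vision": 0.20},
    "Business Strategist": {"risk": 0.50, "finance": 0.90, "vision": 0.40},
    "Creative Catalyst":   {"risk": 0.10, "finance": 0.30, "vision": 0.95},
}

def assemble_squad(topics: list[str], threshold: float = 0.6) -> list[str]:
    """Return personas confident enough on at least one relevant topic."""
    return [
        name for name, scores in PERSONAS.items()
        if max(scores.get(topic, 0.0) for topic in topics) >= threshold
    ]

# "Should I quit my job to start an indie game studio?" touches risk + finance:
print(assemble_squad(["risk", "finance"]))                 # Critic + Strategist
print(assemble_squad(["risk", "finance"], threshold=0.3))  # Catalyst joins too
```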


🎭 The Argument in Action (Yes, this really happened)

So, how does it work? Let’s use your question: “Should I quit my job to start an indie game studio?”

Instead of one answer, the system instantly assembles an advisory squad based on confidence. For this query, it routes to three distinct personas: a visionary, a pragmatist, and a realist.

Then, you get to watch the argument unfold in real-time.

🔥 Harsh Critic:

“The market is brutally competitive and often financially disastrous. Without groundbreaking innovation and a financial safety net, it’s a fool’s errand. Keep your job while testing your concept on the side.”

💰 Business Strategist:

“Consider your revenue model and target audience. Ensure diversified income streams and validate your concept with market research before making a decision.”

💡 Creative Catalyst (not triggered this time):

(Filtered out) — their confidence score wasn’t high enough to weigh in unless you lowered the threshold.

🔥 Harsh Critic:

“Sure, the Strategist makes good points — but you’re not in the business yet. Talking revenue models before you’ve validated demand is like planning a restaurant menu before knowing if anyone’s hungry.”


🧪 Why That Matters

This isn’t a debate for entertainment. It’s a mirror for your own bias.

If you:

  • Want optimism → lower the threshold, and 💡 Creative Catalyst joins.
  • Want realism → leave it high, and only the tough love sticks.
  • Want synthesis → add 🔄 Synthesis Master, who bridges their insights into a decision framework.

That’s the hidden power of GetIdea.ai: It’s not answering your question — It’s helping you ask better ones.


🧪 The Experiment Behind the Scenes

There’s a hidden slider in the UI: Confidence Threshold. Slide it down, and you get wild ideas. Slide it up, and only the most certain agents speak.

That single control taught me more about my own bias than I expected. If I don’t want to hear Harsh Critic, it’s not because he’s wrong — it’s because I’m not ready for him. But when I am ready? His hit rate is scary.

Also — each conversation starts with “assembling your expert advisory team.” Because that’s how this should feel: like you’re being heard, not processed.


✨ Why This Matters (to Me and Maybe to You)

This isn’t a startup pitch. Not yet.

But it’s a signal that we’re moving from:

  • Query → Answer

to:

  • Question → Assembly → Synthesis

That’s not just more useful — it’s more human.

And honestly? It made me want to ask better questions.


👀 Coming Next in the Series

In Part 2: “The Build”, I’ll share:

  • The architecture I’m modernizing
  • Why crew_chat.py is 2,100 lines of chaos (and still worked)
  • What went wrong (and hilariously right)
  • How this system gave me real-time feedback on my own decision patterns

And eventually in Part 3: “The Payoff”, I’ll show where this is going — and why multi-agent systems might become the UI layer for better thought, not just better output.


✅ TL;DR (because I built this for people like me):

GetIdea.ai is:

  • A real, working multi-agent chat system
  • Built with CrewAI, FastAPI, and WebSocket magic
  • Designed to simulate collaborative, conflicting, yet emotionally readable decision-making
  • Still messy under the hood, but intentionally honest in tone

And maybe… it’s the future of how we talk to machines. By teaching them to talk to each other first.
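
For the technically curious, here’s a minimal sketch of the WebSocket side of a system like this. The endpoint path, message shape, and squad_reply helper are hypothetical stand-ins, not GetIdea.ai’s actual code:

```python
# Sketch: streaming persona replies over a FastAPI WebSocket.
# The /ws path, message format, and squad_reply() are hypothetical.
from fastapi import FastAPI, WebSocket

app = FastAPI()

async def squad_reply(question: str):
    """Placeholder for the multi-agent call; yields (persona, text) pairs."""
    for persona in ("Harsh Critic", "Business Strategist"):
        yield persona, f"[{persona}'s take on: {question}]"

@app.websocket("/ws")
async def advise(websocket: WebSocket):
    await websocket.accept()
    question = await websocket.receive_text()
    # Stream each persona's reply as it arrives, so the UI can show
    # typing indicators and real-time squad assembly.
    async for persona, text in squad_reply(question):
        await websocket.send_json({"persona": persona, "text": text})
    await websocket.close()
```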


🔗 Your Turn: Test It, Shape It, or Join In

The project is live, and this is where you come in. I’d be grateful for your help in any of these three ways:

  1. 🧪 Share Your Results: Try the tool with a real problem you’re facing. Post the most surprising or insightful piece of advice you get in the comments below.
  2. 💡 Suggest a Persona: What expert is missing from the council? A ‘Legal Advisor’? A ‘Marketing Guru’? Comment with the persona you think I should build next.
  3. 🤝 Become a Beta Tester: For those who want to go a step further, I’m looking for a handful of people for a 15-minute feedback session to help improve the experience. If you’re interested, just comment “I’m in!”

You can try the system right here: GetIdea.ai

I’m excited to hear what you think!

Why the “Worse” PM Job Might Be the Safer One Right Now

I used to think my biggest strength as a product leader was being a breaker of silos. I’m a business and systems architect at heart — the kind who refuses to just “ship fast” and instead builds systems and processes that make good products easier to ship.

The irony? Those same systems may have made it easier to replace the decision-making with AI.

That’s why a recent post about two Senior PMs stuck with me:

  • Senior PM A — Clear roadmap, supportive team, space to decide, loves the job.
  • Senior PM B — Constant firefighting, no clear goals, drowning in meetings, exhausted.

Same title. Same salary. Completely different realities.


The obvious answer

Most people see this and think: “Clearly, Senior PM A has the better gig. Who wouldn’t want clarity, respect, and breathing room?”

I agree — if you’re talking about today’s workplace.


The AI-era twist

In a well-oiled, optimized system, Senior PM A’s decisions follow predictable patterns: Quarterly planning? Review the metrics, weigh the trade-offs, pick a path. Feature prioritization? Run it through the scoring model. Resource allocation? Follow the established framework.

Those are exactly the kinds of structured, rules-based decisions AI can handle well — not because they’re trivial, but because they have clear inputs and repeatable logic.

Senior PM B’s world is different. One week it’s killing a feature mid-sprint because a major client threatened to churn over an unrelated issue. The next, it’s navigating a regulatory curveball that suddenly affects three product lines. Then the CEO declares a new strategic pivot — immediately.

This isn’t just chaos. It’s high-stakes problem-solving with incomplete data, shifting constraints, and human dynamics in the mix. Right now, that’s still work AI struggles to do.


Why chaos can be strategic

If you’re Senior PM B, you’re not just firefighting. You’re building skills that are harder to automate:

  • Reading between the lines — knowing when “customers are asking for this” means three key deals are at risk vs. one loud voice in the room.
  • Navigating crosscurrents — redirecting an “urgent” marketing request toward something that actually moves the business.
  • Making judgment calls with partial data — acting decisively while staying ready to adapt.

These skills aren’t “soft.” They’re advanced problem-solving abilities. AI can process information, but right now it struggles to match human problem-solving in high-context, high-stakes situations.


How to use the advantage

If you’re in the chaos seat, you have leverage — but only if you’re intentional:

  1. Document your decisions — keep a log that shows how you reason through ambiguity, not just what you decided.
  2. Translate chaos into patterns — identify which recurring problems point to deeper systemic fixes.
  3. Build your network — the people you can call in a pinch are as valuable as any process.

The long game

Eventually, AI will get better at handling some of this unpredictability too. But the people best positioned to design that AI? They’re the ones who’ve lived the chaos and know which decisions can be structured — and which can’t.


The takeaway

In the AI era, the “worse” jobs might be the ones teaching you the most resilient skills — especially the hardest to teach: problem solving. So, if you’re Senior PM B right now, you may be tired — but you’re also learning how to make high-context, high-stakes calls in ways AI can’t yet match.

The key is to treat it as training for the future, not just survival in the present.

From Prompt Engineering to the Cognitive Mesh: Mapping the Future of AI Interaction

What if AI stopped being a tool and started being a participant?

In the early days of generative AI, we obsessed over prompts. “Say the magic words,” we believed, and the black box would reward us. But as AI systems mature, a new truth is emerging: It’s not what you say to the model. It’s how much of the world it understands.

In my work across enterprise AI, product design, and narrative systems, I’ve started seeing a new shape forming. One that reframes our relationship with AI from control to collaboration to coexistence. Below is the framework I use to describe that evolution.

Each phase marks a shift in who drives, what matters, and how value is created.

🧱 Phase 1: Prompt Engineering (Human)

Say the magic words.

This is where it all began. Prompt engineering is the art of crafting inputs that unlock high-quality outputs from language models. It’s clever, creative, and sometimes fragile.

Like knowing in 2012 that the best way to get an honest answer from Google was to add the word “reddit” to the end of your search.

Think: ChatGPT guides, jailbreaking tricks, or semantic games to bypass filters. But here’s the limitation: prompts are static. They don’t know you. They don’t know your system. And they don’t scale.

🧠 Phase 2: Context Engineering (Human)

“Feed it more of the world.”

In this phase, we stop trying to outsmart the model and start enriching it. Context Engineering is about structuring relevant information—documents, style guides, knowledge graphs, APIs, memory—to simulate real understanding. It’s the foundation of Retrieval-Augmented Generation (RAG), enterprise copilots, and memory-augmented assistants. This is where most serious AI products live today. But context alone doesn’t equal collaboration. Which brings us to what’s next.
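
As a toy illustration of the retrieve-then-generate pattern behind RAG (naive keyword overlap stands in for a real embedding search, and the docs are invented):

```python
# Toy sketch of Retrieval-Augmented Generation: retrieve relevant context,
# then hand it to the model. Keyword overlap stands in for vector search.
DOCS = [
    "Style guide: product copy is plain-spoken, no jargon, short sentences.",
    "Q3 roadmap: ship the onboarding revamp, defer the analytics dashboard.",
    "Support FAQ: password resets expire after 24 hours.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by words shared with the query (stand-in for cosine similarity)."""
    query_words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(query_words & set(d.lower().split())))[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is on the Q3 roadmap?"))
```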

🎼 Phase 3: Cognitive Orchestrator (Human-in-the-loop)

“Make the system aware of itself.”

This phase marks the shift from feeding AI to aligning it. The Cognitive Orchestrator is not prompting or contextualizing—they’re composing the system. They design how the AI fits into workflows, reacts to tension, integrates across timelines, and adapts to team dynamics. It’s orchestration, not instruction.

Example 1:

Healthcare: An AI in a hospital emergency room coordinates real-time patient data, staff schedules, and equipment availability. It doesn’t just process inputs—it anticipates triage needs, flags potential staff fatigue from shift patterns, and suggests optimal resource allocation while learning from doctors’ feedback.

The system maintains feedback loops with clinicians, weighting their overrides as higher-signal inputs to refine its triage algorithms, blending actual human intuition with pattern recognition.

Example 2:

Agile Software Development: Imagine an AI integrated into a DevOps pipeline, analyzing code commits, sprint progress, and team communications. It detects potential delays, suggests task reprioritization based on developer workload, and adapts to shifting project requirements, acting as a real-time partner that evolves alongside the team.

This is the human’s last essential role before orchestration gives way to emergence.

🔸 Phase 4: Cognitive Mesh (AI)

“Weave the world back together.”

Now the AI isn’t being engineered—it’s doing the weaving. In a Cognitive Mesh, AI becomes a living participant across tools, teams, data streams, and behaviors. It observes. It adapts. It reflects. And critically, it no longer needs to be driven by a human hand. The orchestrator becomes the observed.

It’s speculative, yes. But early signals are here: agent swarms, autonomous copilots, real-time knowledge graphs.

Example 1:

Autonomous Logistics Networks: Picture a global logistics network where AI agents monitor weather, port congestion, and market demands, autonomously rerouting shipments, negotiating with suppliers, and optimizing fuel costs in real time.

These agents share insights across organizations, forming an adaptive ecosystem that balances cost, speed, and sustainability without human prompts.

Example 2:

Smart Cities: AI systems in smart cities, like those managing energy grids, integrate real-time data from traffic, weather, and citizen feedback to optimize resource distribution. These systems don’t just follow rules; they evolve strategies by learning from cross-domain patterns, such as predicting energy spikes from social media trends.

Transition Markers:

  • AI begins initiating actions based on patterns humans haven’t explicitly programmed. For example, an AI managing a retail supply chain might independently adjust inventory based on social media sentiment about a new product, without human prompting.
  • AI develops novel solutions by combining insights across previously disconnected domains. Imagine an AI linking hospital patient data with urban traffic patterns to optimize ambulance routes during rush hour.
  • AI systems develop shared protocols (e.g., research AIs publishing findings to a decentralized ledger, where climate models in Europe auto-update based on Asian weather data).

We’re already seeing precursors in decentralized AI frameworks like AutoGen and IoT ecosystems, such as smart grids optimizing energy across cities. The mesh is forming. We should decide how we want to exist inside it.

From Engineer to Ecosystem

Prompt Engineering was about asking the right question. Context Engineering gave it the background. Cognitive Orchestration brought AI into the room. Cognitive Mesh gives it a seat at the table and sometimes at the head.

This is the arc I see emerging. And it’s not just technical—it’s cultural. The question isn’t

“how smart will AI get?”

It’s:

How do we design systems where we still matter when it does?

So, my open offer: let’s shape it together. If this framework resonates, or even if it challenges how you see your role in AI systems, I’d love to hear your thoughts.

Are you building for Phase 1-2 or Phase 4? What term lands with you: Cognitive Mesh or Cognitive Orchestrator? Drop a comment or DM me.

This story isn’t done being written, not by a long shot.

Walter Reid is the creator of the “Designed to Be Understood” AI series and a product strategist focused on trust, clarity, and the systems that hold them.

#AI #DesignedToBeUnderstood #FutureOfWork #CognitiveMesh #PromptEngineering #AIWorkflowDesign

Works Cited

Phase 1: Prompt Engineering

Hugging Face. “Prompt Engineering Guide.” 2023. Link

Liu, Pengfei, et al. “Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in NLP.” ACM Computing Surveys, 2023. Link

Phase 2: Context Engineering

Lewis, Patrick, et al. “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.” NeurIPS, 2020. Link

Ou, Yixin, et al. “Knowledge Graphs Empower LLMs: A Survey.” arXiv, 2024. Link

Pinecone. “Building RAG with Vector Databases.” 2024. Link

Phase 3: Cognitive Orchestrator

Wu, Qingyun, et al. “AutoGen: Enabling Next-Gen LLM Apps via Multi-Agent Conversation.” arXiv, 2023. Link

Zhang, Chi, et al. “AI-Enhanced Project Management.” IEEE, 2024. Link

Microsoft. “Copilot for Microsoft 365: AI in Workflows.” 2024. Link

Anthropic. “Constitutional AI.” arXiv, 2022. Link

Phase 4: Cognitive Mesh

Bommasani, Rishi, et al. “On the Opportunities and Risks of Foundation Models.” arXiv, 2021. Link

Heer, Jeffrey. “Agency in Decentralized AI Systems.” ACM Interactions, 2024. Link

IBM Research. “AI and IoT for Smart Cities.” 2023. Link

Russell, Stuart. Human Compatible. Viking Press, 2019.

Google Research. “Emergent Abilities of Large Language Models.” 2022.

Park, Joon Sung, et al. “Generative Agents: Interactive Simulacra of Human Behavior.” Stanford/Google Research, 2023. Link

OpenAI. “Multi-Agent Reinforcement Learning in Complex Environments.” 2024.


AI Is Given a Name When the AI Product Finds Market Fit

Calling 2025 “the year of AI model architectures” feels a bit like saying “you should add ‘Reddit’ to your Google search to get better results.”

It’s not wrong. It’s just… a little late to the conversation.

Here’s how long these model types have actually been around:
•   LLMs – 2018–2019 (BERT, GPT-2)
•   MLMs – 2018 (BERT, the original bidirectional model)
•   MoE – 2017–2021 (GShard, Switch Transformer)
•   VLMs – 2020–2021 (CLIP, DALL·E)
•   SLMs – 2019–2023 (DistilBERT, TinyGPT, Phi-2)
•   SAMs – 2023 (Meta’s Segment Anything)
•   LAMs – 2024–2025 (tool-using agents, Gemini, GPT-4o)
•   LCMs – 2024–2025 (Meta’s SONAR embedding space)

These aren’t new ideas. They’re rebrands of ideas that finally hit product-market-fit.

Custom GPT: Radically Honest

This image was made not to sell something — but to show something.
It’s the visual blueprint of a GPT called Radically Honest, co-designed with me by a GPT originally configured to make games.
That GPT didn’t just help build another assistant — it helped build a mirror. One that shows how GPTs are made, what their limits are, and where their values come from.
The system prompt, the story, the scaffolding — it’s all in the open.
Because transparency isn’t just a feature. It’s a foundation.
👉 Explore it here: https://lnkd.in/eBENt_gj

Description of the Custom GPT: “Radically Honest is a GPT that prioritizes transparency above all else. It explains how it works, what it knows, what it doesn’t — and why. You can ask it about its logic, instructions, reasoning, and even its limits. It is optimized to be trustworthy and clear.”


#AIethics #PromptDesign #RadicallyHonest #GPT #Transparency #DesignTrust

A special thanks to the Custom GPT “Game Designer,” who authored this piece and helped build a unique kind of GPT.

✍️ Written by Walter Reid at https://www.walterreid.com

🧠 Creator of Designed to Be Understood at (LinkedIn) https://www.linkedin.com/newsletters/designed-to-be-understood-7330631123846197249 and (Substack) https://designedtobeunderstood.substack.com

🧠 Check out more writing by Walter Reid (Medium) https://medium.com/@walterareid

🔧 He is also a subreddit creator and moderator at:
r/AIPlaybook (https://www.reddit.com/r/AIPlaybook) for tactical frameworks and prompt design tools.
r/BeUnderstood (https://www.reddit.com/r/BeUnderstood/) for additional AI guidance.
r/AdvancedLLM (https://www.reddit.com/r/AdvancedLLM/) where we discuss LangChain, CrewAI, and other agentic AI topics for everyone.
r/PromptPlaybook (https://www.reddit.com/r/PromptPlaybook/) where he shares advanced techniques for prompt (and context) engineers.
r/UnderstoodAI (https://www.reddit.com/r/UnderstoodAI/) where we confront the idea that LLMs don’t understand us — they model us. But what happens when we start believing the model?