“Mathematics” edition.
Let’s say you have two people:
• Person A has a master’s degree in mathematics but no access to AI.
• Person B has only a basic education — but has 24/7 access to advanced AI tools like ChatGPT, Claude, or Wolfram.
Here’s the question that’s been eating at me:
Which one has more potential to be “better” at math?
(And yes — I’m intentionally putting “potential” and “better” in quotes.)
Does formal education outweigh intelligence amplified by tools?
Does AI unlock new ceilings — or just shortcut the path to shallow answers?
Can a machine-augmented thinker surpass someone with years of training in abstract problem-solving?
I’m not sure there’s a clean answer. But I’m very sure it’s the kind of question we need to start asking — if not for those in the market today, then for those entering it soon.
I’d love to hear your take — especially if you’ve seen this play out in real life. 👇
🧠 More questions like this in my newsletter below
- 🌐 Official Site: walterreid.com – Walter Reid’s full archive and portfolio
- 📰 Substack: designedtobeunderstood.substack.com – long-form essays on AI and trust
- 🪶 Medium: @walterareid – cross-posted reflections and experiments
💬 Reddit Communities:
- r/AIPlaybook – Tactical frameworks & prompt design tools
- r/BeUnderstood – AI guidance & human-AI communication
- r/AdvancedLLM – CrewAI, LangChain, and agentic workflows
- r/PromptPlaybook – Advanced prompting & context control
- r/UnderstoodAI – Philosophical & practical AI alignment
