Or: How I Learned to Stop Prompt-and-Praying and Start Building Reusable Systems
Learning How to Encode Your Creative Judgment
I’m about to share working patterns that took MONTHS to discover. Not theory — lived systems architecture applied to a creative problem that most people are still solving with vibes and iteration.
If you’re here because you’re tired of burning credits on video generations that miss the mark, or you’re wondering why your brand videos feel generic despite detailed prompts, or you’re a systems thinker who suspects there’s a better way to orchestrate creative decisions — this is for you. (Meta Note: This also works for images and even music)
The Problem: The Prompt-and-Pray Loop
Most people are writing video prompts like they’re texting a friend.
Here’s what that looks like in practice:
- Write natural language prompt: “A therapist’s office with calming vibes and natural light”
- Generate video (burn credits)
- Get something… close?
- Rewrite prompt: “A peaceful therapist’s office with warm natural lighting and plants”
- Generate again (burn more credits)
- Still not quite right
- Try again: “A serene therapy space with soft morning sunlight streaming through windows, indoor plants, calming neutral tones”
- Maybe this time?
The core issue isn’t skill — it’s structural ambiguity.
When you write “a therapist’s office with calming vibes,” you’re asking the AI to:
- Invent the color palette (cool blues? warm earth tones? clinical whites?)
- Choose the lighting temperature (golden hour? overcast? fluorescent?)
- Decide camera angle (wide establishing shot? intimate close-up?)
- Pick props (modern minimalist? cozy traditional? clinical professional?)
- Guess the emotional register (aspirational? trustworthy? sophisticated?)
Every one of those is a coin flip. And when the output is wrong, you can’t debug it because you don’t know which variable failed.
The True Cost of Video Artifacts
It’s not just credits. It’s decision fatigue multiplied by uncertainty. You’re making creative decisions in reverse — reacting to what the AI guessed instead of directing what you wanted.
For brands, this gets expensive fast:
- Inconsistent visual language across campaigns
- No way to maintain character/scene consistency across shots
- Can’t scale production without scaling labor and supervision
- Brand identity gets diluted through iteration drift
This is the prompt tax on ambiguity.
The Insight: Why JSON Changes Everything
Here’s the systems architect perspective that changes everything:
Traditional prompts are monolithic. JSON prompts are modular.
When you structure a prompt like this:
{
  "scene": {
    "title": "Therapy Space",
    "style": {
      "render": "Documentary realism",
      "lighting": "Soft natural light, morning golden hour",
      "camera_equipment": "35mm, shallow DOF, handheld stability"
    },
    "character": {
      "appearance": "Not shown — focus on environment",
      "emotional_journey": "Calm anticipation"
    },
    "environment": {
      "location": "Converted brownstone therapy office, NYC",
      "props": ["Leather armchair", "Small side table", "Tissue box", "Window with sheer curtains"],
      "atmospherics": "Quiet, warm, safe"
    }
  }
}
You’re doing something profound: separating concerns.
Now when something’s wrong, you know where it’s wrong:
- Lighting failed? → style.lighting
- Character doesn’t match? → character.appearance
- Camera motion is jarring? → style.camera_equipment
- Props feel off? → environment.props
This is human debugging for creativity.
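To make that concrete, here is a minimal sketch of field-level debugging (the file path is illustrative, and generate_video is a hypothetical stand-in for whatever generation backend you use):

import json

# Load the prompt that produced the almost-right output (path is illustrative).
with open("prompts/therapy_space.json") as f:
    prompt = json.load(f)

# Only the lighting was wrong, so only the lighting changes.
prompt["scene"]["style"]["lighting"] = "overcast soft, diffused through sheer curtains"

# Everything else stays constant, so the next generation isolates one variable.
# generate_video(prompt)  # hypothetical call to your generation backend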
The Deeper Game: Composability
JSON isn’t just about fixing errors — it’s about composability.
You can now:
- Save reusable templates: “intimate conversation,” “product reveal,” “chase scene,” “cultural moment”
- Swap values programmatically: Same structure, different brand/product/message
- A/B test single variables: Change only lighting while holding everything else constant
- Scale production without scaling labor: Generate 20 product videos by looping through a data structure
This is the difference between artisanal video generation and industrial-strength content production.
The Case Study: Admerasia
Let me show you why this matters with a real example.
Understanding the Brand
Admerasia is a multicultural advertising agency founded in 1993, specializing in Asian American marketing. They’re not just an agency — they’re cultural translators. Their tagline tells you everything: “Brands & Culture & People”.
That “&” isn’t decoration. It’s philosophy. It represents:
- Connection: Bridging brands with diverse communities
- Conjunction: The “and” that creates meaning between things
- Cultural fluency: Understanding the spaces between cultures
Their clients include McDonald’s, Citibank, Nissan, State Farm — Fortune 500 brands that need authentic cultural resonance, not tokenistic gestures.
The Challenge
How do you create video content that:
- Captures Admerasia’s cultural bridge-building mission
- Reflects the “&” motif visually
- Feels authentic to Asian American experiences
- Works across different contexts (brand partnerships, thought leadership, social impact)
Traditional prompting would produce generic “diverse people smiling” content. We needed something that encodes cultural intelligence into the generation process.
The Solution: Agentic Architecture
I built a multi-agent system using CrewAI that treats video prompt generation like a creative decision pipeline. Each agent handles one concern, with explicit handoffs and context preservation.
Here’s the architecture:
Brand Data (JSON)
↓
[Brand Analyst] → Analyzes identity, builds mood board
↓
[Business Creative Synthesizer] → Creates themes based on scale
↓
[Vignette Designer] → Designs 6-8 second scene concepts
↓
[Visual Stylist] → Defines aesthetic parameters
↓
[Prompt Architect] → Compiles structured JSON prompts
↓
Production-Ready Prompts (JSON)
Let’s Walk Through It
Agent 1: Brand Analyst
What it does: Understands the brand’s visual language and cultural positioning
Input: Brand data from brand.json:
{
  "name": "Admerasia",
  "key_traits": [
    "Full-service marketing specializing in Asian American audiences",
    "Expertise in cultural strategy and immersive storytelling",
    "Known for bridging brands with culture, community, and identity"
  ],
  "slogans": [
    "Brands & Culture & People",
    "Ideas & Insights & Identity"
  ]
}
What it does:
- Performs web search to gather visual references
- Downloads brand-relevant imagery for mood board
- Identifies visual patterns: color palettes, composition styles, cultural symbols
- Writes analysis to test output for validation
Why this matters: This creates a reusable visual vocabulary that ensures consistency across all generated prompts. Every downstream agent references this same foundation.
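A sketch of how that shared foundation can be passed along (the file name is illustrative; the tone and style values come from the actual Agent 1 output shown later):

import json

# The Brand Analyst writes its findings once...
analysis = {
    "tone": "Multicultural, Inclusive, Authentic",
    "style": "Creative, Engaging, Community-focused",
}
with open("output/admerasia/brand_analysis.json", "w") as f:
    json.dump(analysis, f, indent=2)

# ...and every downstream agent reads the same file,
# so they all work from one visual vocabulary.
with open("output/admerasia/brand_analysis.json") as f:
    foundation = json.load(f)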
Agent 2: Business Creative Synthesizer
What it does: Routes creative direction based on business scale and context
This is where most prompt systems fail. They treat a solo therapist and Admerasia the same way.
The routing logic:
def scale_to_emotional_scope(scale):
    if scale in ["solo", "small"]:
        return "intimacy, daily routine, personalization, local context"
    elif scale == "midsize":
        return "professionalism, community trust, regional context"
    elif scale == "large":
        return "cinematic impact, bold visuals, national reach"
For Admerasia (midsize agency):
- Emotional scope: Professional polish + cultural authenticity
- Visual treatment: Cinematic but grounded in real experience
- Scale cues: NYC-based, established presence, thought leadership positioning
Output: 3 core visual/experiential themes:
- Cultural Bridge: Showing connection between brand and community
- Strategic Insight: Positioning Admerasia as thought leaders
- Immersive Storytelling: Their creative process in action
Agent 3: Vignette Designer
What it does: Creates 6-8 second scene concepts that embody each theme
Example vignette for “Cultural Bridge” theme:
Concept: Street-level view of NYC featuring Admerasia’s “&” motif in urban context
Scene beats:
- Opening: Establishing shot of NYC street corner
- Movement: Slow tracking shot past bilingual mural
- Focus: Typography revealing “Brands & Culture & People”
- Atmosphere: Ambient city energy with cross-cultural music
- Emotion: Curiosity → connection
Agent 4: Visual Stylist
What it does: Defines color palettes, lighting, camera style
For Admerasia:
- Color palette: Warm urban tones with cultural accent colors
- Lighting: Natural late-afternoon sunlight (aspirational but authentic)
- Camera style: Tracking dolly (cinematic but observational)
- Visual references: Documentary realism meets brand film polish
Agent 5: Prompt Architect
What it does: Compiles everything into structured JSON
Here’s the actual output:
{
  "model": "google_veo_v3",
  "reasoning": "Showcasing Admerasia's cultural bridge-building in a vibrant city setting.",
  "scene": {
    "title": "Bridge of Stories",
    "duration_seconds": 8,
    "fps": 30,
    "aspect_ratio": "16:9",
    "style": {
      "render": "cinematic realism",
      "lighting": "warm late-afternoon sunlight",
      "camera_equipment": "tracking dolly"
    },
    "character": {
      "name": "None",
      "appearance": "n/a",
      "emotional_journey": "curiosity → connection"
    },
    "environment": {
      "location": "NYC street corner featuring bilingual murals",
      "props": ["reflective street art", "subtle cross-cultural symbols"],
      "atmospherics": "ambient city bustle with soft cross-cultural music"
    },
    "script": [
      {
        "type": "stage_direction",
        "character": "None",
        "movement": "slow track past mural clearly reading 'Brands & Culture & People' in bold typography"
      }
    ]
  }
}
Why This Structure Works
Contrast this with a naive prompt:
❌ Naive: “Admerasia agency video showing diversity and culture in NYC”
✅ Structured JSON above
The difference?
The first is a hope. The second is a specification.
The JSON prompt:
- Explicitly controls lighting and time of day
- Specifies camera movement type
- Defines the emotional arc
- Identifies precise visual elements (mural, typography)
- Includes audio direction
- Maintains the “&” motif as core visual identity
Every variable is defined. Nothing is left to chance.
The Three Variables You Can Finally Ignore
This is where systems architecture diverges from “best practices.” In production systems, knowing what not to build is as important as knowing what to build.
1. Ignore generic advice about “being descriptive”
Why: Structure matters more than verbosity.
A tight JSON block beats a paragraph of flowery description. The goal isn’t to write more — it’s to write precisely in a way machines can parse reliably.
2. Ignore one-size-fits-all templates
Why: Scale-aware routing is the insight most prompt guides miss.
The small-business localizer (we’ll get to it) shows this perfectly. A solo therapist and a Fortune 500 brand need radically different treatments. The same JSON structure, yes. But the values inside must respect business scale and context.
3. Ignore the myth of “perfect prompts”
Why: The goal isn’t perfection. It’s iterability.
JSON gives you surgical precision for tweaks:
- Change one field: "lighting": "golden hour" → "lighting": "overcast soft"
- Regenerate
- Compare outputs
- Understand cause and effect
That’s the workflow. Not endless rewrites, but controlled iteration.
The Transferable Patterns
You don’t need my exact agent setup to benefit from these insights. Here are the patterns you can steal:
Pattern 1: The Template Library
Build a collection of scene archetypes:
- Intimate conversation
- Product reveal
- Chase scene
- Cultural moment
- Thought leadership
- Behind-the-scenes
Each template is a JSON structure with placeholder values. Swap in your specific content.
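Here is one way to do the swap, as a sketch (the fill_template helper and the dotted-key convention are illustrative, not from any particular library):

import copy
import json

def fill_template(template: dict, values: dict) -> dict:
    """Deep-copy the template, then overwrite only the fields supplied in values."""
    prompt = copy.deepcopy(template)  # never mutate the shared template
    for dotted_key, value in values.items():
        node = prompt
        *path, leaf = dotted_key.split(".")
        for key in path:
            node = node[key]
        node[leaf] = value
    return prompt

# Usage: load a template from your library, swap in scene-specific values.
with open("templates/intimate_conversation.json") as f:
    template = json.load(f)

prompt = fill_template(template, {
    "scene.title": "First Session",
    "scene.style.lighting": "soft morning light through sheer curtains",
    "scene.environment.location": "Converted brownstone therapy office, NYC",
})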
Pattern 2: Constraint Injection
Define “avoid” and “include” lists per context:
{
  "scene_constraints": {
    "avoid": ["corporate sterility", "stock photo aesthetics", "tokenistic diversity"],
    "include": ["authentic cultural markers", "urban NYC texture", "observable human scale"]
  }
}
These guide without dictating. They’re creative boundaries, not rules.
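A sketch of how you might inject these constraints at generation time (attaching them to the prompt itself is my assumption about how a pipeline consumes them; yours may feed them to a reviewing agent instead):

import json

def inject_constraints(prompt: dict, constraints: dict) -> dict:
    """Attach avoid/include lists so the generator (or a reviewing agent) can honor them."""
    scene = prompt.setdefault("scene", {})
    scene["constraints"] = {
        "avoid": constraints.get("avoid", []),
        "include": constraints.get("include", []),
    }
    return prompt

prompt = {"scene": {"title": "Cultural Bridge"}}
prompt = inject_constraints(prompt, {
    "avoid": ["corporate sterility", "stock photo aesthetics"],
    "include": ["authentic cultural markers", "urban NYC texture"],
})
print(json.dumps(prompt, indent=2))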
Pattern 3: Scale Router
Branch creative direction based on business size:
- Solo/small → Grounded, local, human-scale
- Midsize → Polished, professional, community-focused
- Large → Cinematic, bold, national reach
Same JSON structure. Different emotional register.
Pattern 4: Atomic Test
When debugging, change ONE field at a time:
- Test lighting variations while holding camera constant
- Test camera movement while holding lighting constant
- Build intuition for what each parameter actually controls
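A minimal sketch of atomic testing (base_prompt is a trimmed stand-in for your full template, and generate_video is hypothetical):

import copy

# A trimmed stand-in for your full prompt template.
base_prompt = {
    "scene": {
        "style": {"lighting": "natural", "camera_equipment": "handheld"},
        "environment": {"location": "Local street festival"},
    }
}

lighting_options = ["golden hour", "overcast soft", "warm tungsten interior"]

variants = []
for lighting in lighting_options:
    variant = copy.deepcopy(base_prompt)  # hold every other field constant
    variant["scene"]["style"]["lighting"] = lighting
    variants.append(variant)

# Generate each variant and compare: the only difference is lighting,
# so any change in the output is attributable to that single parameter.
# for v in variants:
#     generate_video(v)  # hypothetical generation call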
Pattern 5: Batch Generation
Loop over data, inject into template, generate at scale:
import copy

for brand in brands:
    prompt = copy.deepcopy(template)  # deep copy: nested dicts must not be shared across prompts
    prompt["scene"]["environment"]["location"] = brand.location
    prompt["scene"]["style"]["lighting"] = brand.lighting_preference
    generate_video(prompt)
This is the power of structured data.
The System in Detail: Agent Architecture
Let’s look at how the agents actually work together. Each agent in the pipeline has a specific role defined in roles.json:
Agent Roles & Tools
{
  "role": "Brand Analyst",
  "goal": "Analyze brand data and create visual mood boards",
  "tools": ["WebSearchTool", "MoodBoardImageTool", "FileWriterTool"],
  "allow_delegation": false
}
Why these tools?
- WebSearchTool: Gathers brand context and visual references
- MoodBoardImageTool: Downloads images with URL validation (rejects social media links)
- FileWriterTool: Saves analysis for downstream agents
The key insight: No delegation. The Brand Analyst completes its work independently, creating a stable foundation for other agents.
Agent 2: Business Creative Synthesizer
{
  "role": "Business Creative Synthesizer",
  "goal": "Translate business identity and scale into appropriate creative themes",
  "tools": ["WebSearchTool", "FileWriterTool"],
  "allow_delegation": true
}
Why delegation is enabled: This agent may need input from other specialists when dealing with complex brand positioning.
The scale-aware routing happens in tasks.py:
def scale_to_emotional_scope(scale):
    if scale in ["solo", "small"]:
        return "intimacy, daily routine, personalization, local context"
    elif scale == "midsize":
        return "professionalism, community trust, mild polish, neighborhood or regional context"
    elif scale == "large":
        return "cinematic impact, bold visuals, national reach"
For Admerasia (midsize agency), this returns: “professionalism, community trust, mild polish, neighborhood or regional context”
The SmallBusiness Localizer (Conditional)
This agent only activates for scale == "small". It uses small_business_localizer.json to inject business-type-specific constraints:
{
  "business_type": "psychologist",
  "scene_constraints": {
    "avoid": ["clients in distress", "hospital-like aesthetics"],
    "include": ["calm décor", "natural light", "welcoming atmosphere"]
  }
}
For Admerasia: This agent didn’t trigger (midsize), but it still ran and documented output showing how it would have guided downstream agents with grounded constraints.
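For reference, a minimal sketch of how that conditional gate can work (simplified; in the actual system the routing lives in the crew’s task setup, and the file path is illustrative):

import json

def maybe_localize(brand: dict, context: dict) -> dict:
    """Inject small-business constraints only when the brand's scale calls for it."""
    if brand.get("scale") != "small":
        return context  # midsize and large brands skip the localizer
    with open("input/small_business_localizer.json") as f:
        localizer = json.load(f)
    context["scene_constraints"] = localizer["scene_constraints"]
    return context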
What This Actually Looks Like: The Admerasia Pipeline
Let’s trace the actual execution with real outputs from the system.
Input: Brand Data
{
  "name": "Admerasia",
  "launch_year": 1993,
  "origin": "Multicultural advertising agency based in New York City, NY",
  "key_traits": [
    "Full-service marketing specializing in Asian American audiences",
    "Certified minority-owned small business with over 30 years of experience",
    "Expertise in cultural strategy, creative production, media planning",
    "Creates campaigns that bridge brands with culture, community, and identity"
  ],
  "slogans": [
    "Brands & Culture & People",
    "Ideas & Insights & Identity"
  ]
}
Agent 1 Output: Brand Analyst
Brand Summary for Admerasia:
Tone: Multicultural, Inclusive, Authentic
Style: Creative, Engaging, Community-focused
Key Traits: Full-service marketing agency, specializing in Asian American
audiences, cultural strategy, creative production, and cross-cultural engagement.
Downloaded Images:
1. output/admerasia/mood_board/pexels-multicultural-1.jpg
2. output/admerasia/mood_board/pexels-multicultural-2.jpg
3. output/admerasia/mood_board/pexels-multicultural-3.jpg
4. output/admerasia/mood_board/pexels-multicultural-4.jpg
5. output/admerasia/mood_board/pexels-multicultural-5.jpg
What happened: The agent identified the core brand attributes and created a mood board foundation. These images become visual vocabulary for downstream agents.
Agent 2 Output: Creative Synthesizer
Proposed Themes:
1. Cultural Mosaic: Emphasizing the rich diversity within Asian American
communities through shared experiences and traditions. Features local events,
family gatherings, and community celebrations.
2. Everyday Heroes: Focuses on everyday individuals within Asian American
communities who contribute to their neighborhoods—from local business owners
to community leaders.
3. Generational Connections: Highlighting narratives that span across generations,
weaving together the wisdom of elders with the aspirations of youth.
The decision logic:
- Recognized Admerasia’s midsize scale
- Applied “professionalism, community trust” emotional scope
- Created themes that balance polish with authentic community storytelling
- Avoided both hyper-local (too small) and cinematic-epic (too large) treatments
Agent 3 Output: SmallBusiness Localizer
Even though this agent didn’t need to activate (Admerasia isn’t small-scale), it ran and documented constraints:
Scene Constraints:
- Include: Collaborative spaces with diverse staff, natural light, cultural
artifacts mixed with modern design, community events like festivals
- Avoid: Direct owner depictions, abstract or overly cinematic styles that
detract from intimacy and grounded narrative
Why this matters: These constraints inform how scenes should feel — professional but approachable, cultural but not tokenistic.
Agent 4 Output: Vignette Designer
Here are 2 of the 6 vignette concepts created:
1. Cultural Mosaic - Street Festival:
Scene of a vibrant local street festival where diverse cultural elements
come alive. Music fills the air as people of different heritages engage
in dance, showcasing traditional attire. A child taste-tests a dish from
a different culture, eyes lighting up with delight.
2. Generational Connections - Baking Together:
A cozy indoor scene with warm lighting where a grandparent sits with their
grandchild. The grandparent shares stories while showing old family photos.
Soft background music enhances the intimate dialogue and connection.
The creative translation: Each vignette embodies one of the three themes with specific, filmable moments.
Agent 5 Output: Visual Stylist
Color Palettes:
Cultural Mosaic:
- Primary: Rich Reds, Deep Oranges, Vibrant Yellows
- Secondary: Cool Blues, Soft Greens
- Accent: Bright Purples
- Tone: Warm and inviting, highlighting vibrancy and diversity
Generational Connections:
- Primary: Soft Pastels (Light Pinks, Blues, Yellows)
- Secondary: Earthy Greens, Light Beiges
- Accent: Warm Reds
- Tone: Intimate and heartfelt, showcasing personal connections
Style References:
- Natural light enhancing warmth and authenticity
- Documentary-style cinematography emphasizing genuine interactions
- Cultural elements subtly incorporated throughout
The aesthetic system: Each theme gets its own color language and lighting approach.
Agent 6 Output: Prompt Architect
Here’s one of the four final JSON prompts generated:
{
  "model": "google_veo_v3",
  "reasoning": "Utilized for its capability to capture vibrant community interactions and cultural storytelling.",
  "scene": {
    "title": "Cultural Mosaic - Street Festival",
    "duration_seconds": 10,
    "fps": 30,
    "aspect_ratio": "16:9",
    "style": {
      "render": "colorful",
      "lighting": "natural",
      "camera_equipment": "handheld"
    },
    "character": {
      "name": "Festival Attendees",
      "appearance": "Diverse traditional attires reflecting different cultures",
      "emotional_journey": "Joyful engagement and celebration"
    },
    "environment": {
      "location": "Local street festival",
      "props": ["colorful banners", "food stalls", "dancers"],
      "atmospherics": "Lively music, laughter, and the smell of various cuisines"
    },
    "script": [
      {
        "type": "stage_direction",
        "character": "Dancer",
        "movement": "twirls joyfully, showcasing vibrant outfit"
      },
      {
        "type": "dialogue",
        "character": "Child",
        "line": "Wow, can I try that dish?"
      }
    ]
  }
}
What Makes This Prompt Powerful
Compare this to what a naive prompt would look like:
❌ Naive prompt: “Asian American street festival with diverse people celebrating”
✅ Structured prompt (above)
The differences:
- Explicit visual control:
  - Render: “colorful” (not just implied)
  - Lighting: “natural” (specific direction)
  - Camera: “handheld” (conveys documentary authenticity)
- Emotional arc defined: “joyful engagement and celebration” (not left to interpretation)
- Scene composition specified:
  - Props enumerated: banners, food stalls, dancers
  - Atmospherics described: music, laughter, smells (multi-sensory specificity)
- Character and action scripted:
  - Stage direction: dancer twirls
  - Dialogue: child’s authentic reaction
  - Together, these create narrative momentum in 10 seconds
- Model selection justified: the reasoning field explains why Veo3 was chosen (“capability to capture vibrant community interactions”)
The Complete Output Set
The system generated 4 prompts covering all three themes:
- Cultural Mosaic – Street Festival (community celebration)
- Everyday Heroes – Food Drive (community service)
- Generational Connections – Baking Together (family tradition)
- Cultural Mosaic – Community Garden (intercultural exchange)
Each prompt follows the same JSON structure but with values tailored to its specific narrative and emotional goals.
What This Enables
For Admerasia’s creative team:
- Drop these prompts directly into Veo3
- Generate 4 distinct brand videos in one session
- Maintain visual consistency through structured style parameters
- A/B test variations by tweaking single fields
For iteration:
// Want warmer lighting?
"lighting": "natural" → "lighting": "golden hour"
// Want steadier camera?
"camera_equipment": "handheld" → "camera_equipment": "gimbal stabilized"
// Want different aspect ratio?
"aspect_ratio": "16:9" → "aspect_ratio": "9:16"
Change one line, regenerate, compare. Surgical iteration.
The Pipeline Success
From the final status output:
SUCCESS
The JSON file has been created and saved at 'output/admerasia/ad_prompts.json'
containing structured video prompts for each vignette.
Total execution:
- Input: Brand JSON + agent configuration
- Output: 4 production-ready video prompts
- Time: ~5 minutes of agent orchestration
- Human effort: Zero (after initial setup)
The Philosophy Shift
Most people think prompting is about describing what you want.
That’s amateur hour.
Prompting is about encoding your creative judgment in a way machines can execute.
JSON isn’t just a format. It’s a discipline. It forces you to:
- Separate what matters from what doesn’t
- Make your assumptions explicit
- Build systems, not one-offs
- Scale creative decisions without diluting them
This is what separates the systems architects from the hobbyists.
You’re not here to type better sentences.
You’re here to build leverage.
How to Build This Yourself
You don’t need my exact setup to benefit from these patterns. Here are three implementation paths, from manual to fully agentic:
Option 1: Manual Implementation (Start Here)
What you need:
- A text editor
- A JSON validator (any online tool works)
- Template discipline
The workflow:
- Create your base template by copying this structure:
{
  "model": "google_veo_v3",
  "scene": {
    "title": "[Scene Name]",
    "duration_seconds": 8,
    "fps": 30,
    "aspect_ratio": "16:9",
    "style": {
      "render": "[visual style]",
      "lighting": "[lighting direction]",
      "camera_equipment": "[camera/lens type]"
    },
    "character": {
      "name": "[character identifier]",
      "appearance": "[visual description]",
      "emotional_journey": "[start emotion] → [end emotion]"
    },
    "environment": {
      "location": "[specific place]",
      "props": ["item 1", "item 2", "item 3"],
      "atmospherics": "[mood, sounds, atmosphere]"
    },
    "script": [
      {
        "type": "stage_direction",
        "character": "[who]",
        "movement": "[what they do]"
      }
    ]
  }
}
- Build your template library for recurring scene types:
- conversation_template.json
- product_reveal_template.json
- action_sequence_template.json
- cultural_moment_template.json
- Create brand-specific values in a separate file:
{
  "brand_name": "Your Brand",
  "lighting_preference": "warm natural light",
  "color_palette": ["#hexcode1", "#hexcode2"],
  "camera_style": "documentary handheld",
  "emotional_register": "aspirational but authentic"
}
- Fill in templates by hand, using brand values as guidelines
- Validate JSON before generating to catch syntax errors early (see the sketch after this list)
- Track what works in a simple spreadsheet:
  - Template used
  - Values changed
  - Quality score (1-10)
  - Notes on what to adjust
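Here is a minimal validation sketch (the required keys follow the base template above; adjust to your own schema):

import json
import sys

REQUIRED_KEYS = ("model", "scene")  # matches the base template above

def validate_prompt(path: str) -> bool:
    """Catch syntax errors and missing fields before burning generation credits."""
    try:
        with open(path) as f:
            prompt = json.load(f)
    except json.JSONDecodeError as err:
        print(f"{path}: invalid JSON ({err})")
        return False
    missing = [key for key in REQUIRED_KEYS if key not in prompt]
    if missing:
        print(f"{path}: missing required keys: {missing}")
        return False
    return True

if __name__ == "__main__":
    results = [validate_prompt(p) for p in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)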
Time investment: ~30 minutes per prompt initially, ~10 minutes once you have templates
When to use this: You’re generating 1-5 videos per project, or you’re still learning what works
Option 2: Semi-Automated (Scale Without Full Agents)
What you need:
- Python basics
- A CSV or spreadsheet with your data
- The template library from Option 1
The workflow:
import json
import csv
import copy

# Load your template
with open('templates/product_reveal_template.json') as f:
    template = json.load(f)

# Load your products data
with open('products.csv') as f:
    reader = csv.DictReader(f)
    products = list(reader)

# Generate prompts
prompts = []
for product in products:
    prompt = copy.deepcopy(template)  # deep copy: nested dicts must not be shared across prompts

    # Inject product-specific values
    prompt['scene']['title'] = f"{product['name']} Reveal"
    prompt['scene']['environment']['props'] = [
        product['name'],
        product['category'],
        product['key_visual']
    ]
    prompt['scene']['character']['name'] = f"{product['name']} User"

    # Add product-specific lighting
    if product['category'] == 'luxury':
        prompt['scene']['style']['lighting'] = "dramatic with rim light"
    else:
        prompt['scene']['style']['lighting'] = "bright and accessible"

    prompts.append(prompt)

# Save batch prompts
with open('output/batch_prompts.json', 'w') as f:
    json.dump(prompts, f, indent=2)
Time investment: 2-3 hours to set up, then ~1 minute per prompt
When to use this: You’re generating 10+ similar videos, or you have structured data (products, locations, testimonials)
Option 3: Full Agentic System (What I Built)
What you need:
- Python environment (3.12+)
- CrewAI library
- API keys (Serper for search, Claude/GPT for LLM)
- The discipline to maintain agent definitions
The architecture:
# crew_setup.py excerpt
from crewai import Agent, Task, Crew
from crewai_tools import FileWriterTool, SerperDevTool

# Define agents
agents = [
    Agent(
        role="Brand Analyst",
        goal="Analyze brand data and create visual mood boards",
        tools=[SerperDevTool(), FileWriterTool()],
        verbose=True,
        allow_delegation=False
    ),
    Agent(
        role="Business Creative Synthesizer",
        goal="Translate business identity into creative themes",
        tools=[SerperDevTool(), FileWriterTool()],
        verbose=True,
        allow_delegation=True  # can ask other agents for input
    ),
    # ... more agents
]

# Define tasks with explicit context passing
tasks = [
    Task(
        description="Analyze brand from input/brand.json...",
        expected_output="Brand summary with tone, style, key traits",
        agent=agents[0]
    ),
    Task(
        description="Create 3 visual themes based on brand analysis...",
        expected_output="3 themed concepts with emotional framing",
        agent=agents[1]
    ),
    # ... more tasks
]

# Run the crew
crew = Crew(agents=agents, tasks=tasks, verbose=True)
result = crew.kickoff()
The key patterns in the full system:
- Scale-aware routing in tasks.py:
def scale_to_emotional_scope(scale):
    if scale in ["solo", "small"]:
        return "intimacy, daily routine, personalization"
    elif scale == "midsize":
        return "professionalism, community trust"
    elif scale == "large":
        return "cinematic impact, bold visuals"
- Constraint injection from small_business_localizer.json:
{
  "business_type": "therapist",
  "scene_constraints": {
    "avoid": ["clients in distress", "clinical aesthetics"],
    "include": ["calm décor", "natural light", "privacy cues"]
  }
}
- Test mode for validation:
TEST_MODE = True # Each agent writes test output for inspection
tasks = get_tasks(agent_lookup, test_mode=TEST_MODE, brand_slug=brand_slug)
Time investment:
- Initial setup: 10-15 hours
- Per-brand setup: 5 minutes (just update input/brand.json)
- Per-run: ~5 minutes of agent orchestration
- Maintenance: ~2 hours per month to refine agents
When to use this:
- You’re generating 50+ videos across multiple brands
- You need consistent brand interpretation across teams
- You want to encode creative judgment as a repeatable system
- You’re building a service/product around video generation
Visual: The Agent Pipeline
Here’s how the agents flow information:

[Diagram: Brand Data (JSON) → Brand Analyst → Business Creative Synthesizer → SmallBusiness Localizer (if scale == "small") → Vignette Designer → Visual Stylist → Prompt Architect → Production-Ready Prompts (JSON)]
Key design decisions:
- No delegation for Brand Analyst: Creates stable foundation
- Delegation enabled for Creative Synthesizer: Can consult specialists
- Conditional SmallBusiness Localizer: Only activates for scale == "small"
- Progressive refinement: Each agent adds detail, never overwrites
- Test outputs at each stage: Visibility into agent reasoning
What You Should Do Next
Depending on your situation:
If you’re just exploring:
- Use Option 1 (manual templates)
- Generate 3-5 prompts for your brand
- Track what works, build intuition
If you’re scaling production:
- Start with Option 1, move to Option 2 once you have 10+ prompts
- Build your template library
- Automate the repetitive parts
If you’re building a product/service:
- Consider Option 3 (full agentic)
- Invest in agent refinement
- Document your creative judgment as code
No matter which path:
- Start with the JSON structure (it’s the leverage point)
- Build your constraint lists (avoid/include)
- Track what works in a simple system
- Iterate on single variables, not entire prompts
The patterns transfer regardless of implementation. The key insight isn’t the agents — it’s structured creative judgment as data.
Final Thoughts: This Is About More Than Video
The JSON prompting approach I’ve shown here applies beyond video generation. The same principles work for:
- Image generation (Midjourney, DALL-E, Stable Diffusion)
- Music generation (Suno, Udio)
- 3D asset creation (any prompt-based generator)
- Code generation (structured requirements → implementation)
The underlying pattern is universal:
Structured input → Consistent output → Measurable iteration
Most people are stuck in the “describe and hope” loop because they haven’t separated concerns. They’re trying to do everything in one monolithic prompt. They can’t debug because they don’t know what broke. They can’t scale because every prompt is artisanal.
JSON isn’t magic. It’s discipline made visible.
When you structure your creative judgment as data:
- Machines can execute it reliably
- Teams can collaborate on it systematically
- You can iterate on it surgically
- It becomes a compounding asset, not a consumable effort
That’s the shift.
You’re not writing prompts. You’re building creative infrastructure.
And once you see it that way, you can’t unsee it.
About This Work
This system was built to solve a real problem for Admerasia, a multicultural advertising agency that needed to create culturally authentic video content at scale. The insights came from actually building and running the system, not from theory.
The patterns are open. The structure is reproducible. The agents are optional.
What matters is the discipline: encoding creative judgment in a way that scales.
If you build something with these patterns, I’d love to see it.
Walter Reid
AI Product Leader, Systems Designer & Business Architect
walterreid.com
LinkedIn: Designed To Be Understood or Contact Walter Reid
Repository and full code examples: Available on request for teams implementing these patterns in production.
