The Prompt Engineering Playbook
- 60-Second Proof of Power
- Core Philosophy: The Chef Analogy
- The C.A.F.E. Framework
- The 80/20 Rule of Prompting
- Six Power Techniques
- Speed vs. Quality Spectrum
- Reliability Toolkit
- Emergency Fixes
- Detailed Fix Explanations
- Too Generic → Add Specific Context
- Wrong Tone → Provide Style Example
- Making Stuff Up → Add Sources + Citations
- Wrong Format → Provide Schema + Example
- Too Verbose → Set Clear Limits
- Shallow Thinking → Use Labeled Reasoning
- Ambiguous Request → Number Requirements
- Quick Recovery Patterns
- Battle-Tested Templates
- Common Pitfalls to Avoid
- Safety and Settings
- The Meta-Prompt: Your Personal Coach
- Complete Workflows
- Summary
Transform frustrating AI interactions into powerful results.
60-Second Proof of Power
Experience the transformation yourself right now:
Basic Prompt
Write about productivity
Result: Generic, forgettable content
C.A.F.E. Method Prompt
You're a productivity expert who's coached 500+ remote workers through burnout.
Write a 200-word insight about setting boundaries, using a phone battery metaphor.
Include one counterintuitive tip that actually works.
Avoid clichés like "work-life balance."
Result: Specific, valuable, actionable content
Accuracy-First Prompt
Using ONLY these sources, summarize key productivity findings.
If a claim isn't in the sources, write "Not found in sources."
Cite inline as [S1] or [S2].
Sources:
[S1] Study shows 90-minute work blocks increase focus by 30%
[S2] Remote workers report 2.5 hours of deep work daily vs 1 hour in-office
Result: Grounded, verifiable, zero hallucination
The difference? Prompt mastery. Let’s build yours.
Core Philosophy: The Chef Analogy
Think of LLMs like a master chef with every recipe ever written in their head:
- Vague request (“make food”) = random dish
- Specific request (“Thai green curry, mild, with tofu”) = exactly what you want
- Clear constraints (“nut-free, under 30 minutes”) = perfect fit
You’re not commanding a robot - you’re activating the right “recipe” from infinite possibilities.
Three Universal Truths:
- Clarity determines quality - The model matches your precision level
- Iteration beats perfection - Great prompts evolve through refinement
- Constraints create excellence - Boundaries focus creativity
The C.A.F.E. Framework
Every effective prompt follows this pattern. When factual accuracy matters, add Grounding as a fifth element.
┌─────────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐
│ Context │ ──► │ Action │ ──► │ Format │ ──► │ Examples │
│ (Who/Where) │ │ (What) │ │ (How) │ │ (Show) │
└─────────────┘ └──────────┘ └──────────┘ └───────────┘
│
▼ (For factual claims)
┌────────────────────┐
│ Grounding │
│ (Provide facts) │
└────────────────────┘
Framework Components
- Context: Set the scene and role
- Action: Use a clear, specific verb
- Format: Specify the output structure
- Examples: Include if helpful (especially for complex formats)
- Grounding: Required for factual claims (see The Grounded Answer)
Real-World C.A.F.E. Example
Scenario: You need a product comparison for your team meeting.
CONTEXT: You're a senior product manager with 8 years of experience at tech startups.
ACTION: Compare these two project management tools for our 15-person remote team.
FORMAT: Create a decision matrix with 5 criteria, scores 1-5, and a recommendation.
EXAMPLES:
- Criteria might include: ease of use, integrations, pricing, mobile app, customer support
- Scoring: 1=poor, 3=average, 5=excellent
GROUNDING: Use only the provided feature lists and pricing data.
Tools to compare: [Tool A details] vs [Tool B details]
Result: Structured, expert-level analysis that your team can actually use.
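If you build prompts programmatically, the four elements above can be assembled with a small helper. This is a minimal sketch; the function and parameter names (`build_cafe_prompt`, `fmt`, `grounding`) are illustrative, not part of the playbook itself.

```python
# Minimal sketch: assemble a C.A.F.E. prompt from its parts.
# All names here are illustrative placeholders.

def build_cafe_prompt(context, action, fmt, examples=None, grounding=None):
    """Join the C.A.F.E. elements (plus optional Grounding) into one prompt string."""
    parts = [f"CONTEXT: {context}", f"ACTION: {action}", f"FORMAT: {fmt}"]
    if examples:
        parts.append("EXAMPLES:\n" + "\n".join(f"- {e}" for e in examples))
    if grounding:
        parts.append(f"GROUNDING: {grounding}")
    return "\n".join(parts)

prompt = build_cafe_prompt(
    context="You're a senior product manager at a tech startup.",
    action="Compare these two project management tools for a 15-person remote team.",
    fmt="Decision matrix with 5 criteria, scores 1-5, and a recommendation.",
    examples=["Criteria: ease of use, integrations, pricing"],
    grounding="Use only the provided feature lists and pricing data.",
)
```

Keeping the elements as separate arguments makes it easy to swap one element (say, the format) while holding the rest constant during iteration.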
The 80/20 Rule of Prompting
Master just these 3 things for 80% effectiveness:
- Add a role: “You are a [specific expert]…”
- Show an example: “Like this: [example]”
- Specify format: “Provide as [structure]”
Everything else is optimization.
Six Power Techniques
Quick Technique Selector
Need | Technique |
---|---|
Clarity on requirements | Clarify-Then-Answer |
Expert perspective | Persona Activation |
Specific format | Show Don't Tell (Few-Shot) |
Better through iteration | Progressive Refinement |
Structured thinking | Labeled Reasoning |
Consistent output | Structured Output + Constraints |
Clarify-Then-Answer
When to use: Starting complex tasks, ambiguous requirements, unfamiliar domains
If anything is unclear or ambiguous, ask up to 3 specific questions first.
After I answer, provide the complete response in the requested format.
Why it works: Eliminates misunderstandings before they happen.
Persona Activation
When to use: Need expertise, specific perspective, or consistent voice
Pattern:
You are [specific expert] with [specific experience].
Your style is [characteristics].
You're helping [target audience].
[Task]
Live Example:
You are a senior data scientist who's deployed 50+ ML models in production.
Your style is pragmatic and skeptical of hype.
You're advising a startup founder with no technical background.
Should they use AI for customer churn prediction with only 500 customers?
Show Don’t Tell (Few-Shot)
When to use: Complex formats, specific style, precise outputs
Pattern:
Here's exactly what I want:
Input: [Example 1]
Output: [Perfect result 1]
Input: [Example 2]
Output: [Perfect result 2]
BAD example: [What to avoid]
Now process:
Input: [Your actual input]
Output:
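The few-shot layout above is mechanical enough to generate from a list of (input, output) pairs. A minimal sketch, with illustrative names (`few_shot_prompt`, `bad_example`):

```python
# Sketch: render (input, output) pairs in the Show-Don't-Tell layout above.
# Function and parameter names are illustrative.

def few_shot_prompt(pairs, query, bad_example=None):
    """Build a few-shot prompt ending with the real input and a blank Output."""
    lines = ["Here's exactly what I want:", ""]
    for inp, out in pairs:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    if bad_example:
        lines += [f"BAD example: {bad_example}", ""]
    lines += ["Now process:", f"Input: {query}", "Output:"]
    return "\n".join(lines)

p = few_shot_prompt(
    pairs=[("cat", "CAT"), ("dog", "DOG")],
    query="bird",
    bad_example="Cat (mixed case)",
)
```

Ending the prompt with a bare `Output:` invites the model to complete the pattern rather than comment on it.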
Progressive Refinement
When to use: Exploring possibilities, finding optimal constraints
Start simple, add precision:
v1: "Summarize this article"
↓
v2: "Summarize focusing on actionable insights"
↓
v3: "Extract 3 actionable insights for startup founders"
↓
v4: "3 bullet points, each starting with a verb, under 15 words"
Stop Rule: When adding constraints yields <10% quality gain, you’re done.
Labeled Reasoning
When to use: Math/logic problems, compliance requirements, debugging, decision-making
When to avoid: Creative writing, style exploration, open-ended brainstorming
Pattern:
[Problem statement]
Show your work with these labels:
- Given: [what we know]
- Goal: [what we're solving]
- Approach: [method chosen]
- Steps: [brief work shown]
- Answer: [final result]
- Check: [sanity test]
Structured Output + Constraints
When to use: APIs, data processing, strict formatting needs
Pattern:
[Task]
Requirements:
- Length: [specific limit]
- Must include: [mandatory elements]
- Must avoid: [forbidden elements]
- Format: [exact structure]
- Style: [tone/voice]
- Defaults: Use null/[] for missing data
Output as [format] with no additional commentary.
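Constraints like these can also be checked mechanically after the fact, so a violation is caught before the output ships. A minimal sketch, assuming you only care about word count plus required and forbidden terms (the helper name `check_constraints` is illustrative):

```python
# Sketch: check a model response against the constraints you set in the prompt.
# Constraint names and the helper are illustrative, not a standard API.

def check_constraints(text, max_words=None, must_include=(), must_avoid=()):
    """Return a list of violated constraints; an empty list means the text passes."""
    problems = []
    if max_words is not None and len(text.split()) > max_words:
        problems.append(f"over {max_words} words")
    for term in must_include:
        if term.lower() not in text.lower():
            problems.append(f"missing required term: {term}")
    for term in must_avoid:
        if term.lower() in text.lower():
            problems.append(f"contains forbidden term: {term}")
    return problems
```

Pairing a prompt-side constraint with a code-side check turns "too verbose" and "must avoid" from hopes into testable conditions.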
Speed vs. Quality Spectrum
Choose your investment based on importance:
Time | Approach | Use When | Example |
---|---|---|---|
30 sec | Basic role + task | Quick answers, low stakes | As a marketer, summarize: [text] |
2 min | Full C.A.F.E. | Important outputs | Complete framework with all elements |
10 min | Iterate + Ground + Validate | Critical/published content | Multiple refinements with fact-checking |
Reliability Toolkit
The Grounded Answer
The canonical pattern for preventing hallucinations. Use whenever accuracy matters.
Use ONLY these sources.
If information isn't present, write "Not in sources."
Cite inline as [S1], [S2], etc.
Keep all numbers exact.
Sources:
[S1] [content]
[S2] [content]
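Because citations follow the fixed `[S1]`, `[S2]` shape, you can verify mechanically that an answer never cites a source you did not supply. A minimal sketch (the helper name `unknown_citations` is illustrative):

```python
import re

# Sketch: flag citation tags in a grounded answer that were never provided.
# The helper name is illustrative.

def unknown_citations(answer, source_ids):
    """Return citation IDs (e.g. 'S3') used in the answer but absent from source_ids."""
    cited = set(re.findall(r"\[(S\d+)\]", answer))
    return sorted(cited - set(source_ids))

answer = "Deep work rose to 2.5 hours [S2], per a 90-minute block study [S1]."
```

This catches one common failure mode (invented sources); it does not verify that each cited claim actually appears in its source, which still needs a human or a second check.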
The Schema Enforcer
For consistent structure in outputs:
Return ONLY valid JSON matching this schema:
{
"field1": "string",
"field2": "number",
"field3": ["array", "of", "strings"]
}
Rules:
- Use null for missing values
- Empty results: return []
- No additional text or markdown
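A reply that claims to match the schema is worth verifying before you use it downstream. This sketch validates field names and types with only the standard library; a real project might reach for `jsonschema` or `pydantic` instead, and the helper name `validate_reply` is illustrative:

```python
import json

# Sketch: parse a model's reply and check it against the schema above.
# Stdlib-only; helper and constant names are illustrative.

EXPECTED_TYPES = {"field1": str, "field2": (int, float), "field3": list}

def validate_reply(raw):
    """Return (data, errors); data is None if the reply is not valid JSON."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, [f"not valid JSON: {exc}"]
    errors = [f"bad or missing field: {name}"
              for name, typ in EXPECTED_TYPES.items()
              if not isinstance(data.get(name), typ)]
    return data, errors
```

The "no additional text or markdown" rule exists precisely so that `json.loads` can consume the reply directly.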
The Confidence Check
For self-validation:
After your answer, add:
- Assumptions made: [list]
- Confidence: [0-100%] with reason
- Verify by: [2 specific methods]
Emergency Fixes
Problem | Instant Fix | Better Solution |
---|---|---|
Too generic | Add specific context | Include examples + constraints |
Wrong tone | “Write for [audience]” | Provide style example |
Making stuff up | “Only use provided info” | Add sources + require citations |
Wrong format | Show exact template | Provide schema + example |
Too verbose | “Maximum [N] words” | “Bullet points only” |
Shallow thinking | “Show your reasoning” | Use labeled reasoning |
Ambiguous request | Add clarify-then-answer | Number requirements explicitly |
Detailed Fix Explanations
Too Generic → Add Specific Context
- Instant Fix: “You are a [specific expert] with [specific experience]”
- Better Solution: Include role, audience, constraints, and examples
- Example: Instead of “Write about productivity,” use “You’re a productivity coach who’s helped 200+ remote workers. Write for startup founders about preventing burnout, using the phone battery metaphor, under 200 words.”
Wrong Tone → Provide Style Example
- Instant Fix: “Write for [audience] in [tone] style”
- Better Solution: Show exact tone with examples
- Example: “Match this tone: ‘Look, I’ve been there. The 2 AM panic when you realize you’ve been ‘productive’ for 12 hours but accomplished nothing meaningful.’ (Direct, empathetic, specific)”
Making Stuff Up → Add Sources + Citations
- Instant Fix: “Only use provided information”
- Better Solution: Provide sources and require inline citations
- Example: “Use ONLY these sources. Cite as [S1], [S2]. If information isn’t present, write ‘Not in sources.’ Sources: [S1] Study shows… [S2] Research indicates…”
Wrong Format → Provide Schema + Example
- Instant Fix: “Return as [format]”
- Better Solution: Show exact structure with example
- Example: “Format as JSON: {"key": "value", "array": ["item1", "item2"]}. Example: {"name": "John", "skills": ["Python", "SQL"]}”
Too Verbose → Set Clear Limits
- Instant Fix: “Maximum [N] words”
- Better Solution: Specify structure and constraints
- Example: “3 bullet points, each under 15 words, starting with action verbs”
Shallow Thinking → Use Labeled Reasoning
- Instant Fix: “Show your work step by step”
- Better Solution: Provide reasoning framework
- Example: “Show: Given → Goal → Approach → Steps → Answer → Check”
Ambiguous Request → Number Requirements
- Instant Fix: “Clarify what you need”
- Better Solution: Break down into numbered requirements
- Example: “Requirements: 1) Target audience: [specific], 2) Length: [exact], 3) Must include: [list], 4) Must avoid: [list], 5) Format: [structure]”
Quick Recovery Patterns
Wrong Direction:
Ignore above. Let me be more specific: [clearer request]
Right Idea, Wrong Execution:
Good direction, but adjust: [specific changes]
Format Drift:
Return EXACTLY this structure: [template]
Battle-Tested Templates
Decision Matrix
CONTEXT: [Situation]
OPTIONS: [A, B, C]
CONSTRAINTS: [Limitations]
Create decision matrix:
1. Generate 5 criteria relevant to this decision
2. Score each option 1-5 with one-sentence justification
3. Recommend with main tradeoff noted
Format as table with totals
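When the model returns the matrix, it is worth recomputing the totals yourself rather than trusting its arithmetic. A minimal sketch with made-up placeholder scores:

```python
# Sketch: recompute decision-matrix totals from the scores the model returned.
# The tools, criteria, and scores below are made-up placeholders.

scores = {
    "Tool A": {"ease of use": 4, "integrations": 3, "pricing": 5},
    "Tool B": {"ease of use": 5, "integrations": 4, "pricing": 2},
}

totals = {option: sum(criteria.values()) for option, criteria in scores.items()}
winner = max(totals, key=totals.get)
```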
Perfect Email Assistant
CONTEXT: [My role and relationship]
RECIPIENT: [Who, their personality/seniority]
GOAL: [One specific outcome after reading]
KEY POINTS: [Bullets of must-include info]
TONE: [Formal/Warm/Direct/etc.]
Draft concise email achieving the goal.
Maximum 150 words unless specified otherwise.
Code Generator with Validation
ROLE: Senior [language] developer
TASK: Implement [function/feature]
REQUIREMENTS:
- Include docstring
- Add input validation
- Handle edge cases
- Provide 2 usage examples
- Include 1 test case
- Note time/space complexity
If requirements unclear, ask first.
Grounded Analysis
SOURCES:
[S1] [data/facts]
[S2] [data/facts]
TASK: Analyze [topic] for [audience]
CONSTRAINTS:
- Use ONLY provided sources
- Cite everything as [S#]
- Say "Not in sources" if missing
FORMAT:
1. Three key findings [with citations]
2. Two gaps in data
3. One recommended action
Common Pitfalls to Avoid
The Fatal Five
- The Knowledge Assumption: Assuming context without providing it
- The Format Hope: Expecting structure without specifying
- The One-Shot Wonder: Not planning for iteration
- The Complexity Trap: Over-engineering simple requests
- The Trust Fall: Not requiring citations for facts
Additional Common Pitfalls
- The Vague Verb Trap: Using weak verbs like “help,” “improve,” “analyze”
- Fix: Use specific verbs: “compare,” “extract,” “generate,” “transform”
- Example: “Help with marketing” → “Generate 5 email subject lines for our Q4 campaign”
- The Context Dump: Overwhelming with irrelevant background
- Fix: Include only context that affects the output
- Example: Don’t include your company’s entire history when asking for a meeting agenda
- The Format Mismatch: Requesting one format but needing another
- Fix: Be honest about how you’ll use the output
- Example: If you need bullet points for a presentation, don’t ask for paragraphs
- The Constraint Confusion: Mixing requirements with preferences
- Fix: Clearly separate “must have” from “nice to have”
- Example: “Must be under 200 words” vs “Preferably include examples”
- The Feedback Loop Failure: Not learning from poor outputs
- Fix: Save good prompts, analyze what made them work
- Example: Keep a “prompt library” of your best performers
Safety and Settings
Privacy First
Never include:
- Passwords, API keys, tokens
- Personal identification (SSN, ID numbers)
- Proprietary code or data
- Private communications
Always:
- Mock sensitive data in examples
- Use placeholders for private info
- Review outputs before sharing
Model Settings Guide
Setting | Accuracy Tasks | Creative Tasks | Default |
---|---|---|---|
Temperature | 0.0-0.3 | 0.7-1.0 | 0.7 |
Max tokens | Ensure completion | Allow exploration | Model default |
Top-p | 0.1-0.5 | 0.8-1.0 | 1.0 |
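If you call a model from code, the table above can live as a simple lookup. The values mirror the ranges shown and are starting points to tune, not universal constants; the names `SETTINGS` and `settings_for` are illustrative:

```python
# Sketch of the settings table as a lookup; values mirror the ranges above
# and are starting points, not universal constants. Names are illustrative.

SETTINGS = {
    "accuracy": {"temperature": 0.2, "top_p": 0.3},
    "creative": {"temperature": 0.9, "top_p": 0.95},
    "default":  {"temperature": 0.7, "top_p": 1.0},
}

def settings_for(task_type):
    """Pick sampling settings for a task type, falling back to the defaults."""
    return SETTINGS.get(task_type, SETTINGS["default"])
```

Most chat-completion APIs accept `temperature` and `top_p` as request parameters, so a dict like this can be unpacked straight into the call.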
High-Stakes Domains
For medical/legal/financial tasks:
- Always include: “This is a draft for expert review”
- Require step-by-step reasoning
- Request confidence levels
- Verify with multiple approaches
The Meta-Prompt: Your Personal Coach
Review and improve my prompt:
Current prompt: [your prompt]
Goal: [what you want]
Issue: [what's not working]
Provide:
1. Diagnosis: What's unclear or missing
2. Improvements: Specific changes with reasoning
3. Revised version: Complete improved prompt
4. Expected impact: How each change helps
Meta-Prompt Real-World Example
Scenario: Your prompt isn’t getting the detailed analysis you need.
Current Prompt:
Analyze this data and give me insights.
[Data: 500 customer survey responses]
Meta-Prompt Application:
Review and improve my prompt:
Current prompt: "Analyze this data and give me insights. [Data: 500 customer survey responses]"
Goal: Get actionable insights for improving customer satisfaction
Issue: Getting generic, surface-level analysis
Provide:
1. Diagnosis: What's unclear or missing
2. Improvements: Specific changes with reasoning
3. Revised version: Complete improved prompt
4. Expected impact: How each change helps
Meta-Prompt Response:
1. DIAGNOSIS:
- No specific audience (who needs these insights?)
- No clear format (what does "insights" look like?)
- No constraints (how detailed? what timeframe?)
- No context about the business goals
2. IMPROVEMENTS:
- Add role: "You're a customer experience analyst"
- Specify format: "3 key findings, 2 recommendations, 1 risk"
- Add constraints: "Focus on actionable items, under 300 words"
- Include context: "For a SaaS startup with 10K users"
3. REVISED VERSION:
"You're a customer experience analyst helping a 10K-user SaaS startup improve satisfaction.
Analyze these 500 survey responses and provide:
- 3 key findings with specific data points
- 2 actionable recommendations with implementation steps
- 1 potential risk to watch
Format as bullet points, under 300 words total."
4. EXPECTED IMPACT:
- Role gives expert perspective
- Format ensures specific structure
- Constraints focus on actionable content
- Context enables relevant recommendations
Complete Workflows
Research Pipeline
1. EXPLORE: "Generate 5 research questions about [topic]"
↓
2. GATHER: "Extract key themes from: [paste sources]"
↓
3. STRUCTURE: "Create detailed outline from themes"
↓
4. DEVELOP: "Expand section [X] with examples and data"
↓
5. VALIDATE: "Add citations in [S#] format, flag unsupported claims"
Content Enhancement Pipeline
1. ANALYZE: "Extract 5 key points from: [draft]"
↓
2. AUDIENCE: "Rewrite for [specific audience]"
↓
3. HOOK: "Create compelling opening paragraph"
↓
4. OPTIONS: "Generate 3 title variations"
↓
5. POLISH: "Final edit for clarity and flow"
Summary
The gap between mediocre and exceptional AI outputs isn’t the model - it’s the prompting. This guide provides a comprehensive framework for prompt engineering excellence:
- C.A.F.E. Framework provides the foundation for every prompt
- Six Power Techniques handle specific challenges
- Reliability Toolkit ensures accuracy and consistency
- Battle-Tested Templates accelerate common tasks