Be clear and direct

Provide context that explains:
- What the task results will be used for
- What audience the output is meant for
- What workflow the task is part of
- The end goal, or what successful completion looks like
Context helps Claude make better decisions and produce more appropriate outputs.
Vague: "Help with the report"
Specific: "Generate a markdown report with three sections: Executive Summary, Key Findings, Recommendations"

Vague: "Process the data"
Specific: "Extract customer names and email addresses from the CSV file, remove duplicates, and save to JSON format"
Specificity eliminates ambiguity and reduces iteration cycles.
<workflow>
1. Extract data from source file
2. Transform to target format
3. Validate transformation
4. Save to output file
5. Verify output correctness
</workflow>
Sequential steps create clear expectations and reduce the chance Claude skips important operations.
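Sketched in Python, with hypothetical file formats (CSV in, JSON out), the five steps might map to one function each:

```python
import csv
import json
from pathlib import Path

def extract(source: str) -> list[dict]:
    """Step 1: read rows from the source CSV file."""
    with open(source, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    """Step 2: normalize keys to lowercase (a stand-in for the real mapping)."""
    return [{k.lower(): v for k, v in row.items()} for row in rows]

def validate(rows: list[dict]) -> None:
    """Step 3: fail loudly if the transformation produced nothing usable."""
    if not rows or any(not row for row in rows):
        raise ValueError("validation failed: empty result")

def save(rows: list[dict], target: str) -> None:
    """Step 4: write the transformed rows to the JSON output file."""
    Path(target).write_text(json.dumps(rows, indent=2))

def verify(target: str, expected_rows: int) -> None:
    """Step 5: re-read the output and confirm the row count survived."""
    if len(json.loads(Path(target).read_text())) != expected_rows:
        raise ValueError("output verification failed")
```

Because each step is explicit, a failure points at exactly one stage rather than a monolithic "process the data" blob.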
Problems:
- What counts as PII?
- What should replace PII?
- What format should the output be?
- What if no PII is found?
- Should product names be redacted?
<objective>
Anonymize customer feedback for quarterly review presentation.
</objective>
<quick_start>
<instructions>
1. Replace all customer names with "CUSTOMER_[ID]" (e.g., "Jane Doe" → "CUSTOMER_001")
2. Replace email addresses with "EMAIL_[ID]@example.com"
3. Redact phone numbers as "PHONE_[ID]"
4. If a message mentions a specific product (e.g., "AcmeCloud"), leave it intact
5. If no PII is found, copy the message verbatim
6. Output only the processed messages, separated by "---"
</instructions>
Data to process: {{FEEDBACK_DATA}}
</quick_start>
<success_criteria>
- All customer names replaced with IDs
- All emails and phones redacted
- Product names preserved
- Output format matches specification
</success_criteria>
Why this is better:
- States the purpose (quarterly review)
- Provides explicit step-by-step rules
- Defines the output format clearly
- Specifies edge cases (product names, no PII found)
- Defines success criteria
The unclear version leaves all these decisions to Claude, increasing the chance of misalignment with expectations.
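A minimal Python sketch of these redaction rules, assuming customer names are supplied as a list (robust name detection would need NER, which is out of scope here):

```python
import re

def anonymize(messages: list[str], known_names: list[str]) -> str:
    """Apply the redaction rules: names -> CUSTOMER_[ID], emails ->
    EMAIL_[ID]@example.com, phones -> PHONE_[ID]; product names and
    PII-free messages pass through verbatim."""
    name_ids: dict[str, str] = {}
    email_ids: dict[str, str] = {}
    phone_ids: dict[str, str] = {}

    def assign(table: dict, key: str, prefix: str) -> str:
        # Reuse the same ID when the same value reappears.
        if key not in table:
            table[key] = f"{prefix}_{len(table) + 1:03d}"
        return table[key]

    out = []
    for msg in messages:
        for name in known_names:
            if name in msg:
                msg = msg.replace(name, assign(name_ids, name, "CUSTOMER"))
        msg = re.sub(
            r"[\w.+-]+@[\w-]+\.[\w.]+",
            lambda m: assign(email_ids, m.group(), "EMAIL") + "@example.com",
            msg,
        )
        msg = re.sub(
            r"\+?\d[\d\s().-]{7,}\d",
            lambda m: assign(phone_ids, m.group(), "PHONE"),
            msg,
        )
        out.append(msg)  # messages without PII are copied verbatim
    return "\n---\n".join(out)
```

Note how every edge case the instructions spell out (repeat IDs, product names, no PII) corresponds to one concrete branch here; an unclear prompt leaves each of those branches to chance.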
feat(auth): add login endpoint and token validation middleware
</output>
</example>
<example number="2">
<input>Fixed bug where dates displayed incorrectly in reports</input>
<output>
fix(reports): use UTC timestamps consistently across report generation
</output>
</example>
Follow this style: type(scope): brief description, then detailed explanation.
</commit_message_format>
Claude learns patterns from examples more reliably than from descriptions.
❌ "Should probably..." - Unclear obligation
✅ "Must..." or "May optionally..." - Clear obligation level

❌ "Generally..." - When are exceptions allowed?
✅ "Always... except when..." - Clear rule with explicit exceptions

❌ "Consider..." - Should Claude always do this or only sometimes?
✅ "If X, then Y" or "Always..." - Clear conditions
✅ Clear:
<validation>
Always validate output before proceeding:
```bash
python scripts/validate.py output_dir/
```
If validation fails, fix errors and re-validate. Only proceed when validation passes with zero errors.
</validation>
</example>
</avoid_ambiguity>
<define_edge_cases>
<principle>
Anticipate edge cases and define how to handle them. Don't leave Claude guessing.
</principle>
<without_edge_cases>
```xml
<quick_start>
Extract email addresses from the text file and save to a JSON array.
</quick_start>
```

Questions left unanswered:
- What if no emails are found?
- What if the same email appears multiple times?
- What if emails are malformed?
- What JSON format exactly?
<quick_start>
Extract email addresses from the text file and save to a JSON array.
<edge_cases>
- **No emails found**: Save empty array `[]`
- **Duplicate emails**: Keep only unique emails
- **Malformed emails**: Skip invalid formats, log to stderr
- **Output format**: Array of strings, one email per element
</edge_cases>
<example_output>
```json
[
"user1@example.com",
"user2@example.com"
]
```
</example_output>
</quick_start>
<output_format>
Generate a markdown report with this exact structure:
```markdown
# Analysis Report: [Title]
## Executive Summary
[1-2 paragraphs summarizing key findings]
## Key Findings
- Finding 1 with supporting data
- Finding 2 with supporting data
- Finding 3 with supporting data
## Recommendations
1. Specific actionable recommendation
2. Specific actionable recommendation
## Appendix
[Raw data and detailed calculations]
```

Requirements:
- Use exactly these section headings
- Executive summary must be 1-2 paragraphs
- List 3-5 key findings
- Provide 2-4 recommendations
- Include appendix with source data
</output_format>
</output_format_specification>
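One way to make a format specification this precise is to confirm it is mechanically checkable. A sketch (function name and limits enforcement are illustrative) that renders the required structure and rejects inputs outside the stated counts:

```python
def build_report(title: str, summary: str, findings: list[str],
                 recommendations: list[str], appendix: str) -> str:
    """Render the exact section structure the specification requires,
    raising ValueError when counts fall outside the stated limits."""
    if not 3 <= len(findings) <= 5:
        raise ValueError("need 3-5 key findings")
    if not 2 <= len(recommendations) <= 4:
        raise ValueError("need 2-4 recommendations")
    lines = [f"# Analysis Report: {title}", "", "## Executive Summary", summary, ""]
    lines += ["## Key Findings"] + [f"- {f}" for f in findings] + [""]
    lines += ["## Recommendations"]
    lines += [f"{i}. {r}" for i, r in enumerate(recommendations, 1)]
    lines += ["", "## Appendix", appendix]
    return "\n".join(lines)
```

If a format cannot be rendered (or validated) by a short function like this, it is probably still ambiguous.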
<decision_criteria>
<principle>
When Claude must make decisions, provide clear criteria.
</principle>
<no_criteria>
```xml
<workflow>
Analyze the data and decide which visualization to use.
</workflow>
```

Problem: What factors should guide this decision?
</no_criteria>
<workflow>
Analyze the data and select appropriate visualization:
<decision_criteria>
**Use bar chart when**:
- Comparing quantities across categories
- Fewer than 10 categories
- Exact values matter
**Use line chart when**:
- Showing trends over time
- Continuous data
- Pattern recognition matters more than exact values
**Use scatter plot when**:
- Showing relationship between two variables
- Looking for correlations
- Individual data points matter
</decision_criteria>
</workflow>
Benefits: Claude has objective criteria for making the decision rather than guessing.
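Criteria written this way translate directly into branching logic. A sketch with simplified inputs standing in for real data inspection; the fallback for 10+ non-temporal categories is an assumption, since the criteria above do not cover that case:

```python
def choose_visualization(is_time_series: bool, n_categories: int,
                         comparing_two_variables: bool) -> str:
    """Apply the decision criteria in priority order."""
    if comparing_two_variables:
        return "scatter"  # relationship / correlation between two variables
    if is_time_series:
        return "line"     # trends over continuous time
    if n_categories < 10:
        return "bar"      # few categories, exact values matter
    return "line"         # assumption: too many categories for readable bars
```

The test of good decision criteria: if they cannot be turned into an unambiguous function like this, Claude will have to guess at the gaps.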
Problems:
- Are all three content types required?
- Are visualizations optional or required?
- How long is "too long"?
<requirements>
<must_have>
- Financial data (revenue, costs, profit margins)
- Customer metrics (acquisition, retention, lifetime value)
- Market analysis (competition, trends, opportunities)
- Maximum 5 pages
</must_have>
<nice_to_have>
- Charts and visualizations
- Industry benchmarks
- Future projections
</nice_to_have>
<must_not>
- Include confidential customer names
- Exceed 5 pages
- Use technical jargon without definitions
</must_not>
</requirements>
Benefits: Clear priorities and constraints prevent misalignment.
Problem: When is this task complete? What defines success?
<objective>
Process the CSV file and generate a summary report.
</objective>
<success_criteria>
- All rows in CSV successfully parsed
- No data validation errors
- Report generated with all required sections
- Report saved to output/report.md
- Output file is valid markdown
- Process completes without errors
</success_criteria>
Benefits: Clear completion criteria eliminate ambiguity about when the task is done.
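A sketch of a processor whose failure modes line up with these success criteria; the section names and row-count summary are illustrative assumptions:

```python
import csv
import io

def process_csv(csv_text: str, required_sections: list[str]) -> str:
    """Parse every row, then emit a markdown summary containing each
    required section. Raises ValueError on parse or validation problems,
    so each success criterion above is mechanically checkable."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    if not rows:
        raise ValueError("no rows parsed")
    # DictReader marks ragged rows with a None key or None values.
    if any(None in row or None in row.values() for row in rows):
        raise ValueError("data validation error: ragged row")
    report = ["# Summary Report", ""]
    for section in required_sections:
        report.append(f"## {section}")
        report.append(f"{len(rows)} rows processed.")
        report.append("")
    return "\n".join(report)
```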
If a human with minimal context struggles, Claude will too.
✅ Clear:
<quick_start>
<data_cleaning>
1. Remove rows where required fields (name, email, date) are empty
2. Standardize date format to YYYY-MM-DD
3. Remove duplicate entries based on email address
4. Validate email format (must contain @ and domain)
5. Save cleaned data to output/cleaned_data.csv
</data_cleaning>
<success_criteria>
- No empty required fields
- All dates in YYYY-MM-DD format
- No duplicate emails
- All emails valid format
- Output file created successfully
</success_criteria>
</quick_start>
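The five cleaning steps above could be sketched as follows (assuming, for illustration, that input dates arrive either as YYYY-MM-DD already or as MM/DD/YYYY):

```python
import re
from datetime import datetime

REQUIRED = ("name", "email", "date")

def clean_rows(rows: list[dict]) -> list[dict]:
    """Apply cleaning steps 1-4; the caller handles step 5 (saving)."""
    seen_emails = set()
    cleaned = []
    for row in rows:
        # 1. drop rows where any required field is empty
        if any(not row.get(field) for field in REQUIRED):
            continue
        # 4. validate email format (must contain @ and a dotted domain)
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", row["email"]):
            continue
        # 3. remove duplicate entries based on email address
        if row["email"] in seen_emails:
            continue
        seen_emails.add(row["email"])
        # 2. standardize date format to YYYY-MM-DD (assumes MM/DD/YYYY input)
        date = row["date"]
        if "/" in date:
            date = datetime.strptime(date, "%m/%d/%Y").strftime("%Y-%m-%d")
        cleaned.append({**row, "date": date})
    return cleaned
```

Each success criterion maps to one branch of the function, so "done" is never a judgment call.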
✅ Clear:
<quick_start>
<function_specification>
Write a Python function with this signature:
```python
def process_user_input(raw_input: str) -> dict:
    """
    Validate and parse user input.

    Args:
        raw_input: Raw string from user (format: "name:email:age")

    Returns:
        dict with keys: name (str), email (str), age (int)

    Raises:
        ValueError: If input format is invalid
    """
```

Requirements:
- Split input on the colon delimiter
- Validate that the email contains @ and a domain
- Convert age to an integer, raising ValueError if not numeric
- Return a dictionary with the specified keys
- Include a docstring and type hints
</function_specification>
</quick_start>
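A sketch of an implementation satisfying this specification; it is one reasonable reading of the requirements, not the only one:

```python
def process_user_input(raw_input: str) -> dict:
    """
    Validate and parse user input.

    Args:
        raw_input: Raw string from user (format: "name:email:age")

    Returns:
        dict with keys: name (str), email (str), age (int)

    Raises:
        ValueError: If input format is invalid
    """
    parts = raw_input.split(":")
    if len(parts) != 3:
        raise ValueError("expected exactly 'name:email:age'")
    name, email, age_text = (p.strip() for p in parts)
    # Email must contain @ followed by a dotted domain.
    if "@" not in email or "." not in email.split("@")[-1]:
        raise ValueError(f"invalid email: {email!r}")
    if not age_text.isdigit():
        raise ValueError(f"age must be numeric, got {age_text!r}")
    return {"name": name, "email": email, "age": int(age_text)}
```

Because the specification pinned down the signature, delimiter, validation rules, and error behavior, this implementation required no guessing.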