
compound-docs

Plugin: core-standards
Category: Knowledge Capture
Command: /compound-docs


compound-docs Skill

Purpose: Automatically document solved problems to build searchable institutional knowledge with category-based organization (enum-validated problem types).

Overview

This skill captures problem solutions immediately after confirmation, creating structured documentation that serves as a searchable knowledge base for future sessions.

Organization: Single-file architecture - each problem documented as one markdown file in its symptom category directory (e.g., docs/solutions/performance-issues/n-plus-one-briefs.md). Files use YAML frontmatter for metadata and searchability.


7-Step Process

Step 1: Detect Confirmation

Auto-invoke after phrases:

  • "that worked"
  • "it's fixed"
  • "working now"
  • "problem solved"
  • "that did it"

OR manual: /doc-fix command
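The phrase check can be sketched as a case-insensitive regex match. This is illustrative only; the MESSAGE variable is hypothetical and stands in for the user's latest message:

```shell
# MESSAGE is hypothetical; it stands in for the user's latest message.
MESSAGE="That worked! The N+1 query is fixed."

# Case-insensitive match against the trigger phrases above.
if printf '%s' "$MESSAGE" | grep -qiE "that worked|it's fixed|working now|problem solved|that did it"; then
  TRIGGERED=true
else
  TRIGGERED=false
fi
```

A match only starts the skill; the non-trivial filter below still applies before anything is documented.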

Non-trivial problems only:

  • Multiple investigation attempts needed
  • Tricky debugging that took time
  • Non-obvious solution
  • Future sessions would benefit

Skip documentation for:

  • Simple typos
  • Obvious syntax errors
  • Trivial fixes immediately corrected

Step 2: Gather Context

Extract from conversation history:

Required information:

  • Module name: Which CORA module had the problem
  • Symptom: Observable error/behavior (exact error messages)
  • Investigation attempts: What didn't work and why
  • Root cause: Technical explanation of actual problem
  • Solution: What fixed it (code/config changes)
  • Prevention: How to avoid in future

Environment details:

  • Rails version
  • Stage (0-6 or post-implementation)
  • OS version
  • File/line references

BLOCKING REQUIREMENT: If critical context is missing (module name, exact error, stage, or resolution steps), ask user and WAIT for response before proceeding to Step 3:

I need a few details to document this properly:

1. Which module had this issue? [ModuleName]
2. What was the exact error message or symptom?
3. What stage were you in? (0-6 or post-implementation)

[Continue after user provides details]

Step 3: Check Existing Docs

Search docs/solutions/ for similar issues:

# Search by error message keywords
grep -r "exact error phrase" docs/solutions/

# Search by symptom category
ls docs/solutions/[category]/

IF similar issue found:

THEN present decision options:

Found similar issue: docs/solutions/[path]

What's next?
1. Create new doc with cross-reference (recommended)
2. Update existing doc (only if same root cause)
3. Other

Choose (1-3): _

WAIT for user response, then execute chosen action.

ELSE (no similar issue found):

Proceed directly to Step 4 (no user interaction needed).

Step 4: Generate Filename

Format: [sanitized-symptom]-[module]-[YYYYMMDD].md

Sanitization rules:

  • Lowercase
  • Replace spaces with hyphens
  • Remove special characters except hyphens
  • Truncate to reasonable length (< 80 chars)

Examples:

  • missing-include-BriefSystem-20251110.md
  • parameter-not-saving-state-EmailProcessing-20251110.md
  • webview-crash-on-resize-Assistant-20251110.md
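The rules above can be sketched in shell. sanitize_symptom is a hypothetical helper; note that only the symptom is lowercased, since the examples keep the module name's original casing:

```shell
# Hypothetical helper implementing the sanitization rules above.
sanitize_symptom() {
  # lowercase, spaces to hyphens, strip all but [a-z0-9-], truncate to 80 chars
  echo "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr -cd 'a-z0-9\n-' | cut -c1-80
}

SYMPTOM="Missing include"
MODULE="BriefSystem"   # module casing is preserved
STAMP="20251110"       # YYYYMMDD
FILENAME="$(sanitize_symptom "$SYMPTOM")-${MODULE}-${STAMP}.md"
# FILENAME: missing-include-BriefSystem-20251110.md
```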

Step 5: Validate YAML Schema

CRITICAL: All docs require validated YAML frontmatter with enum validation.

Validate against schema: Load schema.yaml and classify the problem against the enum values defined in yaml-schema.md. Ensure all required fields are present and match allowed values exactly.

BLOCK if validation fails:

❌ YAML validation failed

Errors:
- problem_type: must be one of schema enums, got "compilation_error"
- severity: must be one of [critical, high, medium, low], got "invalid"
- symptoms: must be array with 1-5 items, got string

Please provide corrected values.

GATE ENFORCEMENT: Do NOT proceed to Step 6 (Create Documentation) until YAML frontmatter passes all validation rules defined in schema.yaml.
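The enum gate can be sketched as a simple membership check. The values below are illustrative placeholders; the authoritative enums live in schema.yaml:

```shell
# Illustrative allow-list; real values come from schema.yaml.
ALLOWED_SEVERITIES="critical high medium low"
SEVERITY="invalid"   # example of a value that must be rejected

VALID=true
case " $ALLOWED_SEVERITIES " in
  *" $SEVERITY "*)
    # value is in the allow-list
    ;;
  *)
    VALID=false
    echo "severity: must be one of [critical, high, medium, low], got \"$SEVERITY\""
    ;;
esac
```

Repeat the same check for problem_type, component, and every other enum field before moving on.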

Step 6: Create Documentation

Determine category from problem_type: Use the category mapping defined in yaml-schema.md (lines 49-61).

Create documentation file:

PROBLEM_TYPE="[from validated YAML]"
CATEGORY="[mapped from problem_type]"
FILENAME="[generated-filename].md"
DOC_PATH="docs/solutions/${CATEGORY}/${FILENAME}"

# Create directory if needed
mkdir -p "docs/solutions/${CATEGORY}"

# Write documentation using template from assets/resolution-template.md
# (Content populated with Step 2 context and validated YAML frontmatter)

Result:

  • Single file in the category directory
  • Enum validation ensures consistent categorization

Create documentation: Populate the structure from assets/resolution-template.md with context gathered in Step 2 and validated YAML frontmatter from Step 5.

Step 7: Cross-Reference & Automated Pattern Promotion

If similar issues found in Step 3:

Update existing doc:

# Add a Related Issues link to the similar doc
echo "- See also: [$FILENAME]($DOC_PATH)" >> "$SIMILAR_DOC"

Update new doc: Already includes cross-reference from Step 6.

Automated Pattern Promotion (3+ Threshold):

  1. Search for similar root_cause + component:

# Count occurrences of same root cause in same component
Grep: pattern="root_cause: {from_current_doc}" path=docs/solutions/ output_mode=files_with_matches
Grep: pattern="component: {from_current_doc}" path=docs/solutions/ output_mode=files_with_matches
# Combine results: files matching BOTH criteria

  2. Count matching files:

# Get intersection of both Grep results
MATCHING_FILES=$(comm -12 <(sort root_cause_matches.txt) <(sort component_matches.txt))
COUNT=$(echo "$MATCHING_FILES" | wc -l)

  3. Apply thresholds:

  • If COUNT >= 3 AND severity is high or critical: AUTO-PROMOTE to docs/solutions/patterns/cora-critical-patterns.md
  • If COUNT == 2: SUGGEST in decision menu (Option 2)
  • If COUNT == 1: standard cross-reference only

Auto-promotion logic:

# If threshold met (3+ occurrences, high/critical severity)
if [[ $COUNT -ge 3 ]] && [[ "$SEVERITY" == "high" || "$SEVERITY" == "critical" ]]; then
  # Extract pattern name from root_cause
  PATTERN_NAME=$(echo "$ROOT_CAUSE" | sed 's/_/ /g' | awk '{for(i=1;i<=NF;i++) $i=toupper(substr($i,1,1)) tolower(substr($i,2));}1')

  # Read critical patterns file to get next number
  if [ -f docs/solutions/patterns/cora-critical-patterns.md ]; then
    Read: docs/solutions/patterns/cora-critical-patterns.md
    NEXT_NUM=$(grep -c "^### Pattern" docs/solutions/patterns/cora-critical-patterns.md)
    NEXT_NUM=$((NEXT_NUM + 1))
  else
    NEXT_NUM=1
  fi

  # Append pattern using ❌ WRONG vs ✅ CORRECT format
  cat >> docs/solutions/patterns/cora-critical-patterns.md << EOF

### Pattern $NEXT_NUM: $PATTERN_NAME

**Problem**: {extract symptom from current solution}

❌ **WRONG**:
\`\`\`
{extract code example showing the mistake from solution}
\`\`\`

✅ **CORRECT**:
\`\`\`
{extract code example showing the fix from solution}
\`\`\`

**Why**: {extract root cause explanation}

**When to watch for**: {extract prevention guidance}

**Occurrences**: $COUNT times across modules
- [{filename1}]({path1})
- [{filename2}]({path2})
- [{current_filename}]({current_path})

**Last updated**: $(date -u +"%Y-%m-%d")

EOF

  # Log to metrics/pattern-promotions.json
  Read: metrics/pattern-promotions.json

  # Append promotion entry
  ENTRY=$(cat <<EOFENTRY
{
  "pattern_name": "$PATTERN_NAME",
  "promoted_at": "$(date -u +"%Y-%m-%dT%H:%M:%SZ")",
  "occurrences": $COUNT,
  "severity": "$SEVERITY",
  "auto_promoted": true,
  "docs": [
$(echo "$MATCHING_FILES" | sed 's/^/    "/;s/$/",/' | sed '$ s/,$//')
  ]
}
EOFENTRY
)

  jq --argjson entry "$ENTRY" '.promotions += [$entry]' metrics/pattern-promotions.json > metrics/pattern-promotions.json.tmp
  mv metrics/pattern-promotions.json.tmp metrics/pattern-promotions.json

  # Notify user
  echo "
  ✓ Pattern auto-promoted: $PATTERN_NAME (3+ occurrences)

  Added to Required Reading: docs/solutions/patterns/cora-critical-patterns.md
  Pattern #$NEXT_NUM will be visible to all agents before code generation.

  This pattern occurred $COUNT times:
  $(echo "$MATCHING_FILES" | sed 's/^/  - /')
  "
fi

Manual Promotion (Option 2 in Decision Menu):

When user selects Option 2, use same process but with auto_promoted: false in metrics.

Template for critical pattern addition:

Use the template from assets/critical-pattern-template.md to structure pattern entries. Number them sequentially based on existing patterns in docs/solutions/patterns/cora-critical-patterns.md.


Decision Menu After Capture

After successful documentation, present options and WAIT for user response:

✓ Solution documented

File created:
- docs/solutions/[category]/[filename].md

What's next?
1. Continue workflow (recommended)
2. Add to Required Reading - Promote to critical patterns (cora-critical-patterns.md)
3. Link related issues - Connect to similar problems
4. Add to existing skill - Add to a learning skill (e.g., hotwire-native)
5. Create new skill - Extract into new learning skill
6. View documentation - See what was captured
7. Other
8. Propose toolkit update - Flag this as a procedure improvement for the toolkit

Handle responses:

Option 1: Continue workflow

  • Return to calling skill/workflow
  • Documentation is complete

Option 2: Add to Required Reading ⭐ PRIMARY PATH FOR CRITICAL PATTERNS

User selects this when:

  • The system made this mistake multiple times across different modules
  • The solution is non-obvious but must be followed every time
  • It is a foundational requirement (Rails, Rails API, threading, etc.)

Action:

  1. Extract pattern from the documentation
  2. Format as ❌ WRONG vs ✅ CORRECT with code examples
  3. Add to docs/solutions/patterns/cora-critical-patterns.md
  4. Add cross-reference back to this doc
  5. Confirm: "✓ Added to Required Reading. All subagents will see this pattern before code generation."

Option 3: Link related issues

  • Prompt: "Which doc to link? (provide filename or describe)"
  • Search docs/solutions/ for the doc
  • Add cross-reference to both docs
  • Confirm: "✓ Cross-reference added"

Option 4: Add to existing skill

User selects this when the documented solution relates to an existing learning skill:

Action:

  1. Prompt: "Which skill? (hotwire-native, etc.)"
  2. Determine which reference file to update (resources.md, patterns.md, or examples.md)
  3. Add link and brief description to the appropriate section
  4. Confirm: "✓ Added to [skill-name] skill in [file]"

Example: For the Hotwire Native Tailwind variants solution:

  • Add to hotwire-native/references/resources.md under "CORA-Specific Resources"
  • Add to hotwire-native/references/examples.md with a link to the solution doc

Option 5: Create new skill

User selects this when the solution represents the start of a new learning domain:

Action:

  1. Prompt: "What should the new skill be called? (e.g., stripe-billing, email-processing)"
  2. Run python3 .claude/skills/vt-c-skill-creator/scripts/init_skill.py [skill-name]
  3. Create initial reference files with this solution as the first example
  4. Confirm: "✓ Created new [skill-name] skill with this solution as first example"

Option 6: View documentation

  • Display the created documentation
  • Present decision menu again

Option 7: Other

  • Ask what they'd like to do

Option 8: Propose toolkit update

User selects this when the documented solution reveals a gap in toolkit procedures:

Action:

  1. Prompt: "Which toolkit skill should this improve?" Suggest based on problem_type and component:
     • ui_bug / ux_defect → pd-5-specs (missing quality criteria in specs)
     • a11y_violation → pd-5-specs or scaffold (missing a11y patterns)
     • workflow_issue → relevant workflow skill (e.g., 3-build, 4-review)
     • best_practice → scaffold (critical-patterns template) or relevant skill
  2. Create the docs/toolkit-proposals/ directory if it doesn't exist
  3. Generate a procedure-improvement proposal file:

mkdir -p "docs/toolkit-proposals"
# Filename: YYYY-MM-DD-HHmm-[target-skill].md

  4. Extract from the solution doc:
     • The defect category and root cause
     • The prevention guidance
     • The proposed checklist item or criteria
  5. Write the proposal with standardized YAML frontmatter:
---
type: procedure-improvement
date: YYYY-MM-DD
source: [project-name]
source_path: [path to this solution doc]
target_skill: [selected skill name]
severity: [from solution doc severity]
category: [missing-criteria/missing-step/wrong-default/missing-check/new-pattern]
toolkit_proposal_status: proposed
tags: [from solution doc tags]
---
6. Confirm: "Toolkit proposal created in docs/toolkit-proposals/. Run /vt-c-toolkit-review in the toolkit repo to process."
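The proposal filename can be generated directly from the timestamp format above. TARGET_SKILL is hypothetical; it holds the user's answer from step 1:

```shell
# TARGET_SKILL is hypothetical; it holds the user's answer from step 1.
TARGET_SKILL="pd-5-specs"

# UTC timestamp in the YYYY-MM-DD-HHmm format the step requires
STAMP=$(date -u +"%Y-%m-%d-%H%M")
PROPOSAL_PATH="docs/toolkit-proposals/${STAMP}-${TARGET_SKILL}.md"
```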


Integration Points

Invoked by:

  • /vt-c-compound command (primary interface)
  • Manual invocation in conversation after a solution is confirmed
  • Detection of confirmation phrases like "that worked", "it's fixed", etc.

Invokes: None (terminal skill; does not delegate to other skills)

Handoff expectations: All context needed for documentation should be present in conversation history before invocation.


Success Criteria

Documentation is successful when ALL of the following are true:

  • ✅ YAML frontmatter validated (all required fields, correct formats)
  • ✅ File created in docs/solutions/[category]/[filename].md
  • ✅ Enum values match schema.yaml exactly
  • ✅ Code examples included in solution section
  • ✅ Cross-references added if related issues found
  • ✅ User presented with decision menu and action confirmed


Error Handling

Missing context:

  • Ask user for missing details
  • Don't proceed until critical info provided

YAML validation failure:

  • Show specific errors
  • Present retry with corrected values
  • BLOCK until valid

Similar issue ambiguity:

  • Present multiple matches
  • Let user choose: new doc, update existing, or link as duplicate

Module not in CORA-MODULES.md:

  • Warn but don't block
  • Proceed with documentation
  • Suggest: "Add [Module] to CORA-MODULES.md if not there"
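The warn-but-don't-block behavior can be sketched as follows; check_module_listed is a hypothetical helper:

```shell
# Hypothetical helper: warn when a module is missing from CORA-MODULES.md,
# but always return success so documentation proceeds.
check_module_listed() {
  module="$1"
  modules_file="$2"
  if [ -f "$modules_file" ] && ! grep -q "$module" "$modules_file"; then
    echo "Warning: add $module to CORA-MODULES.md if not there"
  fi
  return 0
}
```

Because the function always returns 0, a missing module produces only a warning and never blocks documentation.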

Execution Guidelines

MUST do:

  • Validate YAML frontmatter (BLOCK if invalid per Step 5 validation gate)
  • Extract exact error messages from the conversation
  • Include code examples in the solution section
  • Create directories before writing files (mkdir -p)
  • Ask user and WAIT if critical context is missing

MUST NOT do:

  • Skip YAML validation (validation gate is blocking)
  • Use vague descriptions (not searchable)
  • Omit code examples or cross-references


Quality Guidelines

Good documentation has:

  • ✅ Exact error messages (copy-paste from output)
  • ✅ Specific file:line references
  • ✅ Observable symptoms (what you saw, not interpretations)
  • ✅ Failed attempts documented (helps avoid wrong paths)
  • ✅ Technical explanation (not just "what" but "why")
  • ✅ Code examples (before/after if applicable)
  • ✅ Prevention guidance (how to catch early)
  • ✅ Cross-references (related issues)

Avoid:

  • ❌ Vague descriptions ("something was wrong")
  • ❌ Missing technical details ("fixed the code")
  • ❌ No context (which version? which file?)
  • ❌ Just code dumps (explain why it works)
  • ❌ No prevention guidance
  • ❌ No cross-references

Example Scenario

User: "That worked! The N+1 query is fixed."

Skill activates:

  1. Detect confirmation: "That worked!" triggers auto-invoke
  2. Gather context:
     • Module: Brief System
     • Symptom: Brief generation taking >5 seconds, N+1 query when loading email threads
     • Failed attempts: Added pagination (didn't help), checked background job performance
     • Solution: Added eager loading with includes(:emails) on Brief model
     • Root cause: Missing eager loading causing a separate database query per email thread
  3. Check existing: No similar issue found
  4. Generate filename: n-plus-one-brief-generation-BriefSystem-20251110.md
  5. Validate YAML:
    module: Brief System
    date: 2025-11-10
    problem_type: performance_issue
    component: rails_model
    symptoms:
      - "N+1 query when loading email threads"
      - "Brief generation taking >5 seconds"
    root_cause: missing_include
    severity: high
    tags: [n-plus-one, eager-loading, performance]
    
    ✅ Valid
  6. Create documentation: docs/solutions/performance-issues/n-plus-one-brief-generation-BriefSystem-20251110.md
  7. Cross-reference: None needed (no similar issues)

Output:

✓ Solution documented

File created:
- docs/solutions/performance-issues/n-plus-one-brief-generation-BriefSystem-20251110.md

What's next?
1. Continue workflow (recommended)
2. Add to Required Reading - Promote to critical patterns (cora-critical-patterns.md)
3. Link related issues - Connect to similar problems
4. Add to existing skill - Add to a learning skill (e.g., hotwire-native)
5. Create new skill - Extract into new learning skill
6. View documentation - See what was captured
7. Other
8. Propose toolkit update - Flag this as a procedure improvement for the toolkit

Future Enhancements

Not in Phase 7 scope, but potential:

  • Search by date range
  • Filter by severity
  • Tag-based search interface
  • Metrics (most common issues, resolution time)
  • Export to shareable format (community knowledge sharing)
  • Import community solutions

Open Brain Capture (Optional)

After writing the solution document, if the capture_thought MCP tool is available, capture the problem-solution pair to Open Brain for semantic search.

When to capture: After every finalized solution document.

How:

  1. Check if the capture_thought tool is available. If not: skip silently.
  2. If the document content is empty: skip capture.
  3. Call capture_thought with:

thought: "Problem: {symptom description}. Root cause: {identified cause}. Fix: {solution applied}. Prevention: {how to avoid in future}."

  4. On timeout or error: log a debug message and continue. Never fail the skill.

This step is always last — it never interrupts the compound-docs workflow.