claudemd-evolve

Plugin: core-standards
Category: Other
Command: /claudemd-evolve


CLAUDE.md Lifecycle Management

Keeps CLAUDE.md lean and current through four subcommands:

| Subcommand | Purpose |
|------------|---------|
| audit      | Measure tokens per section, propose condensations |
| collect    | Scan journals/reviews for uncovered patterns |
| propose    | Review and apply pending proposals from collect |
| prune      | Identify stale or redundant rules for removal |

Step 0: Locate CLAUDE.md Files

  1. Set GLOBAL_CLAUDEMD = ~/.claude/CLAUDE.md
  2. Set PROJECT_CLAUDEMD = ./CLAUDE.md (current working directory)
  3. Read both files. If either does not exist, note it and continue with the one that exists.
  4. If neither exists, report "No CLAUDE.md files found" and exit.

Step 1: Argument Dispatch

Read the argument passed to the skill:

  • audit → go to Audit
  • collect → go to Collect
  • propose → go to Propose
  • prune → go to Prune
  • No argument or unrecognized → display this help:
/claudemd-evolve <subcommand>

Subcommands:
  audit    — Measure token usage per section, propose condensations
  collect  — Scan journals and reviews for recurring patterns
  propose  — Review and apply pending proposals from collect
  prune    — Identify stale or redundant rules for removal

Examples:
  /claudemd-evolve audit
  /claudemd-evolve collect
  /claudemd-evolve collect --days 60
  /claudemd-evolve propose
  /claudemd-evolve prune

Audit

Measures CLAUDE.md token usage and proposes condensations for sections that can be made more concise without losing rule coverage.

Audit Step 1: Token Measurement

  1. Read the global CLAUDE.md file
  2. Split content by ## headings into sections (each ## starts a new section)
  3. For each section:
     a. Write the section content to a temp file
     b. Run wc -c < tempfile to get the character count
     c. Compute estimated tokens: chars ÷ 4
  4. Display per-section breakdown:
CLAUDE.md Token Audit: ~/.claude/CLAUDE.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Section                              Tokens
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
## Part 1: Universal Standards          450
## Part 2: Engineering Standards        320
## Part 3: Git Workflow                 280
## Part 4: Workflow Commands             60
## Part 5: Deployed Skills              150
## Before Every Response                 40
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total                                 1,300
Budget                                2,500
Status                          WITHIN BUDGET
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  5. If a project CLAUDE.md exists, repeat for that file (no fixed budget — report only)
  6. EC-4: If the total is under 2,500 tokens → report "Within budget" and skip condensation proposals. Ask the user whether to audit the project CLAUDE.md instead, or exit.
  7. If over budget → proceed to Audit Step 2.
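The measurement above (split on `##` headings, estimate tokens as characters ÷ 4, compare against the 2,500-token budget) can be sketched in Python. `audit_sections` and `within_budget` are hypothetical helper names for illustration, not part of the skill:

```python
from pathlib import Path

BUDGET = 2500  # global CLAUDE.md token budget from the spec

def audit_sections(text: str) -> dict[str, int]:
    """Split text on '## ' headings; estimate tokens per section as chars // 4."""
    sections: dict[str, int] = {}
    current = "(preamble)"
    buf: list[str] = []
    for line in text.splitlines(keepends=True):
        if line.startswith("## "):
            if buf:  # close out the previous section
                sections[current] = len("".join(buf)) // 4
            current = line.strip()
            buf = []
        buf.append(line)
    if buf:
        sections[current] = len("".join(buf)) // 4
    return sections

def within_budget(sections: dict[str, int]) -> bool:
    return sum(sections.values()) <= BUDGET
```

This mirrors the `wc -c` / divide-by-4 estimate without shelling out; the per-section counts feed directly into the breakdown table shown above.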

Audit Step 2: Condensation Candidate Identification

For each section over 100 tokens, analyze whether paragraph prose can become concise imperative directives:

  1. Condensable: Section has 3+ sentence paragraphs explaining rules that can be reduced to single-line directives without losing coverage.
     - Generate a proposed condensed version
     - Calculate the token savings

  2. EC-1 — Irreducible: A rule cannot be shortened without losing meaning. Mark it as "irreducible" in the report. Do not propose condensation.

  3. EC-2 — Overlapping: Two rules overlap or contradict. Flag for manual resolution with both rule texts shown. Do not auto-merge.

  4. EC-3 — Project-specific in global: A global rule is project-specific. Flag it for extraction to the project CLAUDE.md with the specific text.

Display candidates:

Condensation Candidates
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

[1] ## Part 1: Universal Standards → Rule 2
    Current (42 tokens):
      "If information is missing or ambiguous, ASK for clarification.
       NEVER guess library versions, API formats, implementation details,
       business logic, or user requirements.
       State clearly: 'I need clarification on [specific point]...'
       Ask specific, targeted questions rather than proceeding with assumptions"

    Proposed (18 tokens):
      "Missing info → ASK, never guess. State what you need before proceeding"

    Savings: 24 tokens

[2] ...

Irreducible:
  • Rule 10 (Security) — cannot condense without losing specific checks

Overlapping:
  (none found)

Project-specific in global:
  (none found)

Audit Step 3: Approval Flow

  1. For each condensation candidate, use AskUserQuestion with options:
     - Approve — apply this condensation
     - Reject — keep the current version
     - Edit — modify the proposed text before applying

  2. NF-4: Before applying ANY approved edits, create a git snapshot:

    git add <claudemd-path>
    git commit -m "chore: snapshot CLAUDE.md before condensation"


  3. Apply approved condensations via the Edit tool

  4. For each condensation applied: if the original text has useful detail, extract it to docs/claudemd-reference/<section-slug>.md with a link from the condensed rule:

    (See docs/claudemd-reference/universal-standards.md for full rationale)


  5. Display before/after token counts:

    Results
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
    Before:  3,200 tokens
    After:   2,350 tokens
    Saved:     850 tokens
    Budget:  2,500 tokens
    Status:  WITHIN BUDGET ✓
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━


  6. EC-6: If the user rejects ALL candidates → report "No changes applied" and exit.


Collect

Scans session journals, review findings, and critical patterns for recurring themes not yet covered by CLAUDE.md rules.

Collect Step 1: Scan Sources

Scan these sources for learnings and findings:

  1. Session journals (docs/vt-c-journal/):
     - Read entries from the last 30 days (or --days N if specified)
     - Extract decision entries, problem entries, and learning entries

  2. Review gate (.review-gate.md):
     - Extract review findings and pattern observations

  3. Review history (review-history.md):
     - Extract recurring finding types across reviews

  4. Critical patterns (docs/solutions/patterns/critical-patterns.md):
     - Extract promoted patterns
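The 30-day window (or --days N) for source 1 can be sketched as a filename filter. This assumes journal entries are named by ISO date (e.g. docs/vt-c-journal/2026-03-05.md — a hypothetical convention inferred from the citation format used later, not stated by the spec):

```python
from datetime import date, timedelta
from pathlib import Path

def recent_journal_entries(journal_dir: str, days: int = 30) -> list[Path]:
    """Return journal files whose leading ISO date falls within the window."""
    cutoff = date.today() - timedelta(days=days)
    entries = []
    for p in sorted(Path(journal_dir).glob("*.md")):
        try:
            entry_date = date.fromisoformat(p.stem[:10])
        except ValueError:
            continue  # skip files without a leading YYYY-MM-DD date
        if entry_date >= cutoff:
            entries.append(p)
    return entries
```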

Collect Step 2: Identify Recurring Patterns

A finding is "recurring" when it appears in:

  • 2+ separate review sessions, OR
  • 3+ journal entries with similar keywords

For each finding:

  1. Extract the core lesson or directive
  2. Check whether CLAUDE.md already covers it (grep for overlapping keywords in section headings and rule text)
  3. If already covered → skip
  4. If NOT covered → generate a proposal
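The recurrence test and coverage check above reduce to two small predicates. This is an illustrative sketch: only the 2+/3+ thresholds and the keyword-grep idea come from the spec; the `Finding` structure and helper names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    lesson: str
    review_sessions: set[str]  # distinct review sessions citing this finding
    journal_entries: set[str]  # journal entries with similar keywords

def is_recurring(f: Finding) -> bool:
    """2+ separate review sessions OR 3+ journal entries."""
    return len(f.review_sessions) >= 2 or len(f.journal_entries) >= 3

def is_covered(lesson_keywords: set[str], claudemd_text: str) -> bool:
    """Crude coverage check: all keywords already appear in CLAUDE.md."""
    lowered = claudemd_text.lower()
    return all(kw.lower() in lowered for kw in lesson_keywords)
```

A finding that passes `is_recurring` but fails `is_covered` becomes a proposal in Step 3.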

Collect Step 3: Generate Proposals

For each uncovered recurring pattern, create a proposal:

- id: proposal-001
  source: "journal/2026-03-05, journal/2026-03-07, review-gate 2026-03-06"
  pattern: "Repeatedly forgot to check hook compatibility after skill changes"
  proposed_rule: "After modifying a skill, verify related hooks still function correctly"
  rationale: "Pattern appeared 3 times in 1 week → should be a permanent rule"
  target_section: "## Part 2: Engineering Standards"
  token_impact: "+12 tokens"

Collect Step 4: Write or Exit

  • EC-5: If no uncovered patterns found → report:

    No recurring patterns detected — CLAUDE.md is current.
    Scanned: N journal entries, M review findings, K critical patterns.
    
    Exit cleanly.

  • If proposals found → write to docs/claudemd-reference/pending-proposals.md:

    # Pending CLAUDE.md Proposals
    
    Generated: YYYY-MM-DD
    Source scan: N journal entries, M review findings
    
    ## Proposal 1: [pattern summary]
    - **Source**: [citations]
    - **Proposed rule**: [text]
    - **Rationale**: [why]
    - **Target section**: [where in CLAUDE.md]
    - **Token impact**: [+N tokens]
    
    ## Proposal 2: ...
    

  • Report: "Found N proposals. Run /claudemd-evolve propose to review and apply."


Propose

Reviews pending proposals from collect and applies approved ones to CLAUDE.md.

Propose Step 1: Load Proposals

  1. Read docs/claudemd-reference/pending-proposals.md
  2. If the file does not exist or is empty → report:

    No pending proposals. Run /claudemd-evolve collect first.

    Exit.

Propose Step 2: Present Each Proposal

For each proposal, display:

Proposal 1 of N: [pattern summary]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Source:    journal/2026-03-05, review-gate 2026-03-06
Pattern:   [what was observed]
Proposed:  [rule text to add]
Section:   [target section in CLAUDE.md]
Impact:    +12 tokens (total would be 2,462/2,500)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Use AskUserQuestion for each:

  • Approve — add this rule
  • Reject — skip, keep in pending
  • Edit — modify text before adding

US-4: If adding this proposal would push total over 2,500 tokens, warn:

WARNING: Adding this rule would bring total to 2,580/2,500 tokens (+80 over budget).
Consider running /claudemd-evolve audit to condense before adding new rules.

Propose Step 3: Apply Approved Proposals

  1. NF-4: Before applying, create a git snapshot:

    git add <claudemd-path>
    git commit -m "chore: snapshot CLAUDE.md before rule additions"
    

  2. Insert approved rules into the appropriate section of CLAUDE.md via Edit tool

  3. Display before/after token counts

  4. Move applied proposals from pending-proposals.md to docs/claudemd-reference/applied-proposals.md with timestamp:

    ## Applied: YYYY-MM-DD
    - [rule text] (source: [citations])
    

  5. Keep rejected proposals in pending-proposals.md for future review

  6. EC-6: If ALL proposals rejected → "No changes applied" and exit.


Prune

Identifies stale, redundant, or superseded CLAUDE.md rules and proposes removal.

Prune Step 1: Parse Rules

  1. Read CLAUDE.md
  2. Parse it into individual rules:
     - Each numbered item (e.g., ### 1. Scope Discipline) is a rule
     - Each ### subsection within a ## section is a rule
  3. Build a list of rules with their text and section context
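The parse in Step 1 can be sketched as a single pass over the file, assuming the ##/### heading conventions described above; `parse_rules` is a hypothetical helper name:

```python
def parse_rules(text: str) -> list[dict]:
    """Collect each '### ' rule with its enclosing '## ' section as context."""
    rules: list[dict] = []
    section = None
    current = None
    for line in text.splitlines():
        if line.startswith("## "):       # new top-level section
            section = line.strip()
            current = None
        elif line.startswith("### "):    # each ### subsection is one rule
            current = {"heading": line.strip(), "section": section, "text": ""}
            rules.append(current)
        elif current is not None:        # body text belongs to the open rule
            current["text"] += line + "\n"
    return rules
```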

Prune Step 2: Apply Pruning Criteria

For each rule, check four criteria:

  1. Stale reference: Rule mentions a specific library, tool, or file path.
     - Grep the codebase for that reference
     - If zero matches → flag as stale with evidence:

       Rule mentions "tiktoken" but no file in the project references it

  2. Hook duplicate: Rule describes behavior that is enforced by a hook.
     - Read the .claude/hooks/ directory
     - If a hook enforces the same check → flag as redundant:

       Rule "never commit secrets" is enforced by hook: .claude/hooks/pre-commit-secret-scan

  3. One-time resolved: Rule references a specific bug or incident.
     - Check whether the architectural fix (code change) makes the rule unnecessary
     - Flag with evidence of the fix

  4. Superseded: Rule has the same heading prefix or 3+ keyword overlap with another rule.
     - Flag as potentially superseded with a reference to the overlapping rule
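The stale-reference criterion amounts to a recursive grep over the project tree; a minimal Python sketch standing in for the grep described above (`reference_is_stale` is a hypothetical helper name):

```python
from pathlib import Path

def reference_is_stale(reference: str, root: str = ".") -> bool:
    """True when no file under root mentions the reference (zero grep matches)."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            if reference in path.read_text(errors="ignore"):
                return False  # at least one match: not stale
        except OSError:
            continue  # unreadable file: skip
    return True
```

In practice the skill would run an actual grep via its shell tooling; this just shows the zero-matches → stale decision.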

Prune Step 3: Present Candidates

Prune Candidates
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

[1] Rule: "### 12. TypeScript Strictness"
    Criterion: Stale reference
    Evidence: No TypeScript files found in current project
    Savings: -35 tokens

[2] Rule: "### 10. Security — never commit secrets"
    Criterion: Hook duplicate
    Evidence: Enforced by .claude/hooks/secret-scanner
    Savings: -28 tokens

No candidates: 13 rules passed all criteria.

Prune Step 4: Approval and Removal

  1. Use AskUserQuestion for each candidate, with options:
     - Remove — delete this rule
     - Keep — retain the rule
     - Keep + note — retain with a note about why it stays despite the flag

  2. NF-4: Before removing, create a git snapshot:

    git add <claudemd-path>
    git commit -m "chore: snapshot CLAUDE.md before pruning"


  3. Remove approved rules via the Edit tool

  4. Archive removed rules to docs/claudemd-reference/archived-rules.md:

    ## Archived: YYYY-MM-DD

    ### Rule: [heading]
    **Reason**: [criterion — stale/duplicate/resolved/superseded]
    **Evidence**: [details]
    **Original text**:
    > [full original rule text preserved]


  5. Display before/after token counts

  6. EC-6: If ALL candidates are rejected → report "No changes applied, pruning report saved for reference" and exit.