vt-c-3-build

Phase 3 - Implementation guidance with continuous knowledge capture. Reminds you to journal decisions and helps maintain quality during coding.

Plugin: core-standards
Category: Development Workflow
Command: /vt-c-3-build


Phase 3: Build - Implementation

Write code following the plan, while capturing decisions and learnings in your session journal.

Workflow Position

/activate → /vt-c-2-plan → [ /vt-c-3-build ] → /vt-c-4-review → /vt-c-5-finalize → /vt-c-complete
                              ▲ YOU ARE HERE
                              ↑ deliverable_types determines dispatch at each phase

Invocation

/vt-c-3-build                  # Build with test verification before gate
/vt-c-3-build --skip-verify    # Build without test verification loop

Execution Instructions

Step 0a: Read Phase Checkpoint

If .claude-checkpoint.md exists in the project root:

  1. Read the file
  2. Display: Resuming from checkpoint: {completed_phase} completed at {timestamp}
  3. Display the "Context for Next Phase" section content
  4. If next_phase doesn't match 3-build, warn: "Checkpoint suggests /{next_phase}, but you're running /vt-c-3-build" (advisory, not blocking)

If no checkpoint exists: proceed silently.
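The checkpoint read described in this step can be sketched as follows. This is a minimal sketch, assuming the checkpoint stores its fields as YAML-style frontmatter (as the field list in Step 6.5c suggests); the exact file layout may differ.

```python
# Hedged sketch: read .claude-checkpoint.md frontmatter without a YAML
# library. Field names follow Step 6.5c; the layout is an assumption.
from pathlib import Path

def read_checkpoint(path=".claude-checkpoint.md"):
    """Return frontmatter fields as a dict, or None if no checkpoint exists."""
    p = Path(path)
    if not p.exists():
        return None  # no checkpoint: proceed silently
    lines = p.read_text().splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields
```

A caller would then compare `fields.get("next_phase")` against `3-build` to decide whether to show the advisory warning.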

Step 0: Check for Spec-Driven Mode

If .specify/ directory exists:

  1. Load constitution - .specify/memory/constitution.md for principles
  2. Derive active spec from branch — check .active-spec file first, then parse branch name (feature/spec-NNN-* → SPEC-NNN), then look up specs_dir in .design-state.yaml
  3. Load spec - specs/[N]-feature/spec.md for requirements
  4. Display status:
    ✓ SpecKit detected - Spec-driven build mode active
    • Implementation will be validated against spec
    • Constitution constraints enforced
    

If NO .specify/:

Building without spec constraints.
TIP: For spec-driven development, run /vt-c-specify-and-validate
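The branch-name fallback in item 2 of this step (feature/spec-NNN-* → SPEC-NNN) can be sketched as a small helper. The three-digit zero-padding is an assumption based on the SPEC-NNN identifiers used elsewhere in this document.

```python
# Hedged sketch: derive the active spec id from a feature branch name.
# Zero-padding width is an assumption (SPEC-012, SPEC-036 style ids).
import re

def spec_from_branch(branch):
    """Return 'SPEC-NNN' parsed from a feature branch name, else None."""
    m = re.match(r"feature/spec-(\d+)", branch)
    if not m:
        return None
    return f"SPEC-{int(m.group(1)):03d}"
```

In the real dispatch, this runs only after checking .active-spec, and falls through to the specs_dir lookup in .design-state.yaml when it returns None.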

Step 0.5: Project Registration Check

  1. Read TOOLKIT_ROOT/intake/projects.yaml (resolve TOOLKIT_ROOT from this skill's base directory)
  2. Check if the current working directory appears in the projects list
  3. If NOT registered and the project has a CLAUDE.md or .design-state.yaml, ask: "This project is not registered in the toolkit. Register now to enable cross-project scanning and journal aggregation?"
     Options: Register now (invoke /vt-c-project-register) | Skip for now
  4. If registered: continue silently

Step 0b: Plan Prerequisite Check

If SpecKit detected AND active spec (derived from branch) has a specs_dir in .design-state.yaml:

  1. Check if specs/[N]-feature/plan.md exists
  2. If not, check if specs/[N]-feature/tasks.md exists
  3. Determine plan status:

If plan.md OR tasks.md exists:

✓ Plan loaded from specs/[N]-feature/[plan.md or tasks.md]

If NEITHER plan.md NOR tasks.md exists:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IMPORTANT: Run /vt-c-2-plan before building
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Plan before build is mandatory (Constitution Principle II).
The /vt-c-2-plan phase researches best practices, creates
architecture decisions, and breaks the spec into executable
tasks. Building without a plan risks:
- Missing edge cases identified during research
- Architecture decisions made ad-hoc without documentation
- No task breakdown to track progress

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
RECOMMENDED: /vt-c-2-plan
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Use AskUserQuestion:
  - Run /vt-c-2-plan first (Recommended) — exit build, run planning phase
  - Proceed without plan — acknowledge and continue building

If the developer chooses to proceed without plan, note this for the phase checkpoint (plan_status: BYPASSED).

If no SpecKit or no active spec or no specs_dir: Skip this check entirely.

Step 0c: Plan Validation Gate Check

If SpecKit detected AND active spec has a specs_dir:

  1. Check if specs/[N]-feature/.plan-gate.md exists
  2. If it exists, read the status field from its YAML frontmatter
  3. Apply logic:
     • If .plan-gate.md exists AND status is PASS → proceed silently
     • If status is FAIL → display warning:
       ⚠ Plan validation failed. Consider re-running /vt-c-2-plan to re-validate.
     • If status is SKIPPED → display note:
       Note: Plan validation was skipped during /vt-c-2-plan.
     • If no .plan-gate.md → proceed silently (backwards compatible)

This is a soft gate (warning only, does not block execution).
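The soft-gate decision above can be sketched as a pure function. `gate` here stands in for the parsed frontmatter of .plan-gate.md (a dict, or None when the file is missing); the frontmatter-reading helper itself is assumed.

```python
# Hedged sketch of the soft gate: map .plan-gate.md status to an
# advisory message, or None when the build should proceed silently.
def plan_gate_message(gate):
    """Return an advisory string for FAIL/SKIPPED, else None."""
    if gate is None or gate.get("status") == "PASS":
        return None  # missing file is backwards compatible; PASS is silent
    if gate.get("status") == "FAIL":
        return "Plan validation failed. Consider re-running /vt-c-2-plan to re-validate."
    if gate.get("status") == "SKIPPED":
        return "Note: Plan validation was skipped during /vt-c-2-plan."
    return None
```

Because this is a soft gate, the returned message is only displayed; it never blocks execution.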

Step 0d: Parallel Opportunity Check

If SpecKit detected AND active spec derived from branch:

  1. Read .design-state.yaml for all specs and their dependencies
  2. Compute which wave the active spec belongs to (using the wave algorithm from /vt-c-activate Step 2b):
     • resolved = all completed specs
     • Group remaining specs into waves by dependency level
     • Find the wave containing the active spec
  3. Check for other specs in the same wave that are ready:
     • Dependencies all met (completed)
     • Not currently in_progress or completed
  4. If parallel opportunities found, display:

Parallel Opportunity Detected
─────────────────────────────────────────────────
You're building SPEC-036 (GSD Wave Execution).
3 other specs in the same wave have no dependencies on this one:

  SPEC-012  Design State Split         P1  ready
  SPEC-014  Unified Init               P1  ready
  SPEC-024  Review Autofix Loop        P1  ready

To work on them in parallel, open separate sessions:
  1. Open a new VS Code window or terminal tab
  2. Run: claude --worktree spec-12
  3. Then: /vt-c-activate SPEC-012 && /vt-c-2-plan && /vt-c-3-build

Each session gets its own fresh context and worktree.
─────────────────────────────────────────────────

  5. If no parallel opportunities: skip silently (do not display anything)

If no SpecKit or no active spec: Skip this check entirely.
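The wave grouping in Step 0d can be sketched as repeated dependency leveling. The input shape (spec id mapped to a dict with "deps" and "status") is an assumption modeled on how .design-state.yaml is described in this document, not its actual schema.

```python
# Hedged sketch: group non-completed specs into dependency waves.
# Field names ("deps", "status") are assumptions about .design-state.yaml.
def compute_waves(specs):
    """Return a list of waves; each wave is a sorted list of spec ids."""
    resolved = {s for s, info in specs.items() if info.get("status") == "completed"}
    remaining = {s for s in specs if s not in resolved}
    waves = []
    while remaining:
        # A spec joins the current wave once all its deps are resolved.
        wave = sorted(s for s in remaining
                      if all(d in resolved for d in specs[s].get("deps", [])))
        if not wave:
            break  # dependency cycle or unmet external dependency
        waves.append(wave)
        resolved.update(wave)
        remaining.difference_update(wave)
    return waves
```

The parallel-opportunity check then filters the active spec's wave for specs that are neither in_progress nor completed.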

Step 0.7: Read Deliverable Types

  1. Read deliverable_types from specs/[N]-feature/spec.md YAML frontmatter
  2. If present and non-empty: use as-is
  3. If missing: warn "No deliverable_types in spec frontmatter — defaulting to [code]" and treat as [code]
  4. Display: "Deliverable types: {list}"

The detected types determine which checklists, skills, and guidance are shown in subsequent steps.

Step 1: Load Context

  1. Check if a plan exists from /vt-c-2-plan
  2. Load docs/solutions/patterns/critical-patterns.md if it exists
  3. Reference any relevant past solutions in docs/solutions/
  4. If SpecKit: Load specs/[N]-feature/spec.md requirements (from .design-state.yaml)

Step 1a: Visual Alignment (Frontend Specs)

If SpecKit is detected and active spec loaded:

  1. Read ui_scope from specs/[N]-feature/spec.md YAML frontmatter
  2. If ui_scope is backend or absent → skip silently
  3. If ui_scope is frontend or mixed:
     a. Read the ## Visual Reference section from the spec
     b. For each referenced file path:
        • Use Read tool to load the image (PNG, JPG supported natively)
        • If file not found: warn and list the missing path, continue
     c. For each URL reference: note for manual review
     d. For text descriptions: parse inline
     e. Produce a "Visual Constraints" summary:
        • Layout structure (columns, grids, sections)
        • Color palette (dominant colors observed)
        • Typography (heading sizes, font weights)
        • Spacing patterns (padding, margins, gaps)
        • Component hierarchy (nesting, grouping)
     f. Display:
        Visual Alignment: [N] reference(s) loaded
        ─────────────────────────────────────────────────
        [Visual Constraints summary]
        ─────────────────────────────────────────────────
        These constraints will inform code generation.
  4. If ui_scope is frontend or mixed but no ## Visual Reference section exists:
     ⚠ Visual references missing for frontend-facing spec.
     The spec has ui_scope: [value] but no Visual Reference section.
     Proceeding without visual constraints.

Note: This is a build-time advisory, not a gate. Visual reference completeness is enforced at spec-creation time (Step 3b of /vt-c-spec-from-requirements) rather than at build time.

Step 1b: Initialize Step Tracking

If specs/[N]-feature/state.yaml has steps: for the active spec → load existing progress and display it.

If no steps: exist yet AND specs/[N]-feature/tasks.md exists:

  1. Read tasks.md phase headers (lines matching ## Phase N: ...)
  2. Create or update the steps: map in specs/[N]-feature/state.yaml:

steps:
  1: pending   # Phase 1: Setup
  2: pending   # Phase 2: Core Mechanism
  3: pending   # Phase 3: KW Parallel

  3. Display: "Initialized N steps from tasks.md"

If no steps: and no tasks.md → skip step tracking silently.
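The step-map initialization from tasks.md phase headers can be sketched as:

```python
# Hedged sketch: parse "## Phase N: Title" headers from tasks.md and
# build the steps: map described above (all steps start as pending).
import re

def init_steps(tasks_md_text):
    """Return (steps, labels): {n: 'pending'} and {n: phase title}."""
    steps, labels = {}, {}
    for line in tasks_md_text.splitlines():
        m = re.match(r"##\s+Phase\s+(\d+):\s*(.+)", line)
        if m:
            n = int(m.group(1))
            steps[n] = "pending"
            labels[n] = m.group(2).strip()
    return steps, labels
```

Writing the resulting map back into state.yaml (and the comment annotations shown above) is left to the caller.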

Display current step progress (if steps exist):

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Step Progress: [SPEC-ID]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Step 1: [description]        completed
Step 2: [description]        completed
Step 3: [description]        in_progress  ← resume here
Step 4: [description]        pending
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Step 1c: Angular CLI Generation Rule

If angular.json exists in the project root, display and enforce the following constraint:

Angular CLI Generation Rule (Mandatory)
─────────────────────────────────────────────────────────────────
When creating ANY new Angular artifact, you MUST use the Angular
CLI first, then implement the generated files. NEVER create
Angular artifacts by writing files directly.

Commands:
  Component:    ng g c <path/name> --skip-tests=false --style=scss
  Service:      ng g s <path/name>
  Directive:    ng g d <path/name>
  Pipe:         ng g pipe <path/name>
  Guard:        ng g guard <path/name>
  Interceptor:  ng g interceptor <path/name>

PROHIBITION: NEVER create a component with inline template: or
styles: properties. ALWAYS use templateUrl and styleUrl pointing
to external .html and .scss files generated by ng g c.

Workflow:
  1. Run the appropriate ng generate command
  2. Wait for CLI completion (generates .ts, .html, .scss, .spec.ts)
  3. Implement logic in the generated files
─────────────────────────────────────────────────────────────────

If no angular.json exists: skip this step silently.

Step 2: Display Build Guidance

When code in deliverable_types:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Phase 3: Build - Implementation Mode
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

You're now in implementation mode. I'll help you write code
following the plan from Phase 2.

When non-code types only (no code in types):

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Phase 3: Build - Content Creation Mode
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

You're now in content creation mode. I'll help you produce
deliverables following the plan from Phase 2.

Always append (all types):

IMPORTANT: Capture your decisions!
─────────────────────────────────────────────────────────────────
Use /vt-c-journal throughout this phase to record:
• Architectural decisions and why
• Problems encountered and solutions
• Deviations from the plan
• Things you learned

Example:
/vt-c-journal "Chose bcrypt over argon2 for password hashing because
         our deployment environment has native bcrypt support"

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Step 3: Show Implementation Checklist

Display the union of checklists for all types in deliverable_types. When multiple types are present, show section headers per type.

code checklist:

Code Implementation Checklist:
[ ] Write tests first (TDD) or alongside code
[ ] Validate all user input
[ ] Handle errors gracefully
[ ] No hardcoded secrets (use env vars)
[ ] Add comments for non-obvious logic
[ ] Follow existing code patterns in the project
[ ] Visual references consulted (if frontend-facing)

document checklist:

Document Creation Checklist:
[ ] Content matches outline from plan
[ ] Writing is clear and concise for target audience
[ ] Facts and figures are sourced or verifiable
[ ] Internal/external links resolve correctly
[ ] Consistent formatting (headings, lists, code blocks)
[ ] No placeholder content (TODO, TBD, [fill in])

presentation checklist:

Presentation Creation Checklist:
[ ] Narrative follows arc from plan (problem → solution → evidence → CTA)
[ ] Max 6 bullet points per slide, max 8 words per bullet
[ ] Speaker notes written for every content slide
[ ] Visual consistency (fonts, colors, alignment)
[ ] Timing appropriate for allocated duration
[ ] Corporate design compliance (logo, colors, fonts)

research checklist:

Research Checklist:
[ ] Research questions clearly stated
[ ] Methodology documented and justified
[ ] Multiple independent sources cited per key claim
[ ] Sources are current (within 2-5 years)
[ ] Conclusions traceable to evidence
[ ] Recommendations are specific and actionable

project-plan checklist:

Project Plan Checklist:
[ ] All phases from initiation to closure present
[ ] Dependencies explicitly mapped
[ ] Resources identified and not overallocated
[ ] Risks identified with mitigations
[ ] Milestones have measurable completion criteria
[ ] Stakeholder communication plan defined

Always append (all types):

Quality reminders active:

Display active quality tools based on types:
  • code: defense-in-depth skill, test-driven-development skill, secret-scanner hook
  • Non-code only: session-journal, compound-docs
  • Multi-type: union of all applicable

Step 4: Offer Implementation Help

How can I help you build?

Options:
1. Start implementing the first item from the plan
2. Write tests for a specific component
3. Create a specific file or module
4. Debug an issue you're encountering

Just describe what you want to build, and I'll help while
following the plan and toolkit standards.

Step 5: During Implementation

While helping the user code:

  1. Reference the plan - Stay aligned with Phase 2 decisions
  2. Apply skills automatically:
     • defense-in-depth for input validation
     • test-driven-development for testing
     • systematic-debugging if issues arise
  3. Remind about journaling - After significant decisions:
     TIP: This decision is worth capturing.
     Run: /vt-c-journal "Decided to [what] because [why]"
  4. Watch for patterns - If implementing something similar to past solutions, reference them
  5. Update step progress - After completing all tasks in a phase (if steps: exists in specs/[N]-feature/state.yaml):
     • Update specs/[N]-feature/state.yaml: set the completed step to completed
     • Set the next step to in_progress
     • Display: "Step N completed. Starting Step N+1: [description]"
     • This persists progress across session boundaries

Step 6: Periodic Check-ins

After completing significant work, prompt:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Progress Check
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Good progress! Before continuing:

1. Have you captured important decisions with /vt-c-journal?
2. When code in types: Are tests written for the code so far?
   When non-code types present: Are deliverables meeting quality criteria from the checklist?
3. Any concerns about the approach?

When you're ready for review, run: /vt-c-4-review

Step 6.5a: Test Verification Loop

Before the build gate is marked COMPLETE, run automated test verification. This enforces "no completion claims without evidence" as the default.

If --skip-verify was passed: Skip verification. Record verification: { skipped: true, reason: "user override" } for the build gate. Proceed to Step 6.5b.

If no active spec or no state.yaml: Skip verification silently. Proceed to Step 6.5b.

Otherwise, auto-detect the test command using this priority order:

  1. package.json exists with scripts.test defined → npm test
  2. angular.json exists → npx vitest (Angular 21 default) or npm test
  3. pytest.ini or pyproject.toml with [tool.pytest] section → pytest
  4. go.mod exists → go test ./...
  5. Cargo.toml exists → cargo test
  6. Makefile exists with test target → make test
  7. Rakefile or Gemfile exists → bin/rails test or bundle exec rspec

If no test command detected:

⚠ No test command detected — skipping verification.
  Build gate will be marked COMPLETE without test evidence.
  To add tests, create a test script in package.json, pytest.ini, etc.
Record verification: { skipped: true, reason: "no test command detected" }. Proceed to Step 6.5b.
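The detection priority above can be sketched as ordered file probes. This is a simplification: the real checks also inspect file contents (scripts.test in package.json, a [tool.pytest] section, a Makefile test target), which existence checks alone cannot confirm.

```python
# Hedged sketch: auto-detect the test command by probing marker files
# in priority order. Content-level checks are simplified to existence.
import os

def detect_test_command(root="."):
    """Return the first matching test command, or None if nothing matches."""
    probes = [
        ("package.json", "npm test"),     # assumes scripts.test is defined
        ("angular.json", "npm test"),
        ("pytest.ini", "pytest"),
        ("pyproject.toml", "pytest"),     # assumes a [tool.pytest] section
        ("go.mod", "go test ./..."),
        ("Cargo.toml", "cargo test"),
        ("Makefile", "make test"),        # assumes a test target exists
        ("Gemfile", "bundle exec rspec"),
    ]
    for filename, command in probes:
        if os.path.exists(os.path.join(root, filename)):
            return command
    return None
```

A None result corresponds to the "no test command detected" branch above, where verification is recorded as skipped.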

If test command detected, run the convergence loop:

iteration = 0
max_iterations = 5

while iteration < max_iterations:
  iteration += 1
  display: "Running verification (iteration {iteration}/{max_iterations}): {command}"

  Run: {command} (use Bash tool's timeout parameter set to 300000ms, not the shell `timeout` command)
  Capture exit code and output

  IF exit code == 0:
    display: "✓ Tests passed (iteration {iteration})"
    Record verification: { command, iterations: iteration, status: "pass" }
    Proceed to Step 6.5b
    BREAK

  IF exit code == 124 (timeout):
    display: "✗ Test command timed out after 5 minutes"
  ELSE:
    display: "✗ Tests failed (iteration {iteration})"
    Show relevant failure output

  IF iteration < max_iterations:
    Analyze failure output
    Attempt to fix the failing code
    Continue loop

IF iteration == max_iterations AND tests still failing:
  display:
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
    ⚠ Verification did not converge after 5 iterations
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

    Last failure:
    {failure summary}
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  AskUserQuestion:
    - Fix manually and retry — user fixes, then re-run Step 6.5a
    - Skip verification and proceed — record verification: { skipped: true, reason: "user override after 5 failures" }
    - Abort build — stop, do not mark build gate
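The convergence loop above can be sketched as runnable code. The fix step is a callback here because in the real loop the agent analyzes the failure output and edits code between iterations; `command` is an argv list.

```python
# Hedged sketch of the convergence loop: run the test command until it
# passes or max_iterations is exhausted. Timeout is in seconds (5 min).
import subprocess

def verification_loop(command, fix=None, max_iterations=5, timeout=300):
    """Return a verification record: command, iterations, pass/fail status."""
    for iteration in range(1, max_iterations + 1):
        try:
            result = subprocess.run(command, capture_output=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            continue  # treat a timeout like a failure and retry
        if result.returncode == 0:
            return {"command": command, "iterations": iteration, "status": "pass"}
        if fix is not None and iteration < max_iterations:
            fix(result)  # analyze failure output and attempt a repair
    return {"command": command, "iterations": max_iterations, "status": "fail"}
```

A "fail" record corresponds to the non-convergence prompt above, where the user chooses to fix manually, skip verification, or abort.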

Step 6.5b: Persist Build Gate

When the user indicates build is complete (moving to review), persist the build gate:

  1. Derive the active spec from the branch name
  2. Read specs/[N]-feature/state.yaml
  3. Count total commits on this branch (all phases): git log main..HEAD --oneline | wc -l
  4. Collect the verification result from Step 6.5a (if it ran)
  5. Add or update the build_gate section:
    build_gate:
      status: COMPLETE
      date: "YYYY-MM-DD"
      commits: N
      verification:
        command: "npm test"
        iterations: 2
        status: "pass"
    

If verification was skipped:

build_gate:
  status: COMPLETE
  date: "YYYY-MM-DD"
  commits: N
  verification:
    skipped: true
    reason: "user override"

  1. Write the updated state.yaml
  2. Stage: git add specs/[N]-feature/state.yaml

If no active spec or no state.yaml: skip silently.

Step 6.5c: Write Phase Checkpoint

Write .claude-checkpoint.md to the project root with:

  • completed_phase: 3-build
  • next_phase: 4-review
  • next_command: /vt-c-4-review
  • branch: current git branch
  • active_spec: derived from branch or .active-spec
  • review_status: check .review-gate.md (PASS/FAIL/NOT_RUN)
  • plan_status: check for plan.md/tasks.md (LOADED/BYPASSED/NOT_APPLICABLE)
  • step_progress: from state.yaml steps (if present)
  • clear_recommended: true
  • clear_reason: "Review should run with fresh context to avoid self-confirmation bias from the build conversation"
  • Key Outcomes: what was built, tests status
  • Context for Next Phase: areas to focus review on, known risks, deferred items

Display the clear recommendation per the phase-checkpoint protocol.
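An illustrative checkpoint file, assuming the fields are stored as YAML frontmatter with the narrative sections in the markdown body (all values below are examples, not prescribed output):

```
---
completed_phase: 3-build
next_phase: 4-review
next_command: /vt-c-4-review
branch: feature/spec-036-gsd-wave
active_spec: SPEC-036
review_status: NOT_RUN
plan_status: LOADED
clear_recommended: true
clear_reason: "Review should run with fresh context to avoid self-confirmation bias from the build conversation"
---

## Key Outcomes
- Wave execution implemented; tests passing

## Context for Next Phase
- Focus review on dependency-resolution edge cases
```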

Step 7: Review Gate Check & Transition

When user indicates they're done or asks for review, check the review gate status:

  1. Check if .review-gate.md exists in the project root
  2. If it exists, read the branch and status fields from its frontmatter
  3. Apply logic:

IF .review-gate.md exists AND branch matches current git branch AND status is PASS:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✓ Review Already Completed
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Review passed for this branch on [date from .review-gate.md].

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
NEXT STEP: /vt-c-5-finalize
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

IF .review-gate.md is missing OR branch doesn't match OR status is FAIL:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IMPORTANT: Run /vt-c-4-review before creating a PR
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Review is mandatory (Constitution Principle I).
The /vt-c-4-review phase dispatches type-appropriate reviewers
in parallel. Skipping review means quality issues may reach
production or delivery.

Before running /vt-c-4-review, ensure:
[ ] All planned features/content implemented
[ ] Tests written and passing (if code type)
[ ] No TODO/TBD/placeholder content left unaddressed
[ ] /vt-c-journal entries for key decisions
[ ] Safety commit made before running /simplify (when `code` in deliverable_types)
[ ] git diff reviewed after /simplify — changes accepted or reverted
[ ] Simplification improvements committed separately

Pre-PR Quality Pass ritual (when `code` is in deliverable_types):
  git commit -m "feat: implement X"   # 1. safety commit — revert point if needed
  /simplify                            # 2. quality pass (90 seconds)
  git diff                             # 3. review what changed — accept or revert
  git add -u && git commit -m "refactor: simplify"  # 4. commit improvements

Focused mode (when you know what to target):
  /simplify focus on memory efficiency
  /simplify focus on error handling
  /simplify focus on security patterns

When NOT to run /simplify:
  - Config-only changes (env vars, timeouts, port numbers)
  - Tests are failing — fix first, simplify after
  - Diff is >50 files — too large for accurate analysis
  - You intentionally deviate from CLAUDE.md standards in this change

Self-Review Pass (automatic for `code` deliverable types):
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Before external review, run a self-criticism pass on your changes.

What this does: The AI re-reads the code it just wrote through a
security/error-handling lens — a different reasoning mode than the
one used to generate the code. OpenSSF research shows this catches
~40% of vulnerabilities that generation alone misses.

How it works:
  Pass 1: Security + error handling (always runs)
  Pass 2: Edge cases + robustness (if Pass 1 found issues)
  Pass 3: Resource leaks + performance (if Pass 2 found new issues)
  Stops at convergence (no new findings) or after 3 passes.

Run: /vt-c-recursive-criticism

After RCI completes, review findings. Fix any HIGH-severity issues
before proceeding to /vt-c-4-review.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

When to SKIP /vt-c-recursive-criticism:
  - Deliverable types do not include `code`
  - Only .md, .yaml, .json files changed (no executable code)
  - Config-only changes

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
NEXT STEP: /vt-c-4-review
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

The review phase will dispatch reviewers based on deliverable_types:
• code → 6 code agents + accessibility + design (when frontend)
• document → document-quality-reviewer
• presentation → presentation-quality-reviewer
• research → research-quality-reviewer
• project-plan → project-plan-reviewer

Supporting Commands During Build

Command                   Use For
/vt-c-journal             Capture decisions, learnings, problems
/vt-c-commit              Commit with security checks
/vt-c-investigate-bug     Debug unexpected behavior

Skills Active During Build

When code in deliverable_types:
  • defense-in-depth - Input validation patterns
  • test-driven-development - TDD workflow
  • testing-anti-patterns - Warns about mock pollution, production test-only methods
  • systematic-debugging - Structured debugging
  • root-cause-tracing - Traces bugs backward through call stack
  • secret-scanner - Prevents committed secrets

When non-code types present:
  • session-journal - Continuous decision capture
  • compound-docs - Pattern documentation

Multi-type specs: union of all applicable skills.

SpecKit Compliance (When Active)

If SpecKit is detected, additional checks during build:

During Implementation

  • Reference spec - Ensure features match specs/[N]-feature/spec.md requirements
  • Respect constitution - Don't violate .specify/memory/constitution.md principles
  • Track coverage - Note which spec requirements are being implemented

Spec-Aware Reminders

After implementing significant features:

Spec Checkpoint:
─────────────────────────────────────────────────────────────────
Feature implemented: [feature name]

Spec requirements addressed:
• [requirement 1 from spec.md]
• [requirement 2 from spec.md]

Constitution compliance: ✓ Verified

Consider: /vt-c-journal "Implemented [feature] per spec requirement X"

If Implementation Deviates from Spec

⚠ Spec Deviation Detected
─────────────────────────────────────────────────────────────────
Implementation differs from spec:
• Spec says: [what spec requires]
• Implementation: [what was built]

Options:
1. Update implementation to match spec
2. Update spec to reflect new requirements
3. Document deviation with rationale

Run: /vt-c-journal "Deviated from spec: [reason]"