vt-v-review¶
Content-level review of VMS vault files. Goes beyond schema validation to check factual accuracy of norm references, consistency of reifegrad assessments, appropriateness of VisiTrans-specific content, and German language quality. Produces per-file review findings: approve, revise, or escalate-to-human.
Plugin: vms
Category: Other
Command: /vt-v-review
VMS Content Review¶
This skill performs content-level review of VMS vault files — checking accuracy, consistency, and appropriateness beyond what automated schema validation covers.
Process¶
- Read references/review-checklist.md for domain-specific review criteria
- Read references/norm-accuracy-guide.md for common mistakes
- Read the target files specified by the argument
- Review each file against the checklist:
  - Factual accuracy of norm references
  - Reifegrad assessment plausibility
  - VisiTrans-specific content appropriateness
  - Cross-reference correctness
  - German language quality
- Classify each file: approve / revise / escalate-to-human
- Report findings with specific issues and recommendations
Review Criteria¶
Norm Reference Accuracy¶
- ISO 27001 control numbers match actual Annex A structure
- VDA ISA references use correct chapter numbering
- NIS2 article references are accurate
- ISO 9001 chapter numbers match actual standard structure
- DSGVO article references are correct
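The numbering formats above can be checked mechanically before judging content. A minimal sketch, assuming ISO/IEC 27001:2022 Annex A themes A.5–A.8, DSGVO Articles 1–99, and NIS2 Articles 1–46; this validates format plausibility only, not that a specific control or article actually exists in the standard:

```python
import re

# Format checks only — numbering ranges are assumptions based on the
# current editions; verify against the standards themselves.
PATTERNS = {
    # ISO/IEC 27001:2022 Annex A controls live in themes A.5 through A.8
    "iso27001": re.compile(r"^A\.[5-8]\.\d{1,2}$"),
    # DSGVO has Articles 1-99
    "dsgvo": re.compile(r"^Art\. ([1-9]\d?)$"),
    # NIS2 (Directive (EU) 2022/2555) has Articles 1-46
    "nis2": re.compile(r"^Art\. ([1-9]|[1-3]\d|4[0-6])$"),
}

def check_ref(kind: str, ref: str) -> bool:
    """True if the reference matches the expected numbering format."""
    pattern = PATTERNS.get(kind)
    return bool(pattern and pattern.match(ref))
```

A reference that passes the format check still needs a content check against the actual standard text.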
Reifegrad Plausibility¶
- Does the claimed reifegrad match the described implementation?
- Is a reifegrad of 3+ justified for a 15-person company?
- Are there Nachweise to support the claimed level?
- Is the Beschreibung der Umsetzung consistent with the reifegrad?
VisiTrans Specificity¶
- Does the content reference actual VisiTrans systems and tools?
- Are supplier references accurate (from the supplier directory)?
- Is the scope appropriate for a SaaS company?
- Are generic boilerplate statements replaced with specific details?
Cross-Reference Consistency¶
- Do wiki-links point to the correct documents?
- Are norm_refs consistent between related files?
- Do TOM measures align with ISMS controls they reference?
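Whether TOM measures align with their ISMS controls is a judgment call, but dead wiki-links are mechanical. A sketch, assuming Obsidian-style [[Target]] links that resolve by file stem anywhere in the vault; the vault's actual resolution rules may differ:

```python
import re
import tempfile
from pathlib import Path

# Link target of [[Target]], [[Target|alias]], or [[Target#heading]]
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def unresolved_wikilinks(text: str, vault: Path) -> list[str]:
    """Return link targets with no matching .md file anywhere in the vault."""
    known = {p.stem for p in vault.rglob("*.md")}
    return [t.strip() for t in WIKILINK.findall(text) if t.strip() not in known]

# Demo on a throwaway vault containing a single known document
with tempfile.TemporaryDirectory() as d:
    vault = Path(d)
    (vault / "ISMS-Controls.md").write_text("", encoding="utf-8")
    broken = unresolved_wikilinks("See [[ISMS-Controls]] and [[Missing-Doc]].", vault)
```

Every target reported here should become a "revise" finding (broken link) or prompt a rename check.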
Impact Analysis & Cascade Awareness¶
Run the impact analyzer on the file under review. Resolve the script path relative to TOOLKIT_ROOT.
For each referencing file:
- Read its review_date from YAML frontmatter
- Compare against the reviewed file's last git modification date (git log -1 --format=%ai -- "<target_file>")
- If a referencing file's review_date is older than the reviewed file's last modification: flag as "stale review_date — may need consistency check"
Report:
- "N Dateien referenzieren diese Datei. Davon haben M ein aelteres review_date als die letzte Aenderung."
- List the M files with their review_date and the reviewed file's modification date
- This is informational; include it in the review findings as advisory
Language Quality¶
- Professional German business language
- No English mixed in (except technical terms)
- Consistent terminology across files
- No placeholder text or template markers remaining
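Leftover template markers are the one language check that automates well. A sketch, where the marker patterns ({{…}}, TODO, TBD, Lorem ipsum) are assumptions about the vault's template conventions and should be extended to match the actual templates:

```python
import re

# Assumed marker conventions — adjust to the vault's real templates.
PLACEHOLDER = re.compile(
    r"\{\{[^}]*\}\}|\bTODO\b|\bTBD\b|\bLorem ipsum\b", re.IGNORECASE
)

def placeholder_lines(text: str) -> list[int]:
    """Return 1-based line numbers that still contain template markers."""
    return [i for i, line in enumerate(text.splitlines(), 1)
            if PLACEHOLDER.search(line)]
```

Any hit is an automatic "revise" at minimum; professional-register and terminology-consistency checks remain manual.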
Classification¶
| Rating | Meaning | Action |
|---|---|---|
| approve | Content is accurate, complete, and appropriate | Ready for commit |
| revise | Minor issues found, can be fixed automatically | Agent can fix |
| escalate-to-human | Requires human judgment (reifegrad, legal claims, VisiTrans specifics) | Flag for user review |
Output Format¶
## Review Results — YYYY-MM-DD
### Summary
| Rating | Count |
|--------|-------|
| Approve | N |
| Revise | N |
| Escalate | N |
### File Details
#### [filename.md] — APPROVE
No issues found.
#### [filename.md] — REVISE
- [ ] Issue: [description]
- [ ] Fix: [recommendation]
#### [filename.md] — ESCALATE
- Reason: [why human judgment needed]
- Question: [specific question for the user]
Open Brain Capture (Optional)¶
After completing the review, if the capture_thought MCP tool is available, capture the review outcome.
How:
1. Check if capture_thought tool is available. If not: skip silently.
2. Call capture_thought with: