OpenSSF Security Guide for AI Code Assistants
Summary
The OpenSSF Best Practices and AI/ML Working Groups published a comprehensive guide for embedding security practices into AI code assistant instructions (CLAUDE.md, Copilot instructions, Cursor rules, etc.). The guide covers input validation, secrets management, supply chain safety, and platform-specific hardening, and it introduces the Recursive Criticism and Improvement (RCI) pattern for self-review.
Key Details
- Research shows 40% of AI-generated programs contain security vulnerabilities, and 19.7% of AI-proposed packages don't exist (enabling "slopsquatting" attacks)
- Provides a ready-to-use instruction template covering parameterized queries, secret handling, error logging, dependency management, and OWASP compliance
- Introduces RCI: asking the AI to "review your previous answer and find problems" then "improve your answer" — shown to significantly improve code security
- Language-specific guidance for C/C++, Rust, Go, Python, JavaScript/TypeScript, Java, and C#
- Supply chain focus: use official package managers, lock versions, generate SBOMs, verify integrity with checksums
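The parameterized-query advice from the template can be illustrated with a minimal Python `sqlite3` sketch (the table, column, and input values are illustrative, not taken from the guide):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# Unsafe pattern: string interpolation lets attacker-controlled input
# rewrite the SQL text, e.g.:
#   query = f"SELECT * FROM users WHERE name = '{user_input}'"

# Safe pattern: the driver binds the value separately from the SQL text,
# so injection payloads are treated as plain data.
user_input = "alice'; DROP TABLE users; --"
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # → [] — no match, and the users table is untouched
```

The same placeholder discipline applies to any driver or ORM; only the placeholder syntax (`?`, `%s`, named parameters) differs.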
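The RCI pattern described above is just a fixed sequence of follow-up prompts. A minimal sketch, with `ask` as a caller-supplied stand-in for a real model call (the function names and prompt wording here are illustrative):

```python
def rci(ask, task_prompt):
    """Recursive Criticism and Improvement: generate, critique, then revise.

    `ask` is any callable that sends a prompt to a model and returns its
    reply; this sketch does not assume a particular LLM API.
    """
    draft = ask(task_prompt)
    critique = ask(f"Review your previous answer and find problems:\n{draft}")
    improved = ask(f"Based on these problems, improve your answer:\n{critique}\n{draft}")
    return improved

# Stub model so the control flow can be exercised without an LLM backend.
def fake_model(prompt):
    return f"[response to: {prompt.splitlines()[0]}]"

print(rci(fake_model, "Write a login handler"))
```

In practice each `ask` call would carry the full conversation history so the model actually sees its previous answer when critiquing it.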
Why Rolf Thinks This Matters
We should review this guide carefully: the security risks of AI-generated code will only become more severe as adoption grows, and we should be as well prepared as possible.