Security scanning for AI Skills.

Catch malicious code, supply-chain attacks, and prompt injection before a Skill ever reaches a user. Pure static analysis — sub-2-second response, zero LLM cost.

72 built-in rules
4 platform adapters
< 2s p99 scan latency
$0 per scan (no LLM)

Try it in 30 seconds

# Pack a Skill bundle and POST it
zip -r my-skill.zip ./my-skill/
curl -X POST https://api.skillguard.vip/v1/scan/upload \
  -F "file=@my-skill.zip"

# Response
{
  "id": "p7EpuXvQbZOX",
  "blocked": false,
  "score": 94,
  "riskLevel": "Safe",
  "findings": [],
  "dependencies": [{ "name": "requests", "source": "python-import" }]
}
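
The same call can gate a CI job before a Skill ships. A minimal sketch in Python, assuming the requests library; the endpoint and response fields come from the example above, while the file name and exit-code convention are illustrative:

# gate_ci.py: fail the build if the scan blocks the bundle
import sys
import requests

with open("my-skill.zip", "rb") as f:
    resp = requests.post(
        "https://api.skillguard.vip/v1/scan/upload",
        files={"file": ("my-skill.zip", f)},
        timeout=30,
    )
resp.raise_for_status()
report = resp.json()

if report["blocked"]:
    print(f"Blocked: score {report['score']}, risk {report['riskLevel']}")
    for finding in report["findings"]:
        print(f"  - {finding}")
    sys.exit(1)                      # non-zero exit fails the CI step
print(f"Passed: score {report['score']} ({report['riskLevel']})")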

Threats covered

rm -rf / · curl | sh · eval() injection · subprocess shell=True · hardcoded API keys · path traversal · prompt injection · crontab persistence · YAML deserialization · SSH/AWS credential reads · DNS tunneling · supply-chain typosquats

How it works

L0: Structure

File-count and size limits, symlink and binary detection, YAML frontmatter validation, allowed-tools whitelist.
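
The L0 pass needs no parsing at all. A minimal sketch of the same idea in Python; the limits and the binary heuristic are assumptions, not skill-guard's actual thresholds, and the frontmatter and allowed-tools checks are omitted:

# l0_structure.py: the flavor of checks a structural pass performs
import stat
import zipfile

MAX_FILES = 200                # assumed file-count limit
MAX_FILE_SIZE = 5 * 1024**2    # assumed per-file size limit (5 MiB)

def l0_scan(bundle_path: str) -> list[str]:
    problems = []
    with zipfile.ZipFile(bundle_path) as zf:
        infos = zf.infolist()
        if len(infos) > MAX_FILES:
            problems.append(f"too many files: {len(infos)}")
        for info in infos:
            if info.file_size > MAX_FILE_SIZE:
                problems.append(f"{info.filename}: over size limit")
            # Zip stores the Unix mode in the high bits of external_attr,
            # which is where a symlink entry reveals itself.
            if stat.S_ISLNK(info.external_attr >> 16):
                problems.append(f"{info.filename}: symlink")
            # Crude binary sniff: a NUL byte in the first 1 KiB.
            with zf.open(info) as fh:
                if b"\x00" in fh.read(1024):
                    problems.append(f"{info.filename}: binary content")
    return problems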

L1: Rules

22 hard-block patterns plus 50 weighted rules. Context-aware: a pattern in executable code scores differently from a mere mention in prose. Repeated hits decay exponentially across files.
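
The decay scoring is easiest to see in code. A sketch with assumed rule weights and a 0.5 decay factor; only the hard-block/weighted split and the decay-across-files behavior come from the description above:

# l1_score.py: weighted rules with per-rule exponential decay
from collections import defaultdict

HARD_BLOCK = {"curl-pipe-sh", "rm-rf-root"}               # instant block
WEIGHTS = {"eval-injection": 30, "hardcoded-api-key": 20}  # assumed values
DECAY = 0.5                                                # assumed factor

def score(findings: list[tuple[str, str]]) -> tuple[int, bool]:
    """findings is a list of (rule_id, file_path) pairs."""
    if any(rule in HARD_BLOCK for rule, _ in findings):
        return 0, True
    seen = defaultdict(int)
    penalty = 0.0
    for rule, _path in findings:
        # The i-th hit of a rule costs weight * DECAY**i, so a rule
        # firing in many files saturates instead of zeroing the score.
        penalty += WEIGHTS.get(rule, 0) * DECAY ** seen[rule]
        seen[rule] += 1
    return max(0, round(100 - penalty)), False

Under these assumed numbers, two eval-injection hits cost 30 + 15 rather than 60, so the score lands at 55 instead of 40.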

L2: Dependencies

Extracts every Python import, Node require, and env-var reference. Cross-checks PyPI / npm / Cargo whitelists.
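
For the Python side, import extraction falls out of the standard-library ast module. A sketch with an illustrative allow-list; Node require and env-var extraction are omitted:

# l2_deps.py: extract top-level Python imports, flag unknown packages
import ast

PYPI_WHITELIST = {"requests", "numpy"}   # illustrative allow-list

def python_imports(source: str) -> set[str]:
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(a.name.split(".")[0] for a in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

deps = python_imports("import requests\nfrom requestz import get")
suspect = deps - PYPI_WHITELIST   # {'requestz'}: a likely typosquat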

CI: Adapters

Drop-in support for Niuma, OpenClaw, MCP servers, and GPTs Actions. SARIF output for GitHub Security tab.
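
The GitHub integration leans on SARIF being a plain JSON envelope. A minimal sketch of mapping findings into SARIF 2.1.0; the finding shape (rule/message/file/line keys) is hypothetical, while the envelope follows the published 2.1.0 schema:

# sarif_out.py: wrap findings in a minimal SARIF 2.1.0 document
import json

def to_sarif(findings: list[dict]) -> str:
    return json.dumps({
        "version": "2.1.0",
        "runs": [{
            "tool": {"driver": {"name": "skill-guard"}},
            "results": [{
                "ruleId": f["rule"],
                "level": "error",
                "message": {"text": f["message"]},
                "locations": [{"physicalLocation": {
                    "artifactLocation": {"uri": f["file"]},
                    "region": {"startLine": f["line"]},
                }}],
            } for f in findings],
        }],
    }, indent=2)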

Fail-closed by design

If rules can't load or a scan times out, skill-guard refuses to ship a passing report. We'd rather block a Skill for one second longer than let it slip through with a fake green check.
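
In code, fail-closed is a one-wrapper property: every failure path must produce a blocking report. A sketch of the principle; the report shape mirrors the API response above, and the wrapper itself is ours:

# fail_closed.py: a scan failure can never look like a pass
from typing import Callable

def scan_fail_closed(scan: Callable[[str], dict], bundle: str) -> dict:
    try:
        return scan(bundle)
    except Exception as exc:
        # Rules failed to load, the scan timed out, anything unexpected:
        # emit a blocking report instead of a fake green check.
        return {
            "blocked": True,
            "score": 0,
            "riskLevel": "ScanError",
            "findings": [{"rule": "internal-error", "message": str(exc)}],
        }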