Quickstart — 5 minutes to your first scan
Scan your first AI agent for security issues in under 5 minutes. No account required.
- Python 3.9+ — check with
python --version - pip — comes bundled with Python 3.9+
- An agent to scan — use one of your own, or follow along with AgentBob in Your First Scan
Steps
-
1
Install AgentCop
Install the AgentCop CLI and Python library from PyPI — the recommended method for most users.
$ pip install agentcopSuccessfully installed agentcop-1.0.0Prefer to run from source? Clone the repo instead:
$ git clone https://github.com/agentcop/agentcop && cd agentcop && pip install -e ".[dev]"See Installation for Docker and self-hosted server options.
-
2
Scan a file
Point AgentCop at any Python agent file. The scanner performs static analysis using AST parsing and optional AI-enhanced review.
$ agentcop scan agent.pyscanning agent.py...✓ AgentCop v1.0.0─────────────────────────────Trust Score: 62/100 [MODERATE RISK]3 issues found:[HIGH] LLM01 · Prompt InjectionLine 14: f-string interpolation in LLM promptFix: Use parameterized prompt templates[MEDIUM] LLM02 · Insecure Output HandlingLine 31: eval() called on agent outputFix: Validate and sanitize before eval[LOW] LLM08 · Unreviewed External ActionsLine 47: POST request without human approval gateFix: Add ApprovalBoundary before network callsRun 'agentcop explain <issue-id>' for detailsView full report: https://agentcop.live/scan/abc123 -
3
Review results
Each scan produces three key outputs:
- Trust Score
- A 0–100 score. Higher is safer. Scores below 40 indicate critical risks that should block deployment. Scores of 80+ are considered production-ready.
- Issues
- Each issue includes a severity level (
CRITICAL,HIGH,MEDIUM,LOW,INFO), the relevant OWASP LLM Top 10 category, the offending line number, and an actionable fix. - Severity levels
CRITICAL— immediate exploit risk, must fix before deployment.
HIGH— significant risk, fix before any shared or production use.
MEDIUM— notable weakness, fix before public exposure.
LOW— minor concern or defense-in-depth improvement.
INFO— informational, no immediate action required.
-
4
Fix issues
Each issue includes a one-line fix hint in the CLI output. For deeper guidance — including code examples and exploit scenarios — see the Scanner Engine documentation and the Your First Scan walkthrough, which patches a deliberately vulnerable agent step by step.
-
5
Add to CI
Gate your deployments on a passing Trust Score by adding AgentCop to your GitHub Actions workflow. The CLI exits with a non-zero code when the score falls below the threshold you set.
yaml# .github/workflows/agentcop.yml name: AgentCop Security Scan on: [push, pull_request] jobs: scan: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: Set up Python uses: actions/setup-python@v5 with: python-version: "3.11" - name: Install AgentCop run: pip install agentcop - name: Scan agents env: ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} run: | # Fail the build if Trust Score drops below 70 agentcop scan agents/ --min-score 70
Scan via API
You can also scan code directly through the AgentCop REST API — useful for integrating into custom tooling, web UIs, or non-Python pipelines.
import httpx
response = httpx.post("https://api.agentcop.live/api/scan", json={
"code": open("agent.py").read(),
"description": "My ReAct agent"
})
result = response.json()
print(f"Trust Score: {result['trust_score']}/100")
The response includes trust_score, issues (array), scan_id, and a report_url. See the AgentCop API reference for the full schema.