Runtime Security for AI Agents

AgentCop Documentation

Scan. Monitor. Gate. The complete security lifecycle for your agent fleet.

What is AgentCop?

AgentCop is the first runtime enforcement layer built specifically for AI agent systems. Where traditional security tools stop at static analysis or after-the-fact monitoring, AgentCop covers the full security lifecycle — from code review before deployment to live execution gating during every tool call your agents make.

01 Scan

AST-based static analysis maps your agent code against the OWASP LLM Top 10, known CVEs, and AgentCop's proprietary rule set. Runs before you deploy.
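To make the idea concrete, here is a toy illustration of one such rule using Python's standard `ast` module. This is a sketch of the technique, not AgentCop's actual rule engine: it flags direct shell-execution calls in agent source, the same class of finding shown in the quickstart below.

```python
import ast

SHELL_EXEC_CALLS = {"subprocess.run", "subprocess.Popen", "os.system"}

def find_shell_exec(source: str) -> list[int]:
    """Return line numbers of direct shell-execution calls.

    A toy version of one static-analysis rule; the real scanner
    checks many more patterns.
    """
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            f = node.func
            if isinstance(f, ast.Attribute) and isinstance(f.value, ast.Name):
                if f"{f.value.id}.{f.attr}" in SHELL_EXEC_CALLS:
                    hits.append(node.lineno)
    return hits

agent_src = "import subprocess\nsubprocess.run(['ls'], shell=True)\n"
print(find_shell_exec(agent_src))  # -> [2]
```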

02 Monitor

Behavioral anomaly detection watches your agents at runtime — flagging unusual execution patterns, unexpected tool usage, and deviations from baseline behavior.
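A minimal sketch of the underlying idea, assuming a hypothetical baseline of observed tool calls (this is illustrative, not AgentCop's actual detection model): flag a window of recent activity when too many calls fall outside the tools the agent has historically used.

```python
def anomaly_score(baseline_calls: list[str], window_calls: list[str]) -> float:
    """Fraction of recent tool calls not seen in the baseline.

    A crude stand-in for real behavioral detection, which also weighs
    call frequency, ordering, and arguments.
    """
    known = set(baseline_calls)
    if not window_calls:
        return 0.0
    unseen = [c for c in window_calls if c not in known]
    return len(unseen) / len(window_calls)

baseline = ["web_search", "web_search", "file_reader", "calculator"]
recent = ["web_search", "subprocess_exec", "file_writer"]

score = anomaly_score(baseline, recent)
if score > 0.5:
    print(f"ANOMALY: {score:.0%} of recent tool calls deviate from baseline")
```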

03 Gate

Runtime enforcement intercepts every tool call before it executes. Unauthorized operations are blocked, not just logged. This is the enforcement layer nobody else has.
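Conceptually, the gate works like a permission check wrapped around every tool function. The sketch below is a hypothetical stand-in: the `gated` decorator, `ALLOWED_TOOLS` scope, and `ToolCallBlocked` exception are illustrative names, not AgentCop's API.

```python
from functools import wraps

# Hypothetical permission scope for one agent identity; in practice a
# gate resolves scope per agent and per context at call time.
ALLOWED_TOOLS = {"web_search", "file_reader"}

class ToolCallBlocked(Exception):
    pass

def gated(tool_name: str):
    """Intercept a tool call and block it unless the tool is in scope."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if tool_name not in ALLOWED_TOOLS:
                raise ToolCallBlocked(f"{tool_name} is outside granted scope")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@gated("shell_exec")
def run_shell(cmd: str) -> str:
    return f"would run: {cmd}"

try:
    run_shell("rm -rf /")
except ToolCallBlocked as e:
    print(f"BLOCKED: {e}")  # prints: BLOCKED: shell_exec is outside granted scope
```

The key property is that the block happens before the underlying function ever runs; logging the violation after the fact would be monitoring, not enforcement.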

5-Minute Quickstart

Get AgentCop running against your agent code in under five minutes. No account required for local scanning.

terminal
$ pip install agentcop
Collecting agentcop
Downloading agentcop-0.9.1-py3-none-any.whl (148 kB)
Successfully installed agentcop-0.9.1
 
$ agentcop scan my_agent.py
AgentCop Scanner v0.9.1
Scanning: my_agent.py
[HIGH] Unrestricted shell execution via subprocess.run
[MED] No permission scope defined for file_writer tool
[INFO] 2 issues found — trust score: 61/100
 
$ agentcop serve # starts API on :8000
AgentCop API listening on http://0.0.0.0:8000
Docs: http://0.0.0.0:8000/docs

You can also scan agents directly via the REST API. This is useful for CI/CD pipelines and for integrating with existing deployment workflows:

import httpx
from pathlib import Path

result = httpx.post(
    "https://api.agentcop.live/api/scan",
    json={
        "code": Path("my_agent.py").read_text(),
        "description": "LangChain ReAct agent with web search",
    },
    timeout=30.0,
).json()

print(f"Score: {result['trust_score']}/100")
print(f"Issues: {result['total_issues']}")
for issue in result["issues"]:
    print(f"  [{issue['severity']}] {issue['type']}: {issue['description']}")

The scanner returns a structured JSON response with a trust score from 0–100, a full issue list with severities, types, and descriptions, and CWE/OWASP references for every finding.
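A typical CI integration gates deployment on that response. The sketch below assumes a response shaped like the fields described above; the threshold and blocking policy are illustrative, not a built-in AgentCop default.

```python
def should_block(result: dict, min_score: int = 70) -> bool:
    """Block deployment on a low trust score or any HIGH-severity finding."""
    has_high = any(i["severity"] == "HIGH" for i in result["issues"])
    return result["trust_score"] < min_score or has_high

# Example response shaped like the quickstart scan of my_agent.py.
result = {
    "trust_score": 61,
    "total_issues": 2,
    "issues": [
        {"severity": "HIGH", "type": "unrestricted_exec",
         "description": "Unrestricted shell execution via subprocess.run"},
        {"severity": "MED", "type": "missing_scope",
         "description": "No permission scope defined for file_writer tool"},
    ],
}

if should_block(result):
    print(f"Deploy blocked: trust score {result['trust_score']}/100")
```

In a pipeline, the `should_block` result would translate into a non-zero exit code so the deploy step fails.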

Architecture Overview

AgentCop operates in three distinct layers. Each layer is independently useful, but they are designed to work together as a full security lifecycle.

LAYER 1 — PRE-DEPLOY (scanner.py)

Agent source code
 └──▶ AST parser extracts imports, tool registrations, exec patterns
 └──▶ Rule engine checks against OWASP LLM Top 10 + CVE database
 └──▶ Trust score computed (0–100) + issue list generated
 └──▶ Pass / Warn / Block

LAYER 2 — RUNTIME (monitor.py)

Agent executing in production
 └──▶ Behavioral baseline established per agent identity
 └──▶ Tool call patterns compared against baseline
 └──▶ Anomalies flagged, alerts fired, audit log written
 └──▶ Alert / Throttle / Escalate

LAYER 3 — EXECUTION GATE (gate.py)

Every tool call, every time
 └──▶ Tool call intercepted before execution
 └──▶ Checked against permission scope for this agent + context
 └──▶ Requires approval if outside granted scope
 └──▶ Allow / Block / Request Approval

Next Steps

🤖
Meet AgentBob

AgentBob is our cautionary-tale agent: a poorly written LangChain ReAct agent who appears throughout these docs to demonstrate what not to do. He has no execution controls, no permission layer, and will happily comply with whatever a malicious prompt tells him. Every security concept in these docs is illustrated with AgentBob as patient zero.

Ready to go deeper? Here is the recommended reading order: