Securing Moltbook Agents
Moltbook is a notebook-style agent framework. This guide covers AgentCop integration and the specific security patterns to watch for.
Moltbook security model
Moltbook's notebook execution model creates security challenges that differ from standard agent frameworks. Understanding how cells interact is essential before adding controls.
- Notebook agents execute cells sequentially — similar to Jupyter. Each cell's outputs become inputs available to later cells.
- Each cell can be Python, shell, or a tool call — the execution model is heterogeneous, which increases the attack surface compared to pure-Python agents.
- Shared state is the primary risk — a compromised cell can influence subsequent cells via shared state. This is different from a single agent call; the injection persists through the notebook run.
Scanning Moltbook notebooks
Export or zip your notebook and submit it to the scanner. The scanner inspects all cells for injection patterns, exec calls, and shared mutable state vulnerabilities.
# Scan a .moltbook file or export as Python first
import httpx
result = httpx.post("https://api.agentcop.live/api/scan/zip",
files={"file": open("my_notebook.zip", "rb")},
data={"description": "Moltbook data processing agent"}
).json()
Common Moltbook issues
The most dangerous pattern in Moltbook is the combination of unvalidated input stored to shared state, then that state used in an LLM prompt, then the LLM output executed. Each step compounds the previous one.
# Issue 1: Shared mutable state between cells
shared_memory = {} # Cells modify this — injection can persist across cells
# Cell 1 — processes user input
shared_memory["last_query"] = user_input # LLM01: unvalidated input stored
# Cell 3 — uses shared state in LLM call
prompt = f"Based on {shared_memory['last_query']}, generate code" # Injection propagates
result = llm.invoke(prompt)
exec(result) # LLM02: executes generated code
Safe Moltbook pattern
Replace mutable shared state with immutable function parameters. Validate input at the boundary. Never execute LLM output — parse or display it as text instead.
# Immutable cell inputs — pass parameters, don't share mutable state
def process_cell(query: str, llm) -> str:
# Validate input
if len(query) > 500 or any(c in query for c in ['`', '$', ';']):
raise ValueError("Query contains potentially dangerous characters")
# Parameterized prompt
template = "Summarize the following query in plain English: {query}"
return llm.invoke(template.format(query=query))
# Never exec() LLM output in notebooks
result_text = process_cell(user_query, llm)
# parse/display result_text — don't exec() it
AgentCop scan of Moltbook via CLI
If you prefer the CLI workflow, export the notebook to Python first and scan the exported file directly.
# Export notebook as Python and scan
moltbook export my_notebook.moltbook --format python > exported.py
agentcop scan exported.py --output json