> 🚧 **Developer Preview — v0.1.0**
>
> AgentSentinel is functional, installable scaffolding. The Python and TypeScript SDKs work as documented below. Features marked *roadmap* are not yet implemented. Follow the repo for progress.
## Installation

```bash
# Requires Python 3.10+
pip install agentsentinel
```

The Python SDK has zero runtime dependencies — it uses only the Python standard library. The TypeScript SDK likewise has no runtime dependencies and targets ES2020 / Node.js 18+.

> **Note:** PyPI and npm packages are not yet published. In the meantime, install directly from the repository:

```bash
pip install "git+https://github.com/ordocaelum/agentsentinel-landing.git#subdirectory=python"
```
## Quick Start

```python
from agentsentinel import AgentGuard, AgentPolicy, ApprovalRequiredError, BudgetExceededError

# 1. Define a policy
policy = AgentPolicy(
    daily_budget=10.0,
    hourly_budget=2.0,
    require_approval=["send_email", "delete_*"],
    rate_limits={"search_web": "10/min"},
    audit_log=True,
)

# 2. Create a guard
guard = AgentGuard(policy=policy)

# 3. Protect your tools
@guard.protect(tool_name="search_web", cost=0.01)
def search_web(query: str) -> str:
    return f"Results for: {query}"

@guard.protect(tool_name="send_email")
def send_email(to: str, subject: str, body: str):
    print(f"Sending to {to}: {subject}")

# 4. Use your tools — guardrails are applied automatically
result = search_web("AI safety")  # ✓ allowed

try:
    send_email("user@example.com", "Hi", "Test")
except ApprovalRequiredError as e:
    print(f"Blocked: {e}")  # ⏸ requires approval
```
## Examples

The `examples/` directory contains runnable demos for each SDK.

### Python quickstart (`examples/python_quickstart.py`)

Demonstrates all features: budgets, approvals, rate limits, and audit logging.

```bash
cd python && pip install -e .
python ../examples/python_quickstart.py
```

### TypeScript quickstart (`examples/typescript_quickstart.ts`)

A parallel TypeScript demo covering the same features as the Python example.

```bash
cd typescript && npm install && npm run build
npx ts-node ../examples/typescript_quickstart.ts
```
## API Reference

### AgentPolicy

Configuration dataclass (Python) / class (TypeScript) for all safety controls. Pass it to `AgentGuard`.

| Parameter | Type | Description |
|---|---|---|
| `daily_budget` | `float` | Max cumulative cost (USD) per day. Default: unlimited. |
| `hourly_budget` | `float` | Max cumulative cost (USD) per rolling hour. Default: unlimited. |
| `require_approval` | `list[str]` | Tool name patterns requiring human approval. Supports fnmatch wildcards like `delete_*`. Default: empty. |
| `rate_limits` | `dict[str, str]` | Per-tool rate limit strings. Keys support wildcards. Values: `"10/min"`, `"100/hour"`. |
| `audit_log` | `bool` | Enable audit logging. Default: `True`. |
| `alert_channel` | `str` | Alert destination. Currently: `"console"`. Slack/webhook on roadmap. Default: `"console"`. |
| `cost_estimator` | `callable \| None` | Optional `(tool_name, kwargs) → float` function. Used when no explicit `cost=` is provided. |
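The wildcard patterns used in `require_approval` and in `rate_limits` keys follow Python's standard `fnmatch` rules, so you can sanity-check how a pattern will match before deploying a policy. A small stdlib sketch, independent of AgentSentinel itself (`matches_any` is a hypothetical helper for illustration):

```python
from fnmatch import fnmatch

def matches_any(tool_name: str, patterns: list[str]) -> bool:
    """True if the tool name matches at least one fnmatch-style pattern."""
    return any(fnmatch(tool_name, p) for p in patterns)

# Patterns as they would appear in a policy's require_approval list
patterns = ["send_email", "delete_*"]

print(matches_any("delete_user", patterns))  # True  ("delete_*" matches)
print(matches_any("send_email", patterns))   # True  (exact match)
print(matches_any("search_web", patterns))   # False (no pattern matches)
```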
### AgentGuard

Main guard class. Wraps tools with all policy enforcement.

#### `AgentGuard(policy, approval_handler=None, audit_logger=None)`

Construct a guard. Optionally supply an `ApprovalHandler` and/or `AuditLogger`. Defaults to `DenyAllApprover` and `ConsoleAuditSink`.

#### `guard.protect(func=None, *, tool_name=None, cost=None)`

Decorator that enforces all policy rules. Supports three forms:

```python
# Form 1: plain decorator (tool_name = function name)
@guard.protect
def my_tool(): ...

# Form 2: with options
@guard.protect(tool_name="my_tool", cost=0.05)
def my_tool(): ...

# Form 3: wrap an existing function
protected = guard.protect(existing_fn, tool_name="existing_fn")
```

#### `guard.daily_spent` / `guard.hourly_spent`

Read-only properties returning cumulative cost (USD) for the current day / hour.

#### `guard.reset_costs()`

Reset all cost accumulators to zero. Useful in tests.
### Errors

All errors inherit from `AgentSentinelError`.

#### BudgetExceededError

Raised when a tool call would exceed the daily or hourly budget. Has `.budget` and `.spent` attributes.

#### ApprovalRequiredError

Raised when a tool matches a `require_approval` pattern and the approval handler denies or raises. Has `.tool_name`.

#### RateLimitExceededError

Raised when a tool exceeds its configured rate limit. Has `.tool_name` and `.limit`.
### Audit

Every invocation produces an `AuditEvent` broadcast to all registered `AuditSink` instances.

#### AuditEvent

Fields: `timestamp`, `tool_name`, `status`, `cost`, `decision`, `metadata`.

Decision values: `allowed` · `blocked_budget` · `blocked_rate_limit` · `approval_required` · `approved` · `error`

#### ConsoleAuditSink

Prints a one-line summary of each event to stdout. The default sink.

#### InMemoryAuditSink

Accumulates events in an `.events` list. Ideal for tests.

```python
sink = InMemoryAuditSink()
logger = AuditLogger(sinks=[sink])
guard = AgentGuard(policy=policy, audit_logger=logger)

# ... run some tools ...

for event in sink.events:
    print(event.tool_name, event.decision)
```

#### AuditLogger

Manages a list of sinks. Use `.add_sink()` / `.remove_sink()` to configure dynamically. Implement `AuditSink` for custom destinations (file, database, HTTP).
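A custom destination is a small class. The `AuditSink` method name is not shown in this reference, so the `handle(event)` method below is an assumption made for illustration — mirror whatever the real `AuditSink` base class defines. A self-contained JSON-lines sketch:

```python
import json
from pathlib import Path

class JsonlAuditSink:
    """Append each audit event to a JSON Lines file.

    NOTE: the method name `handle` is an assumption for illustration;
    match the actual AuditSink interface when implementing for real.
    """

    def __init__(self, path: str):
        self.path = Path(path)

    def handle(self, event) -> None:
        # Works with any event object exposing the documented fields.
        record = {
            "timestamp": str(getattr(event, "timestamp", "")),
            "tool_name": event.tool_name,
            "decision": event.decision,
            "cost": getattr(event, "cost", None),
        }
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
```

Register it alongside the console sink, e.g. `AuditLogger(sinks=[JsonlAuditSink("audit.jsonl")])` or via `.add_sink()`.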
### Approval

Approval handlers decide whether a tool matching `require_approval` may proceed.

#### DenyAllApprover (default)

Raises `ApprovalRequiredError` for every request. Use this in production until a real approval channel is configured.

#### `InMemoryApprover(approved_tools=set())`

Pre-approves specific tool names. Call `.approve(name)` / `.revoke(name)` dynamically.

```python
approver = InMemoryApprover(approved_tools={"send_email"})
guard = AgentGuard(policy=policy, approval_handler=approver)
```

#### ApprovalHandler (interface)

Implement `request_approval(tool_name, **kwargs) → bool` to add Slack/email/webhook approval flows.
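The interface is small enough to sketch in a few lines. The handler below auto-approves tools whose names look read-only and denies everything else — a hypothetical policy, not part of the library, shown only to illustrate the `request_approval(tool_name, **kwargs) → bool` shape described above:

```python
from fnmatch import fnmatch

class ReadOnlyApprover:
    """Approve tools whose names look read-only; deny anything else.

    Illustrative only — a real handler would ping Slack, send an
    email, or call a webhook here and return the human's decision.
    """

    READ_ONLY_PATTERNS = ("get_*", "search_*", "list_*")

    def request_approval(self, tool_name: str, **kwargs) -> bool:
        # True = allow the call to proceed; False = treat as denied.
        return any(fnmatch(tool_name, p) for p in self.READ_ONLY_PATTERNS)
```

Pass an instance to the guard: `AgentGuard(policy=policy, approval_handler=ReadOnlyApprover())`.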
### RateLimiter

Sliding-window per-tool rate limiter. Configured via `AgentPolicy.rate_limits`; used automatically by `AgentGuard`.

Rate limit string format:

- `"10/min"` — 10 calls per minute
- `"100/hour"` — 100 calls per hour
- `"5/sec"` — 5 calls per second
- `"*"` key — applies to all tools not matched by a more specific pattern
## Framework Integrations

AgentSentinel ships first-class integrations for the major AI agent frameworks. None of the integrations require the target framework to be installed at import time — the dependency is only needed when you actually call the integration.
### LangChain

Wraps every tool in a LangChain `AgentExecutor`, or a bare list of tools.

```python
from agentsentinel import AgentGuard, AgentPolicy
from agentsentinel.integrations.langchain import protect_langchain_agent

policy = AgentPolicy(daily_budget=5.0, require_approval=["send_email"])
guard = AgentGuard(policy=policy)

# `executor` is your existing LangChain AgentExecutor
executor = protect_langchain_agent(executor, guard)
executor.invoke({"input": "..."})
```
### AutoGen

Wraps AutoGen `function_map` dicts and individual callables.

```python
from agentsentinel import AgentGuard, AgentPolicy
from agentsentinel.integrations.autogen import AutoGenGuard

policy = AgentPolicy(daily_budget=5.0)
ag_guard = AutoGenGuard(AgentGuard(policy))

# Protect a function_map (run_sql and send_email are your own callables)
safe_map = ag_guard.protect_function_map({
    "run_sql": run_sql,
    "send_email": send_email,
})
```
### CrewAI

Protect an entire Crew, or wrap individual tools with the `@crewai_guard.tool` decorator.

```python
from agentsentinel import AgentGuard, AgentPolicy
from agentsentinel.integrations.crewai import CrewAIGuard, protect_crew

policy = AgentPolicy(daily_budget=5.0, require_approval=["send_email"])
guard = AgentGuard(policy)
crewai_guard = CrewAIGuard(guard)

# Option 1: decorator
@crewai_guard.tool(cost=0.01)
def search_web(query: str) -> str:
    return search(query)

# Option 2: protect a whole Crew
protected_crew = protect_crew(crew, guard=guard)
result = protected_crew.kickoff()
```
### LlamaIndex

Protect LlamaIndex agents, `FunctionTool` objects, and `QueryEngine` instances.

```python
from agentsentinel import AgentGuard, AgentPolicy
from agentsentinel.integrations.llamaindex import LlamaIndexGuard, protect_query_engine

policy = AgentPolicy(daily_budget=10.0, model_budgets={"gpt-4o": 5.0})
guard = AgentGuard(policy)
llama_guard = LlamaIndexGuard(guard)

@llama_guard.tool(model="gpt-4o", cost=0.02)
def query_knowledge_base(query: str) -> str:
    return kb.query(query)

# Wrap a QueryEngine
protected_engine = protect_query_engine(engine, guard=guard)
```
### OpenAI Assistants API

Wrap function maps used in the OpenAI Assistants API run loop with full policy enforcement.

```python
from agentsentinel import AgentGuard, AgentPolicy
from agentsentinel.integrations.openai_assistants import protect_function_map

policy = AgentPolicy(daily_budget=20.0, require_approval=["send_email"])
guard = AgentGuard(policy)

protected = protect_function_map(
    {"get_weather": get_weather, "send_email": send_email},
    guard=guard,
    default_model="gpt-4o",
)

# In your run loop:
result = protected[fn_name](**fn_args)
```
### Anthropic Claude Tools

Protect `tool_use` handler maps for Anthropic's Claude model family.

```python
from agentsentinel import AgentGuard, AgentPolicy
from agentsentinel.integrations.anthropic_tools import protect_tool_handlers

policy = AgentPolicy(daily_budget=15.0, model_budgets={"claude-3-5-sonnet": 10.0})
guard = AgentGuard(policy)

handlers = protect_tool_handlers(
    {"get_weather": get_weather, "search_web": search_web},
    guard=guard,
    model="claude-3-5-sonnet",
)

# In your message loop:
for block in response.content:
    if block.type == "tool_use":
        result = handlers[block.name](**block.input)
```
| Framework | Guard class | One-liner |
|---|---|---|
| LangChain | `LangChainGuard` | `protect_langchain_agent(executor, guard)` |
| AutoGen | `AutoGenGuard` | `protect_function_map(fn_map, guard)` |
| CrewAI | `CrewAIGuard` | `protect_crew(crew, guard=guard)` |
| LlamaIndex | `LlamaIndexGuard` | `protect_agent(agent, guard=guard)` |
| OpenAI Assistants | `OpenAIAssistantsGuard` | `protect_function_map(fns, guard=guard)` |
| Anthropic Tools | `AnthropicToolsGuard` | `protect_tool_handlers(handlers, guard=guard)` |
## What's Next

### v0.2 — Adapters

- Slack approval handler
- Webhook / HTTP sink
- LangChain tool adapter

### v0.3 — Cost tracking

- Auto token-count estimation (OpenAI, Anthropic)
- UTC midnight daily counter reset
- Persistent file audit sink

### v0.4 — Frameworks

- AutoGen function wrapper
- CrewAI task guard
- LlamaIndex tool wrapper

### v1.0 — Dashboard

- Local audit review UI
- PyPI + npm package publication
- Self-hosted server option

Want to influence the roadmap? Drop us a line or open a discussion on GitHub.