AgentSentinel wraps any AI agent with spend limits, human-in-the-loop approvals, security controls, and real-time audit logging — in under 60 seconds, with no infrastructure changes.
📦
Open Source SDK
MIT license — read, fork, and contribute
🔌
Framework Agnostic
Wraps any Python or TypeScript function
🚀
Zero Infrastructure
Pure SDK — no proxies, sidecars, or cloud required
Sound Familiar?
These aren't edge cases. They happen to every team running AI agents in production.
A retry loop ran for 6 hours while you slept. You woke up to a $5,247 OpenAI invoice — your entire month's allocation, gone.
Your customer-facing bot sent 47 Slack messages at 3am, spamming your entire org — including your CEO — with half-formed drafts.
You gave an agent database access to "clean up old records." It deleted 3 months of production data. No backup. No undo.
Core Features
Wrap your agent tools with AgentSentinel and get spend controls, approval gates, and a full audit trail instantly.
Set daily and hourly budgets per agent or per workspace. When a limit is hit, the agent auto-pauses — or alerts you and keeps running at a reduced rate. Your call.
Define action patterns that require a human to approve before execution. Get a Slack or email notification with full context — approve or deny in one click.
Every action, every cost, every decision — timestamped and stored. Export for compliance, review anomalies in the dashboard, or stream to your own data warehouse.
Cap how many times a tool can fire per minute, hour, or session. Automatic anomaly detection flags unusual spikes before they spiral — with a Slack ping, not a surprise bill.
Built for agents with real tool access — shell commands, file system, API calls. Permanently block catastrophic tools, auto-redact API keys and passwords from audit logs, and enable sandbox_mode for untrusted agent code.
Security
OpenClaw agents wield real power — shell execution, file I/O, API calls. AgentSentinel provides layered, overlapping controls so no single failure can cause catastrophic damage. Full security details →
Agents only access explicitly approved tools. Wildcard patterns (delete_*, write_file_*) give granular control. Everything else is default-deny.
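Default-deny with wildcard patterns maps naturally onto fnmatch-style matching. A minimal sketch of the idea — `is_tool_allowed` is a hypothetical helper, not the SDK's API:

```python
from fnmatch import fnmatch

def is_tool_allowed(tool_name: str, allowed_patterns: list[str]) -> bool:
    """Default-deny: a tool runs only if it matches an approved pattern."""
    return any(fnmatch(tool_name, pattern) for pattern in allowed_patterns)

allowed = ["search_web", "read_file_*"]
print(is_tool_allowed("read_file_config", allowed))  # True  — matches wildcard
print(is_tool_allowed("delete_user", allowed))       # False — denied by default
```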
Multiple independent layers: budget limits + rate limits + approval gates + audit logs. No single point of failure. Graceful degradation when any limit is hit.
Complete trail of every tool invocation — timestamps, parameters, costs, decisions. Exportable for compliance and forensics via the pluggable AuditSink interface.
SDK never logs sensitive parameters by default. Configurable redaction patterns scrub API keys, passwords, and bearer tokens from all audit output before they touch any sink.
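Redaction of this kind amounts to regex substitution applied before an event reaches any sink. A minimal sketch; the patterns below are illustrative defaults, not the SDK's actual redact_patterns:

```python
import re

# Illustrative patterns: keep the key name, scrub the secret value.
DEFAULT_REDACT_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key\s*[=:]\s*)\S+"),
    re.compile(r"(?i)(password\s*[=:]\s*)\S+"),
    re.compile(r"(?i)(bearer\s+)\S+"),
]

def redact(text: str, patterns=DEFAULT_REDACT_PATTERNS) -> str:
    """Replace secret values with a placeholder before logging."""
    for pattern in patterns:
        text = pattern.sub(r"\1[REDACTED]", text)
    return text

print(redact("api_key=sk-abc123 password: hunter2"))
# api_key=[REDACTED] password: [REDACTED]
```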
Per-agent budget isolation and per-session rate limiting. Each guard instance is independent — no cross-agent data leakage or shared state between concurrent agent runs.
See the full OpenClaw integration guide — protecting shell execution, file writes, and API calls with sandbox mode enabled.
Quick Start
Works with OpenAI, Anthropic, LangChain, AutoGen, CrewAI, OpenClaw, and any Python or Node.js agent.
1 · Install
pip install agentsentinel
npm install @agentsentinel/sdk
2 · What you get
```python
from agentsentinel import AgentGuard, AgentPolicy

# 1. Define your safety policy
policy = AgentPolicy(
    daily_budget=10.00,   # hard stop at $10/day
    hourly_budget=2.00,   # never more than $2/hour
    require_approval=[
        "delete_*",       # any destructive action
        "send_email",     # outbound comms
        "execute_sql",    # database writes
    ],
    rate_limits={"search_web": "10/min"},
    audit_log=True,
)

# 2. Create a guard from your policy
guard = AgentGuard(policy=policy)

# 3. Decorate any tool that needs protection
@guard.protect(tool_name="send_email")
def send_email(to, subject, body):
    # ⏸ Raises ApprovalRequiredError —
    # swap in InMemoryApprover for tests.
    email_client.send(to, subject, body)
```
Why AgentSentinel?
Every team running AI agents in production eventually discovers these problems the hard way. AgentSentinel is designed to prevent them.
Set daily and hourly limits as dataclass fields. When a limit is hit the SDK raises BudgetExceededError and stops execution cleanly — before the invoice arrives.
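The mechanism behind this is straightforward to picture: accumulate per-call costs and refuse any call that would cross the cap. The sketch below is illustrative, not the SDK's implementation — `BudgetTracker` is a hypothetical name, and `BudgetExceededError` is redefined locally to keep the example self-contained:

```python
import functools

class BudgetExceededError(Exception):
    """Local stand-in for the SDK's exception of the same name."""

class BudgetTracker:
    """Accumulate per-call costs and refuse to exceed a daily cap."""

    def __init__(self, daily_budget: float):
        self.daily_budget = daily_budget
        self.spent = 0.0

    def protect(self, cost: float):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                # Refuse *before* any spend occurs.
                if self.spent + cost > self.daily_budget:
                    raise BudgetExceededError(
                        f"${self.spent + cost:.2f} would exceed "
                        f"${self.daily_budget:.2f}/day"
                    )
                self.spent += cost
                return fn(*args, **kwargs)
            return wrapper
        return decorator

tracker = BudgetTracker(daily_budget=10.00)

@tracker.protect(cost=4.00)
def call_model(prompt: str) -> str:
    return f"response to {prompt!r}"

call_model("hi")   # ok — $4.00 spent
call_model("hi")   # ok — $8.00 spent
# A third call would raise BudgetExceededError before any money is spent.
```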
Declare which tool names require human sign-off using exact names or delete_* wildcards. Ships with DenyAllApprover and InMemoryApprover. Extend the interface for Slack / email in future releases.
Every invocation produces a typed AuditEvent (timestamp, tool name, decision, cost, status). Ships with ConsoleAuditSink and InMemoryAuditSink. Add your own sink for persistence.
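A typed event plus a pluggable sink is a small amount of code. A sketch under the assumption that the fields mirror the ones listed above (the exact dataclass shape and sink method name are illustrative, not the SDK's definitions):

```python
from dataclasses import dataclass, field
import time

@dataclass
class AuditEvent:
    """Illustrative shape — field names mirror those listed in the docs."""
    tool_name: str
    decision: str          # e.g. "allowed", "denied", "approval_required"
    cost: float
    status: str            # e.g. "success", "error", "pending"
    timestamp: float = field(default_factory=time.time)

class InMemoryAuditSink:
    """Collect events in a list — handy for tests and local review."""

    def __init__(self):
        self.events: list[AuditEvent] = []

    def write(self, event: AuditEvent) -> None:
        self.events.append(event)

sink = InMemoryAuditSink()
sink.write(AuditEvent("send_email", "approval_required", 0.0, "pending"))
print(len(sink.events))  # 1
```

A persistence sink would implement the same `write` method but append to a file or forward to a warehouse instead of a list.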
🎉 v1.0 — Now Available on PyPI & npm
AgentSentinel v1.0 is production-ready — install via pip install agentsentinel or npm install @agentsentinel/sdk.
Follow the repo for updates.
Pricing
Start free, scale as you grow
FAQ
The @guard.protect decorator works with any Python callable, and the TypeScript guard.protect(fn) wrapper works with any function. First-class LangChain, AutoGen, and CrewAI adapters are on the roadmap.
Costs are tracked via the cost= parameter you pass to @guard.protect, or by a cost_estimator function you provide. Automatic token-count-based estimation (reading from OpenAI / Anthropic API responses) is on the roadmap.
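For variable-cost tools, a cost_estimator lets you price each call from its inputs. A hypothetical example — the function name, the per-unit prices, and the assumption that the estimator receives the tool's own arguments are all illustrative:

```python
def estimate_search_cost(query: str) -> float:
    """Hypothetical cost_estimator: price a search call by query length.

    The signature (receiving the tool's call arguments) is an assumption;
    the fees below are made-up numbers for illustration.
    """
    base_fee = 0.002       # flat per-call fee, in dollars
    per_char = 0.00001     # marginal cost per query character
    return base_fee + per_char * len(query)

print(round(estimate_search_cost("agent safety"), 5))  # 0.00212
```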
When a budget limit is hit, the SDK raises BudgetExceededError and stops execution cleanly. You can configure it to: (a) pause and wait for a human to approve resumption, (b) send an alert and throttle to a lower rate, or (c) hard-stop and page your on-call engineer. All modes produce a full audit event.
Audit events flow through the pluggable AuditSink interface, and you can attach custom context via the metadata field on an AuditEvent. The SDK is open source — you can read exactly what is collected.
Configure a SecurityConfig with blocked_tools (a permanent kill-list) and sensitive_tools (tools that always require approval), and set sandbox_mode=True. See the OpenClaw integration example for a complete production-ready setup.
Combine the require_approval list with an ApprovalHandler that requires human sign-off, so any suspicious call is human-reviewed before it runs.
Audit data goes only to the AuditSink instances you configure — in-memory, local file, or your own infrastructure. Parameter values are redacted by default using configurable regex patterns in SecurityConfig.redact_patterns.
Install the SDK, wrap your tools, and sleep a little better at night.