v1.0 — Now Available

Your AI agents work.
Until they cost you $3k overnight.

AgentSentinel wraps any AI agent with spend limits, human-in-the-loop approvals, security controls, and real-time audit logging — in under 60 seconds, with no infrastructure changes.

No infrastructure changes · OpenAI · Anthropic · Any LLM · Open source — MIT license · Zero external dependencies · Designed for OpenClaw agents

📦

Open Source SDK

MIT license — read, fork, and contribute

🔌

Framework Agnostic

Wraps any Python or TypeScript function

🚀

Zero Infrastructure

Pure SDK — no proxies, sidecars, or cloud required

Sound Familiar?

Has your agent ever…

These aren't edge cases. They happen to every team running AI agents in production.

💸

Burned through your budget overnight

A retry loop ran for 6 hours while you slept. You woke up to a $5,247 OpenAI invoice — your entire month's allocation, gone.

"We burned our entire month's budget in one night."
😱

Embarrassed you in front of everyone

Your customer-facing bot sent 47 Slack messages at 3am, spamming your entire org — including your CEO — with half-formed drafts.

"It was not a fun Monday morning stand-up."
🔥

Did something irreversible

You gave an agent database access to "clean up old records." It deleted 3 months of production data. No backup. No undo.

"I had to call every customer to explain what happened."

Core Features

One wrapper. Total control.

Wrap your agent tools with AgentSentinel and get spend controls, approval gates, and a full audit trail instantly.

Hard Spend Limits

Set daily and hourly budgets per agent or per workspace. When a limit is hit, the agent auto-pauses — or alerts you and keeps running at a reduced rate. Your call.

policy = AgentPolicy(
  daily_budget=10.00,   # max $10/day
  hourly_budget=2.00,   # max $2/hour
  audit_log=True,       # log all calls
)

Human-in-the-Loop Approvals

Define action patterns that require a human to approve before execution. Get a Slack or email notification with full context — approve or deny in one click.

policy = AgentPolicy(
  require_approval=[
    "delete_*",      # any delete
    "send_email",    # outbound comms
    "execute_sql",   # DB writes
  ]
)

Full Audit Trail

Every action, every cost, every decision — timestamped and stored. Export for compliance, review anomalies in the dashboard, or stream to your own data warehouse.

policy = AgentPolicy(
  audit_log=True,
  alert_channel="console"
)
# Extend: add your own AuditSink
guard.audit_logger.add_sink(my_sink)

Rate Limiting & Anomaly Detection

Cap how many times a tool can fire per minute, hour, or session. Automatic anomaly detection flags unusual spikes before they spiral — with a Slack ping, not a surprise bill.

policy = AgentPolicy(
  rate_limits={
    "search_web": "10/min",
    "*": "100/hour",
  }
)

Security-First Design

OpenClaw Ready

Built for agents with real tool access — shell commands, file system, API calls. Permanently block catastrophic tools, auto-redact API keys and passwords from audit logs, and enable sandbox_mode for untrusted agent code.

security = SecurityConfig(
  blocked_tools=["rm_rf", "drop_database"], # hard kill list
  sensitive_tools=["execute_shell", "delete_*"], # always approve
)
policy = AgentPolicy(
  security=security, sandbox_mode=True
)

Security

Defence in depth for production agents

OpenClaw agents wield real power — shell execution, file I/O, API calls. AgentSentinel provides layered, overlapping controls so no single failure can cause catastrophic damage. Full security details →

Principle of Least Privilege

Agents only access explicitly approved tools. Wildcard patterns (delete_*, write_file_*) give granular control. Everything else is default-deny.
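As an illustration of how default-deny wildcard matching behaves (independent of the SDK's internals — the allow-list below is hypothetical), Python's standard fnmatch handles the same delete_* / write_file_* pattern style:

```python
from fnmatch import fnmatch

# Hypothetical allow-list using the same wildcard style as the policy patterns.
ALLOWED = ["read_*", "search_web", "write_file_tmp_*"]

def is_allowed(tool_name: str) -> bool:
    """Default-deny: a tool runs only if it matches an approved pattern."""
    return any(fnmatch(tool_name, pattern) for pattern in ALLOWED)

print(is_allowed("read_config"))   # → True: matches read_*
print(is_allowed("delete_user"))   # → False: no pattern matches, so denied
```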

Defence in Depth

Multiple independent layers: budget limits + rate limits + approval gates + audit logs. No single point of failure. Graceful degradation when any limit is hit.

Audit Everything

Complete trail of every tool invocation — timestamps, parameters, costs, decisions. Exportable for compliance and forensics via the pluggable AuditSink interface.
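A custom sink only has to accept events and persist them somewhere. As a self-contained sketch (the class and event shape here are illustrative, not the SDK's actual AuditSink interface), a JSON-lines file sink might look like:

```python
import json

class JsonlFileSink:
    """Illustrative sink: append each audit event as one JSON line."""
    def __init__(self, path: str):
        self.path = path

    def write(self, event: dict) -> None:
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")

# An event carrying the fields described above — timestamp, tool, decision, cost.
sink = JsonlFileSink("audit.jsonl")
sink.write({
    "timestamp": "2025-01-01T00:00:00Z",
    "tool": "send_email",
    "decision": "approved",
    "cost": 0.002,
})
```

A sink like this plugs into forensics tooling trivially: every line is one event, greppable and replayable.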

Secrets Protection

SDK never logs sensitive parameters by default. Configurable redaction patterns scrub API keys, passwords, and bearer tokens from all audit output before they touch any sink.
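Redaction of this kind is plain regex substitution applied before any sink sees the text. The patterns below are illustrative examples, not the SDK's defaults:

```python
import re

# Illustrative redaction patterns — tune for your own secret formats.
REDACT_PATTERNS = [
    re.compile(r"(?i)bearer\s+[a-z0-9._\-]+"),   # bearer tokens
    re.compile(r"sk-[A-Za-z0-9]{8,}"),           # OpenAI-style API keys
    re.compile(r"(?i)password\s*=\s*\S+"),       # password assignments
]

def redact(text: str) -> str:
    """Scrub secrets from text before it reaches any audit output."""
    for pattern in REDACT_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("Authorization: Bearer abc.def.ghi"))
# → Authorization: [REDACTED]
```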

Agent Isolation

Per-agent budget isolation and per-session rate limiting. Each guard instance is independent — no cross-agent data leakage or shared state between concurrent agent runs.
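The isolation property can be pictured with two independent trackers: all state lives on the instance, so one agent exhausting its budget cannot touch another's. A conceptual sketch, not the SDK's actual classes:

```python
class BudgetTracker:
    """Conceptual per-agent tracker — no shared or global state."""
    def __init__(self, daily_budget: float):
        self.daily_budget = daily_budget
        self.spent = 0.0

    def record(self, cost: float) -> None:
        if self.spent + cost > self.daily_budget:
            raise RuntimeError("daily budget exceeded")
        self.spent += cost

agent_a = BudgetTracker(daily_budget=10.00)
agent_b = BudgetTracker(daily_budget=2.00)
agent_a.record(9.00)   # agent A is near its limit...
agent_b.record(1.00)   # ...but agent B's budget is entirely unaffected
```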

OpenClaw

Built for Real Tool Access

See the full OpenClaw integration guide — protecting shell execution, file writes, and API calls with sandbox mode enabled.

View OpenClaw Example →

Quick Start

Up and running in 60 seconds

Works with OpenAI, Anthropic, LangChain, AutoGen, CrewAI, OpenClaw, and any Python or Node.js agent.

1 · Install

pip install agentsentinel
npm install @agentsentinel/sdk

2 · What you get

  • Daily & hourly budget enforcement
  • Human-in-the-loop approval gates
  • Per-tool rate limiting (sliding window)
  • Structured audit log (console + in-memory)
  • Wildcard tool-name pattern matching
  • Slack / webhook alerts (roadmap)
from agentsentinel import AgentGuard, AgentPolicy

# 1. Define your safety policy
policy = AgentPolicy(
    daily_budget=10.00,        # hard stop at $10/day
    hourly_budget=2.00,       # never more than $2/hour
    require_approval=[
        "delete_*",            # any destructive action
        "send_email",          # outbound comms
        "execute_sql",         # database writes
    ],
    rate_limits={"search_web": "10/min"},
    audit_log=True,
)

# 2. Create a guard from your policy
guard = AgentGuard(policy=policy)

# 3. Decorate any tool that needs protection
@guard.protect(tool_name="send_email")
def send_email(to, subject, body):
    # ⏸ Raises ApprovalRequiredError —
    # swap in InMemoryApprover for tests.
    email_client.send(to, subject, body)

Why AgentSentinel?

Built for the problems you'll hit in production

Every team running AI agents in production eventually discovers these problems the hard way. AgentSentinel is designed to prevent them.

💸

Budget enforcement that actually works

Set daily and hourly limits as dataclass fields. When a limit is hit the SDK raises BudgetExceededError and stops execution cleanly — before the invoice arrives.

Implemented in v1.0.0

✋

Approval gates that keep humans in the loop

Declare which tool names require human sign-off using exact names or delete_* wildcards. Ships with DenyAllApprover and InMemoryApprover. Extend the interface for Slack / email in future releases.

Implemented in v1.0.0
📋

Structured audit trail from day one

Every invocation produces a typed AuditEvent (timestamp, tool name, decision, cost, status). Ships with ConsoleAuditSink and InMemoryAuditSink. Add your own sink for persistence.

Implemented in v1.0.0

🎉 v1.0 — Now Available on PyPI & npm

AgentSentinel v1.0 is production-ready — install via pip install agentsentinel or npm install @agentsentinel/sdk. Follow the repo for updates.

Pricing

Simple, Transparent Pricing

Start free, scale as you grow

Free

$0 /month
  • 1 agent
  • 1,000 events/month
  • Basic policy controls
  • Community support
  • Dashboard
  • Integrations
Get Started

Team

$149 /month
  • 20 agents
  • 500,000 events/month
  • Multi-agent dashboard
  • Full policy editor
  • Priority support
  • Team management
Start Free Trial

Enterprise

Custom
  • Unlimited agents
  • Unlimited events
  • SSO / SAML
  • On-premise option
  • Dedicated support
  • Custom SLA
Contact Sales

FAQ

Common questions, answered

Do I need to change my infrastructure? +
No. AgentSentinel is a pure Python / Node.js SDK — a decorator layer around your existing tools. Install the package, wrap your functions, done. There are no proxies, sidecars, or infrastructure changes required. It runs inside your Lambda, Docker container, or local script as-is.
Does it work with LangChain, AutoGen, or CrewAI? +
AgentSentinel wraps at the tool / function level, making it framework-agnostic. Use the @guard.protect decorator with any Python callable or the TypeScript guard.protect(fn) wrapper. First-class LangChain, AutoGen, and CrewAI adapters are on the roadmap.
How accurate is the cost tracking? +
In v1.0.0, cost is tracked by the explicit cost= parameter you pass to @guard.protect, or by a cost_estimator function you provide. Automatic token-count-based estimation (reading from OpenAI / Anthropic API responses) is on the roadmap.
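A hand-rolled estimator for that cost_estimator hook can be as simple as a price table keyed by model. The numbers and model name below are illustrative only — check your provider's current pricing:

```python
# Rough per-1K-token prices in dollars — illustrative numbers, not current rates.
PRICES = {"gpt-4o": {"input": 0.0025, "output": 0.01}}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return an estimated dollar cost for one LLM call."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

print(round(estimate_cost("gpt-4o", input_tokens=2000, output_tokens=500), 4))
# → 0.01
```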
What happens when a budget limit is hit? +
By default, the agent raises a BudgetExceededError and stops execution cleanly. You can configure it to: (a) pause and wait for a human to approve resumption, (b) send an alert and throttle to a lower rate, or (c) hard-stop and page your on-call engineer. All modes produce a full audit event.
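Mechanically, the hard-stop mode is a pre-flight check that raises before the call executes. A self-contained sketch of the pattern (the exception class here is a local stand-in, not imported from the SDK):

```python
class BudgetExceededError(Exception):
    """Local stand-in for the SDK's budget exception."""

def guarded_call(spent: float, cost: float, daily_budget: float) -> float:
    """Pre-flight check: raise before spending, never after."""
    if spent + cost > daily_budget:
        raise BudgetExceededError(
            f"${spent + cost:.2f} would exceed ${daily_budget:.2f}/day"
        )
    return spent + cost

spent = 9.50
try:
    spent = guarded_call(spent, cost=1.00, daily_budget=10.00)
except BudgetExceededError as e:
    print(f"Agent paused: {e}")  # alert, page, or queue for human approval here
```

Because the check runs before execution, the failed call costs nothing — the audit event records the denial, not a charge.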
Can I self-host the dashboard? +
The SDK itself runs entirely in-process — no cloud component is needed. A self-hosted dashboard for reviewing audit events is on the roadmap for Enterprise partners. In the meantime, direct events to your own storage by implementing the AuditSink interface.
Do you ever see my prompts or agent data? +
By default we log metadata only: timestamps, tool names, and cost estimates. Prompt content and tool arguments are never logged unless you explicitly opt in via the metadata field on an AuditEvent. The SDK is open source — you can read exactly what is collected.
Is AgentSentinel safe for production use with powerful tools (shell, file system)? +
Yes — this is the primary use case. For agents with real tool access (shell commands, file I/O, API calls), configure a SecurityConfig with blocked_tools (permanent kill-list), sensitive_tools (always require approval), and set sandbox_mode=True. See the OpenClaw integration example for a complete production-ready setup.
How do I prevent prompt injection from bypassing controls? +
AgentSentinel enforces policy at the tool call level — not the prompt level — so prompt injection cannot bypass budget limits, rate limits, or blocked-tool checks. An injected prompt that convinces an agent to call a blocked tool will still be rejected by the guard before execution. For sensitive tools, combine the require_approval list with an ApprovalHandler that requires human sign-off so any suspicious call is human-reviewed before it runs.
What data does AgentSentinel collect or transmit? +
Nothing. The SDK runs entirely in-process; it does not phone home, transmit telemetry, or contact any external service. All audit data stays inside the AuditSink instances you configure — in-memory, local file, or your own infrastructure. Parameter values are redacted by default using configurable regex patterns in SecurityConfig.redact_patterns.
🎉 v1.0 — Now Available on PyPI & npm

Stop losing money to
runaway agents.

Install the SDK, wrap your tools, and sleep a little better at night.

No sign-up required MIT license Zero external dependencies