The firewall for AI agents.
Aegis is an open-source firewall for AI agents. It sits on the execution path between agents and tools, classifies tool calls, enforces policies before execution, supports human approval flows, and keeps a tamper-evident audit trail for later review.
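The intercept-classify-enforce loop above can be sketched as a small policy gate that the agent runtime consults before executing any tool. This is an illustrative toy, not Aegis's actual API: the names `ToolCall`, `Decision`, `enforce`, and `guarded_execute`, and the specific rules, are all hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical types for illustration; not Aegis's actual API.
class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    NEEDS_APPROVAL = "needs_approval"

@dataclass
class ToolCall:
    tool: str
    args: dict = field(default_factory=dict)

# Toy policy: block destructive shell commands outright, route file
# writes to a human, allow everything else.
def enforce(call: ToolCall) -> Decision:
    if call.tool == "shell" and "rm -rf" in call.args.get("command", ""):
        return Decision.DENY
    if call.tool == "write_file":
        return Decision.NEEDS_APPROVAL
    return Decision.ALLOW

# The runtime calls the gate first; execution only happens on ALLOW
# or on an explicit human approval.
def guarded_execute(call: ToolCall, execute, request_approval):
    decision = enforce(call)
    if decision is Decision.DENY:
        raise PermissionError(f"blocked by policy: {call.tool}")
    if decision is Decision.NEEDS_APPROVAL and not request_approval(call):
        raise PermissionError(f"approval denied: {call.tool}")
    return execute(call)
```

The key design point is that the gate sits on the execution path: the agent never invokes a tool directly, so every call is classified and logged before anything runs.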
I contributed to Aegis and feature it here because it is closely aligned with our work on agent safety, guardrails, AI auditing, and trustworthy deployment of agentic systems.
GitHub Repository | Demo
If this project is useful in your workflow, please star the GitHub repository to help more practitioners discover it.
Agent systems make high-speed tool decisions without a human in the loop by default. That creates practical risks around unsafe commands, prompt injection, sensitive file access, unintended data exfiltration, and weak auditability. Aegis is designed as a runtime control layer for these deployment-time risks.
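One of the risks listed above, weak auditability, is commonly addressed with a hash-chained log: each entry commits to the digest of the previous entry, so editing or deleting a past record breaks every digest after it. The sketch below shows that idea under assumed names (`AuditLog`, `append`, `verify`); Aegis's actual log format may differ.

```python
import hashlib
import json

GENESIS = "0" * 64  # digest used before the first entry

class AuditLog:
    """Tamper-evident append-only log via SHA-256 hash chaining."""

    def __init__(self):
        self.entries = []
        self._prev = GENESIS

    def append(self, record: dict) -> str:
        # Canonical serialization so verification re-derives the same bytes.
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": self._prev, "hash": digest})
        self._prev = digest
        return digest

    def verify(self) -> bool:
        # Re-walk the chain; any retroactive edit invalidates a digest.
        prev = GENESIS
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if hashlib.sha256((prev + payload).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A reviewer can then verify the whole trail offline: if `verify()` fails, some entry was altered after the fact.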
Aegis complements model-side safety work with runtime enforcement. It is relevant when teams need guardrails not only at the prompt layer, but also at the tool invocation and policy enforcement layers for real deployments.
A quick visual walkthrough (demo GIF) is available in the repository.