One config to rule all your AI agents: portable, effective, safer.
anywhere-agents is a maintained, opinionated configuration that follows your AI coding agents across every project, every machine, and every session. It ships shared skills, session hooks, and a small bootstrap script so behavior stays portable (it works the same in every repository), effective (curated writing rules, task routing, and skills), and safer (a PreToolUse guard blocks destructive Git and GitHub commands). It supports Claude Code and Codex today, with more agents planned.
This is the sanitized public release of the agent config I have run daily since early 2026 across research, paper writing, and development work on macOS, Windows, and Linux. Before this release, every project repeated the same setup: which skills to install, which hooks to wire up, which MCP servers to register, and which safety defaults to apply. anywhere-agents packages that setup into a single reusable configuration that stays fresh on every session.
This project spans our Developer Tools for AI and AI Safety & Security directions: it runs session-start checks, guards destructive commands with a PreToolUse hook, and routes review, figure, and writing tasks to the right skill.
GitHub Repository | Star on GitHub | Documentation | PyPI Package | npm Package | Back to Open Source
If this project is useful in your workflow, please star the GitHub repository to help more practitioners discover it.
AI coding agents are now used across many repositories by the same person or team. Without a shared
configuration, preferences drift: per-repo CLAUDE.md files fall out of sync, copy-pasted
settings diverge on every tweak, and safety defaults live only in people's heads and have to be re-explained
in every session. anywhere-agents publishes one curated configuration that any project can inherit in two
lines of setup. The maintainer improves one file; every consuming repository picks it up on the next session.
Five concrete scenarios capture what changes when a project adopts this config.
Run the installer once in the project root:
pipx run anywhere-agents # Python (zero-install with pipx)
npx anywhere-agents # Node.js (zero-install with Node 14+)
The next Claude Code session reads the shared AGENTS.md automatically and inherits every
default: writing style, Git safety, session checks, and skill routing. For Codex and other agents, start
the session by asking the agent to read @AGENTS.md. Per-project overrides live in
AGENTS.local.md, which bootstrap never touches.
Ask Claude Code: "review this." The router picks the implement-review skill, which selects a content lens (code, paper, proposal, or general) based on the staged diff. Codex reads the diff and
writes CodexReview.md with findings tagged High, Medium, and Low, plus exact
file:line references. Claude Code applies the fixes and re-stages; the loop runs until nothing
is left to flag.
The shared AGENTS.md bans roughly 40 AI-tell words (including delve,
pivotal, underscore, paramount, groundbreaking,
trailblazing, and unprecedented). It also blocks em-dashes as casual punctuation,
preserves the input format (LaTeX stays LaTeX, prose stays prose), and stops the agent from appending a tidy
summary sentence to every paragraph.
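A minimal sketch of how a banned-word check can work. This is illustrative only: the word list below is a small subset of the roughly 40 entries in the shared AGENTS.md, and the real rules live in prose, not in this hypothetical helper.

```python
import re

# Illustrative subset of the banned AI-tell vocabulary; the real list in
# the shared AGENTS.md has roughly 40 entries.
BANNED_WORDS = {
    "delve", "pivotal", "underscore", "paramount",
    "groundbreaking", "trailblazing", "unprecedented",
}

def find_ai_tells(text: str) -> list[str]:
    """Return banned words (and casual em-dashes) found in the text."""
    hits = [w for w in BANNED_WORDS
            if re.search(rf"\b{w}\w*\b", text, re.IGNORECASE)]
    if "\u2014" in text:  # em-dash used as casual punctuation
        hits.append("em-dash")
    return sorted(hits)
```

Note the `\w*` suffix in the pattern: it also catches inflected forms such as "underscores" or "delving".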
Without this config (default AI voice):
We delve into a pivotal realm, a multifaceted endeavor that underscores a paramount facet of outlier detection, paving the way for groundbreaking advances that will reimagine the trailblazing work of our predecessors and, in so doing, garner unprecedented attention in this burgeoning field.
One sentence. 43 words. Multiple banned AI-tell terms. No structure; each clause adds more filler.
With this config:
We examine outlier detection along three dimensions: coverage, interpretability, and scale. Each matters; none alone is sufficient. Prior work has addressed one or two of these in isolation; this work integrates all three.
Three sentences. 33 words. Zero banned words. Semicolons and colons in place of dashes. One idea per sentence; the last sentence actually says something about the contribution.
Every destructive Git or GitHub command (push --force, reset --hard,
gh pr merge, git branch -D, git rebase, and similar) passes through
a PreToolUse hook (guard.py) that refuses to proceed silently. Read-only operations
(status, diff, log) stay fast.
Example guard prompt (illustrative; the actual hook formats decision reasons at runtime):
[guard.py] STOP. Destructive command detected.
command: git push --force origin main
category: destructive push
This is destructive. Are you sure? (y/N)
Shell deletes (rm -rf) are gated separately through Claude Code's built-in permission prompts
configured in user/settings.json.
Most Claude Code and Codex users never touch effort level, Codex MCP config, GitHub Actions version pins, or banned AI-tell vocabulary, and so run suboptimal defaults without realizing it. This project ships the recommended default stack in one install:
- CLAUDE_CODE_EFFORT_LEVEL=max persisted across every session, merged into ~/.claude/settings.json by bootstrap.
- config.toml values (model = "gpt-5.4", model_reasoning_effort = "xhigh", service_tier = "fast"), verified by the session-start check.
- guard.py PreToolUse hook deployed to ~/.claude/hooks/.
- session_bootstrap.py SessionStart hook that runs bootstrap at the start of every session.
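The settings merge is the key step: bootstrap has to add its defaults to ~/.claude/settings.json without clobbering keys the user already set. A minimal sketch of that idea, assuming a flat "env" map inside the settings file (the real bootstrap's merge logic and file layout may differ):

```python
import json
from pathlib import Path

def merge_settings(path: Path, updates: dict) -> dict:
    """Merge shared env defaults into an existing settings file,
    preserving all unrelated keys. A sketch of what bootstrap does;
    not the actual implementation."""
    settings = json.loads(path.read_text()) if path.exists() else {}
    settings.setdefault("env", {}).update(updates)
    path.write_text(json.dumps(settings, indent=2))
    return settings
```

For example, `merge_settings(Path.home() / ".claude/settings.json", {"CLAUDE_CODE_EFFORT_LEVEL": "max"})` sets the effort level while leaving every other setting in place, which is what makes rerunning bootstrap on every session safe.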
Updates are automatic for Claude Code through the SessionStart hook: every new session runs bootstrap
before the first turn, so the shared AGENTS.md, skills, and settings stay current. For Codex
or other agents without a SessionStart hook, tell the agent in the first message of a session:
"read @AGENTS.md to run bootstrap, session checks, and task routing." To force a refresh
mid-session, run bash .agent-config/bootstrap.sh (or .ps1 on Windows).
Everything is plain Markdown, Python, and JSON. Fork the repository, swap in your own skills or writing
rules, and keep pulling upstream updates with git merge upstream/main. Groups and teams can
publish their own shared configuration the same way.