ExaAiAgent Skill
Docker-powered multi-agent offensive security testing for every LiteLLM provider
❌ Security teams struggle to coordinate multi-agent offensive testing across diverse code repositories and attack surfaces without unified AI orchestration.
✅ Users deploy containerized AI-driven penetration testing and security scanning with any LiteLLM provider in minutes.
- ✓ AI-assisted penetration testing and attack surface mapping
- ✓ Unified security scanning across repos and code with LiteLLM
- ✓ CLI and TUI interfaces for scan launch and debugging
- ✓ Multi-agent offensive workflows with provider flexibility
Install in one line

```bash
mfkvault install hleliofficiel-exaaiagent
```

Requires the MFKVault CLI.
Description
---
name: exaaiagent
description: Run, debug, maintain, or extend ExaAiAgent for AI-assisted penetration testing, attack-surface mapping, repo/code security review, and multi-agent offensive-security workflows. Use when an AI agent needs onboarding instructions for operating ExaAiAgent, when a user wants to launch scans from CLI/TUI, when ExaAiAgent itself needs maintenance, or when another agent should use ExaAiAgent with any LiteLLM-supported provider (OpenAI, Anthropic, OpenRouter, Ollama, Gemini-compatible endpoints, and other LiteLLM-backed providers).
---

# ExaAiAgent Skill

Use ExaAiAgent as a Docker-backed security testing framework powered by **LiteLLM-compatible providers**.

## Core operating rules

- Require Docker. If Docker is unavailable, runtime startup fails before scanning begins.
- Require a LiteLLM-supported model provider.
- Treat `EXAAI_LLM` as the active model selector.
- Use `LLM_API_KEY` and `LLM_API_BASE` only when the chosen provider needs them.
- Expect the first run to pull the sandbox Docker image automatically.
- Save results under `exaai_runs/<run-name>`.
- Use only on assets the operator is authorized to test.

## Installation and first scan

Install ExaAiAgent with either method:

```bash
# Method 1: pip
pip install exaai-agent

# Method 2: pipx
pipx install exaai-agent
```

Configure a LiteLLM-supported provider. ExaAiAgent is **not limited to OpenRouter**; use any provider LiteLLM supports.
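Before picking a provider, the `EXAAI_LLM` convention can be sanity-checked with a short sketch. The `check_provider_env` helper below is illustrative, not part of ExaAiAgent; it only validates the LiteLLM-style `provider/model` naming described above:

```shell
# Illustrative helper (not shipped with ExaAiAgent): verify that EXAAI_LLM
# follows LiteLLM's "provider/model" convention before launching a scan.
check_provider_env() {
  if [ -z "$EXAAI_LLM" ]; then
    echo "EXAAI_LLM is not set" >&2
    return 1
  fi
  case "$EXAAI_LLM" in
    # Split on the first slash: prefix is the provider, remainder the model.
    */*) echo "provider: ${EXAAI_LLM%%/*}, model: ${EXAAI_LLM#*/}" ;;
    *)
      echo "EXAAI_LLM should look like provider/model" >&2
      return 1
      ;;
  esac
}
```

Some providers (for example Ollama) also need `LLM_API_BASE`; extend the check per the rules above if your provider requires it.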
### OpenAI

```bash
export EXAAI_LLM="openai/gpt-5"
export LLM_API_KEY="your-openai-key"
```

### Anthropic

```bash
export EXAAI_LLM="anthropic/claude-sonnet-4-5"
export LLM_API_KEY="your-anthropic-key"
```

### OpenRouter

```bash
export EXAAI_LLM="openrouter/auto"
export LLM_API_KEY="your-openrouter-key"
export LLM_API_BASE="https://openrouter.ai/api/v1"
```

### Ollama

```bash
export EXAAI_LLM="ollama/llama3"
export LLM_API_BASE="http://localhost:11434"
```

### Any other LiteLLM-backed provider

```bash
export EXAAI_LLM="provider/model-name"
export LLM_API_KEY="provider-key-if-needed"
export LLM_API_BASE="provider-base-url-if-needed"
```

Run the first scan:

```bash
exaai --target https://your-app.com
```

## Basic usage

### Local codebase

```bash
exaai --target ./app-directory
```

### GitHub repository review

```bash
exaai --target https://github.com/org/repo
```

### Black-box web assessment

```bash
exaai --target https://your-app.com
```

### Headless mode

```bash
exaai -n --target https://your-app.com
```

### Interactive mode

```bash
exaai tui
```

## Smart auto-loading examples

ExaAiAgent can auto-resolve prompt modules when the user does not explicitly set `--prompt-modules`.
```bash
# GraphQL target
exaai --target https://api.example.com/graphql

# WebSocket target
exaai --target wss://chat.example.com/socket

# OAuth/OIDC target
exaai --target https://auth.example.com/oauth/authorize

# Recon-focused domain testing
exaai --target example.com --instruction "enumerate subdomains"
```

## Advanced usage examples

### Authenticated or grey-box testing

```bash
exaai --target https://your-app.com --instruction "Perform authenticated testing using provided credentials and identify authorization flaws"
```

### Multi-target testing

```bash
exaai -t https://github.com/org/app -t https://your-app.com
```

### Explicit modules

```bash
exaai --target https://api.example.com --prompt-modules graphql_security,waf_bypass
```

### Lightweight mode

```bash
export EXAAI_LIGHTWEIGHT_MODE=true
exaai --target https://example.com --instruction "quick security scan"
```

## Runtime expectations

- Docker is mandatory for sandbox execution.
- Tool execution is routed through the sandbox tool server.
- If Docker is unavailable, ExaAiAgent can fail before agent/tool execution begins.
- Prompt modules auto-resolve unless the operator overrides them with `--prompt-modules`.

## Diagnose common failures

### Docker failures

Check:

```bash
docker version
docker info
```

If Docker is unavailable, fix Docker before debugging LiteLLM, agents, or tool-server behavior.

### Provider or LiteLLM failures

Check:

- `EXAAI_LLM`
- `LLM_API_KEY`
- `LLM_API_BASE` when applicable
- provider/model compatibility with LiteLLM

### Tool/runtime failures

If startup succeeds but scan execution fails:

- inspect sandbox startup
- inspect tool-server health
- inspect missing system dependencies required by the selected tools
- inspect model/provider rate limits or request failures

## Maintain ExaAiAgent itself

When editing ExaAiAgent:

1. Fix runtime, CLI, TUI, and tool-server issues before adding new features.
2. Keep version strings synchronized in:
   - `pyproject.toml`
   - `exaaiagnt/interface/main.py`
   - `exaaiagnt/interface/tui.py`
   - `README.md`
3. Keep LiteLLM as the model-provider abstraction layer.
4. Prefer stronger error surfacing over silent failure.
5. Validate CI before release.

Useful checks:

```bash
pytest -q
python -m py_compile exaaiagnt/interface/main.py exaaiagnt/interface/tui.py exaaiagnt/runtime/tool_server.py
exaai --version
```

## Release checklist

Before release:

- confirm tests pass
- confirm CI is green
- confirm version strings are aligned
- confirm README and SKILL.md are updated
- confirm the Docker requirement is documented clearly
- confirm at least one real startup path was exercised

## Safety note

Only run ExaAiAgent on assets the operator is explicitly authorized to test.
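The version-alignment step in the maintenance and release sections above can be automated with a short sketch. The `check_version_sync` helper and its argument convention are illustrative, not part of the project:

```shell
# Illustrative helper (not part of ExaAiAgent): confirm that every release
# file contains the expected version string before tagging a release.
check_version_sync() {
  ver="$1"
  shift
  for f in "$@"; do
    if ! grep -q "$ver" "$f"; then
      echo "version mismatch in $f" >&2
      return 1
    fi
  done
  echo "all files report version $ver"
}

# Example invocation against the files listed in the maintenance section
# (assumes the current directory is the repository root):
# check_version_sync "1.2.3" pyproject.toml exaaiagnt/interface/main.py exaaiagnt/interface/tui.py README.md
```

A nonzero exit status makes the check easy to wire into CI as a release gate.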
Security status: Unvetted (not yet security scanned).