Any LLM
Connect any LLM. Switch per-session. No vendor lock-in. Budget models for routine tasks, premium models for hard problems.
A coding assistant, a research agent, a personal automation — same binary, different configuration. Your code. Your keys. Your agent.
Tools require Node.js or Bun. Installed automatically if needed.
No baked-in behavior. No hidden prompts. No fixed tools. Install with one command, connect your LLMs, and the agent runs on your server.
Write tools in any language. Drop them in. The agent picks them up. A tool is just a script that speaks JSON on stdin/stdout — Rust, Python, Bash, Go, anything.
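As a minimal sketch of what such a tool could look like, here is a Python script that reads one JSON request from stdin and writes one JSON response to stdout. The field names (`arguments`, `result`) are assumptions for illustration, not a documented schema.

```python
#!/usr/bin/env python3
# Illustrative tool: read one JSON request from stdin, write one JSON
# response to stdout. The "arguments"/"result" field names are assumed
# for this example, not taken from a documented schema.
import json
import sys

def main() -> None:
    request = json.load(sys.stdin)  # e.g. {"arguments": {"text": "hello world"}}
    text = request.get("arguments", {}).get("text", "")
    response = {"result": {"word_count": len(text.split())}}
    json.dump(response, sys.stdout)
    sys.stdout.write("\n")

if __name__ == "__main__":
    main()
```

Invoked from a shell, `echo '{"arguments": {"text": "hello world"}}' | ./word_count.py` would print `{"result": {"word_count": 2}}`.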
Connect any MCP server for external tooling, and define Skills as discoverable instruction sets. Both are automatically available to your agent.
Agents can spawn subagents for complex tasks — isolated sessions that inherit your workspace context. Interrupts cascade automatically.
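Conceptually, cascading interrupts means that cancelling a parent session also cancels every subagent it spawned. The sketch below models that idea with asyncio task groups; it is a conceptual illustration of the behavior described above, not the agent's actual implementation.

```python
# Conceptual sketch: cancelling the parent task cancels every subagent
# task it spawned. Illustrates cascading interrupts, nothing more.
import asyncio

async def subagent(name: str) -> None:
    try:
        await asyncio.sleep(3600)  # stand-in for a long-running subagent session
    except asyncio.CancelledError:
        print(f"{name} interrupted")
        raise

async def parent_session() -> None:
    async with asyncio.TaskGroup() as tg:
        tg.create_task(subagent("explore"))
        tg.create_task(subagent("refactor"))

async def main() -> None:
    session = asyncio.create_task(parent_session())
    await asyncio.sleep(0.1)
    session.cancel()  # interrupting the parent session...
    try:
        await session
    except asyncio.CancelledError:
        print("parent interrupted; subagents cancelled too")

asyncio.run(main())
```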
One persistent server on your machine. REST API and WebSocket under the hood. Desktop, mobile, or browser — all connect to the same agent. Works over Tailscale, VPN, or your local network.
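Because every client talks to the same local server, any script can drive the agent over plain HTTP. A hedged sketch, assuming a hypothetical port and hypothetical routes (`/sessions`, `/sessions/{id}/messages`) that stand in for whatever the actual REST API exposes:

```python
# Hedged sketch of driving the agent server over its REST API.
# The base URL, routes, and payload fields below are assumptions for
# illustration; check the real API reference for the actual ones.
import json
import urllib.request

BASE = "http://localhost:8080"  # hypothetical local server address

def post(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Start a session, then send it a prompt (hypothetical routes and fields).
session = post("/sessions", {"model": "claude-sonnet"})
reply = post(f"/sessions/{session['id']}/messages", {"content": "Summarize TODO.md"})
print(reply)
```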
Download preset bundles for coding, research, or ops. Or build from nothing — define the system prompt, pick the tools, choose the model. Your agent, your rules.
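To make "your agent, your rules" concrete, here is a purely illustrative agent definition written as a Python dictionary and saved to disk. The field names and the JSON file format are assumptions for the example, not the project's actual configuration schema.

```python
# Purely illustrative agent definition: system prompt, tools, model.
# Field names and the JSON format are assumptions, not the project's
# actual configuration schema.
import json

agent = {
    "name": "code-reviewer",
    "model": "gpt-4o",  # any connected LLM
    "system_prompt": "You review pull requests for style and correctness.",
    "tools": ["read_file", "grep", "run_tests"],  # hypothetical tool names
}

with open("code-reviewer.json", "w") as f:
    json.dump(agent, f, indent=2)
```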
Connect Claude, GPT-4o, or Gemini to your codebase. Subagents explore, refactor, and implement with full workspace isolation.
Give your agent tools to query APIs, scrape pages, and process documents. Isolated workspaces keep research contexts separate. Sessions compact via LLM summarization to stay focused.
Connect MCP servers for Kubernetes, AWS, or Terraform. Tools in Bash or Python interact with your infrastructure. Subagent orchestration handles multi-step deployment pipelines.
Create agent personalities for repetitive tasks. Skills let agents discover and follow domain-specific instruction sets. Queue sessions for batch processing.
One command to install. One command to start. Open source under Apache 2.0. No baked-in behavior. No vendor lock-in.