The terminal AI agent
Halcón connects your terminal to Claude, GPT-4o, DeepSeek, Gemini, or any local Ollama model — with 21 built-in tools, layered permissions, and a native MCP server for your IDE.
SHA-256 verified · no sudo · installs to ~/.local/bin
Works with every major AI provider
Switch providers and models with a single flag. No reconfiguration, no friction.
Models
Switch with one flag
halcon -p openai chat
halcon -p ollama -m llama3 chat
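If you switch providers often, a small wrapper keeps the flags out of your muscle memory. A minimal sketch — the `-p`/`-m` flags come from the examples above, but the `HALCON_PROVIDER`/`HALCON_MODEL` variable names and the default model are hypothetical, not real Halcón settings:

```shell
# Hypothetical wrapper around the documented -p / -m flags.
# HALCON_PROVIDER and HALCON_MODEL are illustrative names, not Halcón config.
hchat() {
  halcon -p "${HALCON_PROVIDER:-anthropic}" -m "${HALCON_MODEL:-claude-sonnet}" chat "$@"
}
```

Then `HALCON_PROVIDER=ollama HALCON_MODEL=llama3 hchat` drops you into a local-model session without retyping flags.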
Everything you need to automate your workflow
Production-grade, backed by a suite of 2,200+ tests. Multi-model. Zero native dependencies.
21 built-in tools
File read/write/edit, bash execution, git operations, web search, regex grep, code symbol extraction, HTTP requests, and background jobs — all wired into the agent loop.
Permission-first by design
Every tool is classified: ReadOnly tools execute silently, while ReadWrite and Destructive tools require your explicit confirmation. The agent never runs `rm` or `git commit` without you.
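The gate described above amounts to a classification lookup followed by a confirmation check. A toy sketch — the tool names and the mapping below are illustrative, not Halcón's actual internals:

```shell
# Toy model of the permission gate (illustrative; not Halcón's real code).
perm_class() {
  case "$1" in
    file_read|grep|web_search) echo "ReadOnly" ;;     # runs silently
    file_write|file_edit)      echo "ReadWrite" ;;    # asks first
    bash|git_ops)              echo "Destructive" ;;  # asks first
    *)                         echo "Unknown" ;;
  esac
}

# Non-zero exit = safe to auto-run; zero = prompt the user first.
needs_confirmation() {
  [ "$(perm_class "$1")" != "ReadOnly" ]
}
```

So `needs_confirmation file_read` fails (auto-run), while `needs_confirmation bash` succeeds (prompt first).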
TUI Cockpit
A 3-zone terminal UI with live token counters, agent FSM state, plan progress, side panel metrics, and a real-time activity stream. Pause, step, or cancel the agent mid-execution.
Episodic memory
The agent remembers decisions, file paths, and learnings across sessions using BM25 semantic search, temporal decay scoring, and automatic consolidation. No setup required.
MCP Server built in
Run `halcon mcp-server` to expose all 21 tools via JSON-RPC over stdio. Wire it into VS Code, Cursor, or any IDE that supports the Model Context Protocol.
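MCP clients speak JSON-RPC 2.0 over the server's stdin/stdout, so a quick smoke test is to hand-build an `initialize` request and pipe it in. The request shape follows the MCP spec; the exact `protocolVersion` string your client sends may differ:

```shell
# A minimal JSON-RPC 2.0 initialize request, as an MCP client would send it.
req='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.1"}}}'

# Pipe it into the server to check it responds (run this yourself):
#   printf '%s\n' "$req" | halcon mcp-server
printf '%s\n' "$req"
```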
Multi-provider routing
Automatic fallback between providers, latency-aware model selection, and cost tracking per session. Set `balanced`, `fast`, or `cheap` routing strategies in config.
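A routing section in the config might look like the sketch below. Only the three strategy values (`balanced`, `fast`, `cheap`) come from the docs above — the file path, TOML format, and key names are assumptions, so check the real config reference:

```shell
# Hypothetical config location and keys; only the strategy values are documented.
cfg_dir="${XDG_CONFIG_HOME:-$HOME/.config}/halcon"
mkdir -p "$cfg_dir"
cat > "$cfg_dir/config.toml" <<'EOF'
[routing]
strategy = "balanced"                          # or "fast" / "cheap"
fallback = ["anthropic", "openai", "ollama"]   # order tried on provider failure
EOF
```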
Full situational awareness.
The 3-zone TUI gives you live token counters, real-time agent FSM state, execution plan progress, side panel metrics, and a scrollable activity stream — all updating as the agent works.
- ✓ Pause, step, or cancel the agent mid-execution
- ✓ Live token cost tracked per round
- ✓ Tool execution timeline with durations
- ✓ Provider fallback indicators
- ✓ Context compression events in real time
halcon chat --tui
Up and running in 60 seconds
Install Halcón CLI
Installs to ~/.local/bin/halcon. SHA-256 verified.
Configure your API key
Stores your key securely in the OS keychain.
Start coding with AI
Or launch the full TUI cockpit with `--tui`
Supports Anthropic, OpenAI, DeepSeek, Gemini, and local Ollama.