| Crates.io | praxio |
| lib.rs | praxio |
| version | 0.2.0 |
| created_at | 2025-10-24 15:17:35.417092+00 |
| updated_at | 2025-10-24 23:45:29.755895+00 |
| description | MCP server for LLM delegation - enables AI agents to delegate tasks to specialist models without context pollution |
| homepage | https://github.com/epistates/praxio |
| repository | https://github.com/epistates/praxio |
| max_upload_size | |
| id | 1898589 |
| size | 115,551 |
Save tokens, cut costs, and keep your AI focused on what matters. Praxio is a smart delegation layer for AI workflows that lets any AI agent delegate specialized tasks to other models—using Claude, Gemini, or any combination.
Imagine you're working with an AI in your editor (Claude, Gemini, or any MCP-compatible agent). You give it a complex task like "refactor this authentication module." Your AI needs to understand the existing code, check for security issues, and plan the refactoring.
Without Praxio: Your AI burns through your token budget reading all the existing code into your shared conversation context, reducing how much space you have for actual work.
With Praxio: Your AI delegates specialized tasks to other models—using Claude's speed, Gemini's huge context window, or both—gets back concise summaries, and you keep your main conversation clean and focused on what matters.
Praxio works with multiple LLM providers, so you can choose the best model for each task instead of routing everything through one model. Your AI agent can delegate intelligently, matching each subtask to the provider best suited for it.
With Cargo (Recommended):

```shell
cargo install praxio
```

This downloads and builds Praxio from crates.io, making it available as `praxio` in your PATH.
From Source:

```shell
# Clone and build
git clone https://github.com/epistates/praxio.git
cd praxio
cargo build --release

# The binary is at ./target/release/praxio
```
Add to your configuration (usually `~/.config/claude/claude.json` or via UI):

```json
{
  "mcpServers": {
    "praxio": {
      "command": "praxio"
    }
  }
}
```
Or if building from source:

```json
{
  "mcpServers": {
    "praxio": {
      "command": "/path/to/target/release/praxio"
    }
  }
}
```
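If your MCP client supports per-server environment variables (Claude Desktop does, via an `env` field in the server entry), you can pass provider credentials there instead of exporting them in your shell. The `env` field is a client feature, not part of Praxio itself; `GEMINI_API_KEY` is the same variable described in the configuration section below:

```json
{
  "mcpServers": {
    "praxio": {
      "command": "praxio",
      "env": {
        "GEMINI_API_KEY": "your-api-key"
      }
    }
  }
}
```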
Restart your client, and the Praxio delegation tools will be available to your agent.
Once installed, you can ask your AI agent naturally:
You: "I need to understand the codebase structure.
Can you delegate analyzing the database schema and API endpoints?"
Your AI will automatically use delegation to handle this in parallel
and synthesize the results for you.
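Under the hood, a delegation like this travels as a standard MCP `tools/call` request over JSON-RPC 2.0. The tool and argument names below are hypothetical placeholders for illustration; the actual names are whatever tools Praxio registers with your client at startup:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "delegate_task",
    "arguments": {
      "provider": "gemini",
      "prompt": "Summarize the database schema and API endpoints"
    }
  }
}
```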
Or be explicit about which provider to use:
"Can you use Gemini to analyze all files in this directory?
It has a huge context window."
"Use Claude to quickly review this code for bugs."
```shell
# Optional - Claude support
export CLAUDE_CLI_PATH="/path/to/claude"  # Usually auto-detected

# Optional - Gemini support
export GEMINI_API_KEY="your-api-key"

# Optional - Debug logging
export RUST_LOG=info   # Show what's happening
export RUST_LOG=debug  # Very detailed logs
```
Each provider has sensible defaults, configurable per delegation:
"Delegate this to Claude with a 60-second timeout"
"Use Gemini with 90 seconds since it searches large contexts"
If you don't set a timeout, each provider falls back to a sensible default.
Keep context across delegations:
You: "Start a new analysis session"
Your agent creates a session and returns a session_id
Later: "Continue the analysis session abc123"
Previous context is maintained across providers
Make sure the Claude CLI is installed and in your PATH:

```shell
which claude     # Should show the path
claude --version # Should show the version number
```
Set your Gemini API key:

```shell
export GEMINI_API_KEY="your-api-key"
```
Run `claude setup-token` and follow the authentication flow.
Praxio checks provider availability on startup.
Q: Which provider should I use? A: Both! Claude is fast and great for coordination; Gemini's 1M-token context window is perfect for large-file analysis. Use them together for the best results.
Q: Does Praxio send my code to extra services? A: Code only goes to the providers (Claude/Gemini APIs) you explicitly choose. Praxio itself runs locally on your machine.
Q: What if I only have Claude? A: That's fine. Gemini support is optional, so you can use Praxio with Claude alone and add Gemini later when you need its 1M-token context window.
Q: What if I only have Gemini? A: Also fine. Use just Gemini's huge context window for large-scale analysis.
Q: Can I use this without Claude Code? A: Yes, Praxio works with any MCP-compatible client (Cursor, continue.dev, etc).
Q: Does Praxio collect analytics? A: No. Praxio is open source and completely local. No telemetry, no analytics, no tracking.
Q: How does delegation help my workflow? A: Delegation keeps your main AI focused on your task, not drowning in background research. You get faster responses, cleaner context, and can choose the best tool for each job.
Q: Can I use this with local models? A: Not yet, but it's planned for Phase 4.
Q: Can I add my own provider? A: Yes! Praxio is extensible. See Contributing section.
This project is open source under the MIT license, and we welcome contributions.
MIT License - Use freely for any purpose