| Crates.io | hanzo-zap |
| lib.rs | hanzo-zap |
| version | 0.6.0 |
| created_at | 2026-01-25 20:34:42.160326+00 |
| updated_at | 2026-01-25 20:34:42.160326+00 |
| description | ZAP (Zero-copy Agent Protocol) - 1000x faster than MCP for AI agent tool calls |
| homepage | https://github.com/hanzoai/dev/tree/main/zap |
| repository | https://github.com/hanzoai/dev |
| max_upload_size | |
| id | 2069492 |
| size | 481,739 |
Zero-copy Agent Protocol
Cap'n Proto for AI agents.
ZAP is an insanely fast binary protocol for AI agent tool calls. It's what you get when you take the zero-copy philosophy of Cap'n Proto and apply it to the specific problem of AI agents calling tools.
You know how MCP (Model Context Protocol) uses JSON-RPC? That means every time your agent reads a file, searches code, or runs a command, the response gets:

1. Serialized into a JSON string
2. Base64-encoded (for any binary content)
3. Sent over the wire
4. Parsed back out of JSON
5. Copied into newly allocated objects
For a single tool call, who cares? But AI agents make dozens of tool calls per task. A simple bug fix might involve 6-10 file reads, a few searches, some edits, and test runs. That overhead adds up.
ZAP eliminates steps 1, 2, 4, and 5.
Messages are read directly from network buffers with zero parsing and zero copying. The result? 1,000-2,000x faster than JSON-RPC for typical agent workloads.
Here's a real benchmark comparing ZAP to MCP for a typical coding task (6 tool calls, ~63KB of payload):
| Metric | JSON-RPC (MCP) | ZAP | Improvement |
|---|---|---|---|
| Parse time | 4.6ms | 3.1μs | 1,480x faster |
| Allocations | 802 | 0 | ∞x better |
| Wire size | 103KB | 63KB | 39% smaller |
| Memory pressure | High | None | No GC pauses |
And here's the kicker: ZAP is fully compatible with MCP. The ZAP Gateway bridges to existing MCP servers, so you can migrate incrementally.
ZAP uses a dead-simple wire format:
```text
┌──────────┬──────────┬──────────────────────┐
│  Length  │ MsgType  │       Payload        │
│ (4 bytes)│ (1 byte) │      (variable)      │
│  LE u32  │          │                      │
└──────────┴──────────┴──────────────────────┘
```
That's it. No framing complexity, no HTTP overhead, no protobuf schema compilation. Just length-prefixed messages that can be read directly from the socket.
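To make the framing concrete, here's a minimal decoding sketch in Rust. The struct and function names are illustrative (not the crate's actual types), and it assumes the length field counts only the payload bytes:

```rust
/// Illustrative frame view; `payload` borrows directly from the receive buffer.
struct Frame<'a> {
    msg_type: u8,
    payload: &'a [u8],
}

/// Decode one frame: 4-byte little-endian length, 1-byte message type,
/// then `length` bytes of payload. Returns the frame and the number of
/// bytes consumed, or `None` if the buffer is still incomplete.
fn decode_frame(buf: &[u8]) -> Option<(Frame<'_>, usize)> {
    if buf.len() < 5 {
        return None; // header not fully received yet
    }
    let len = u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    let end = 5 + len;
    if buf.len() < end {
        return None; // payload not fully received yet
    }
    Some((Frame { msg_type: buf[4], payload: &buf[5..end] }, end))
}
```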
The payload uses JSON structure you're already familiar with—but transmitted as raw bytes, not base64-encoded strings. When you read a file, you get the actual file bytes, not a 33% inflated base64 blob that you then have to decode.
The real magic is in how ZAP handles responses. Traditional protocols work like this:
Network Buffer → Parse JSON → Allocate Objects → Copy Data → Your Code
ZAP works like this:
Network Buffer → Your Code
When you call result.content, you're reading directly from the network buffer. No intermediate copies. No allocations. No garbage collection.
This isn't just faster—it's fundamentally different. Your agent can process gigabytes of file contents without proportional memory growth.
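In code, "reading directly from the buffer" means the result is a borrowed view rather than a decoded copy. A rough sketch of the idea (illustrative types only, not the crate's API):

```rust
/// Illustrative zero-copy result: `content` is a slice of the network
/// buffer the frame arrived in, so touching it costs no parse and no copy.
struct ToolResult<'buf> {
    content: &'buf [u8],
}

fn handle(receive_buffer: &[u8]) {
    // The "result" is just a view into the bytes we already have.
    let result = ToolResult { content: receive_buffer };

    // Reading `result.content` is a pointer plus a length, not an allocation.
    println!("{} bytes, zero copies", result.content.len());
}
```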
User: "Fix the authentication bug in login.rs"
Agent:
1. read_file("/src/auth/login.rs") → 15KB
2. grep("authentication", "/src/**/*.rs") → 8KB
3. read_file("/src/auth/session.rs") → 12KB
4. read_file("/src/middleware/auth.rs") → 9KB
5. edit_file(login.rs, patch) → 2KB
6. run_command("cargo test auth") → 25KB
7. edit_file(login.rs, fix) → 2KB
8. run_command("cargo test auth") → 3KB
9. git_commit("Fix auth bug") → 1KB
Total: 77KB payload
| Protocol | Parse Time | Allocations | Wire Size |
|---|---|---|---|
| JSON-RPC | 4.6 ms | 802 | 103 KB |
| ZAP | 3.1 μs | 0 | 77 KB |
Savings: 1,480x faster parsing, 100% fewer allocations
```text
Files read:  25 files × 8KB = 200KB
Searches:    10 greps × 5KB = 50KB
Suggestions: 15 edits × 2KB = 30KB

Total: 280KB
```
| Protocol | Parse Time | Memory |
|---|---|---|
| JSON-RPC | 12.5 ms | 373 KB allocs |
| ZAP | 11 μs | 0 |
| Protocol | Total Parse | Total Allocs | GC Pauses |
|---|---|---|---|
| JSON-RPC | 2.3 seconds | 4M | ~50 |
| ZAP | 1.5 ms | 0 | 0 |
```toml
[dependencies]
hanzo-zap = "0.6"
```

```rust
use hanzo_zap::Client;
use serde_json::json;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::connect("zap://localhost:9999").await?;

    let result = client.call_tool("read_file", json!({
        "path": "src/main.rs"
    })).await?;

    println!("{}", result.content);
    Ok(())
}
```
```bash
pip install hanzo-zap
```

```python
import asyncio

from hanzo_zap import ZapClient

async def main():
    async with ZapClient("zap://localhost:9999") as client:
        result = await client.call_tool("read_file", {"path": "src/main.rs"})
        print(result.content)

asyncio.run(main())
```
```bash
npm install @hanzo/zap
```

```typescript
import { ZapClient } from '@hanzo/zap';

const client = await ZapClient.connect('zap://localhost:9999');
const result = await client.callTool('read_file', { path: 'src/main.rs' });
console.log(result.content);
```
```bash
go get github.com/hanzoai/zap-go
```

```go
client, _ := zap.Connect("zap://localhost:9999")
result, _ := client.CallTool("read_file", map[string]any{"path": "src/main.rs"})
fmt.Println(result.Content)
```
Don't want to rewrite your MCP servers? No problem. The ZAP Gateway bridges between protocols:
```text
┌─────────────┐      ZAP       ┌─────────────┐    JSON-RPC     ┌─────────────┐
│    Agent    │◄──────────────►│   Gateway   │◄───────────────►│  MCP Server │
│   (fast)    │    (binary)    │ (aggregator)│     (stdio)     │  (existing) │
└─────────────┘                └─────────────┘                 └─────────────┘
```
Configure your MCP servers:
```toml
# gateway.toml

[[mcp_servers]]
name = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]

[[mcp_servers]]
name = "github"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
env = { GITHUB_TOKEN = "${GITHUB_TOKEN}" }
```
Benchmark: ZAP gateway adds <5% overhead while aggregating 20+ MCP servers. The bottleneck is always the MCP servers themselves.
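From the agent's point of view, a bridged MCP tool is called exactly like a native ZAP tool. A hedged sketch, assuming the gateway listens on the default port and re-exports the filesystem server's `read_file` under its original name:

```rust
use hanzo_zap::Client;
use serde_json::json;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Connect to the ZAP gateway instead of an individual MCP server.
    let client = Client::connect("zap://localhost:9999").await?;

    // Same call whether the tool is native ZAP or bridged MCP; the gateway
    // handles the JSON-RPC round trip to the underlying MCP server.
    let result = client.call_tool("read_file", json!({
        "path": "/workspace/README.md"
    })).await?;

    println!("{}", result.content);
    Ok(())
}
```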
ZAP enforces security at the protocol level with two orthogonal policies:
When should the agent pause and ask a human?
| Policy | Behavior |
|---|---|
| `never` | Full autonomy (CI/CD pipelines) |
| `on-failure` | Ask only when operations fail |
| `on-request` | Model decides based on risk |
| `unless-trusted` | Ask for everything except known-safe reads |
What operations are physically allowed?
| Policy | Filesystem | Network | Processes |
|---|---|---|---|
| `danger-full-access` | Full | Full | Full |
| `workspace-write` | Read all, write workspace | Configurable | Allowed |
| `read-only` | Read only | Blocked | Limited |
These policies are enforced at the protocol level. A rogue agent can't bypass sandbox restrictions.
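To visualize the two orthogonal dimensions, here's a sketch of them as Rust enums. These mirror the tables above and are illustrative only, not the crate's exported types:

```rust
/// When should the agent pause and ask a human? (illustrative sketch)
#[allow(dead_code)]
enum ApprovalPolicy {
    Never,         // full autonomy (CI/CD pipelines)
    OnFailure,     // ask only when operations fail
    OnRequest,     // model decides based on risk
    UnlessTrusted, // ask for everything except known-safe reads
}

/// What operations are physically allowed? (illustrative sketch)
#[allow(dead_code)]
enum SandboxPolicy {
    DangerFullAccess, // full filesystem, network, and process access
    WorkspaceWrite,   // read all, write workspace; network configurable
    ReadOnly,         // read-only filesystem; network blocked, processes limited
}
```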
ZAP defines 14 tool categories with 167 typed operations:
| Category | Examples |
|---|---|
| Filesystem | read_file, write_file, edit_file, glob, grep |
| Computer | exec, list_processes, kill_process |
| VCS | git_status, git_diff, git_commit, git_log |
| Build | build, test, lint, typecheck |
| Network | http_request, fetch_url, port_check |
| Browser | navigate, click, type, screenshot |
| LSP | completion, definition, references, rename |
| Debug | breakpoint, step, inspect, profile |
| Container | docker_run, k8s_apply, vm_create |
| Cloud | deploy, secrets, dns |
| Data | query, migrate, backup |
| Security | scan, sign, verify |
| Vision | ocr, detect_ui, describe_screen |
| Plan | plan_intent, plan_route, audit_log |
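Every category goes through the same `call_tool` entry point shown in the quick start. A hedged example mixing categories (the exact argument shapes for `grep` and `git_status` are assumptions):

```rust
use hanzo_zap::Client;
use serde_json::json;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::connect("zap://localhost:9999").await?;

    // Filesystem category: search the workspace.
    let matches = client.call_tool("grep", json!({
        "pattern": "authentication",
        "path": "src/"
    })).await?;
    println!("{}", matches.content);

    // VCS category: inspect repository state.
    let status = client.call_tool("git_status", json!({})).await?;
    println!("{}", status.content);

    Ok(())
}
```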
| Language | Package | Status |
|---|---|---|
| Rust | `hanzo-zap` | ✅ Native |
| Python | `hanzo-zap` | ✅ Complete |
| Node.js | `@hanzo/zap` | ✅ Complete |
| Go | `github.com/hanzoai/zap-go` | ✅ Complete |
| Ruby | `hanzo-zap` | ✅ Complete |
| Elixir | `hanzo_zap` | ✅ Complete |
| Haskell | `hanzo-zap` | ✅ Complete |
| OCaml | `hanzo-zap` | ✅ Complete |
| C/C++ | `libzap` | 🚧 FFI |
| Scheme | Description | Default Port |
|---|---|---|
| `zap://` | Plain TCP | 9999 |
| `zaps://` | TLS 1.3 | 9999 |
| `zap+unix://` | Unix socket | N/A |
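Assuming `Client::connect` accepts all three schemes (the quick start only shows `zap://`, and the Unix socket path below is made up), switching transports is just a different URL:

```rust
use hanzo_zap::Client;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Plain TCP on the default port.
    let _tcp = Client::connect("zap://localhost:9999").await?;

    // TLS 1.3, same default port.
    let _tls = Client::connect("zaps://localhost:9999").await?;

    // Local Unix domain socket (no port; path is illustrative).
    let _local = Client::connect("zap+unix:///tmp/zap.sock").await?;

    Ok(())
}
```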
Measured on Apple M3 Max:
| Size | JSON-RPC | ZAP | Speedup |
|---|---|---|---|
| 1 KB | 45 μs | 0.2 μs | 225x |
| 10 KB | 420 μs | 0.4 μs | 1,050x |
| 100 KB | 4.2 ms | 0.8 μs | 5,250x |
| 1 MB | 42 ms | 4 μs | 10,500x |
| Protocol | p50 | p99 | Throughput |
|---|---|---|---|
| MCP (HTTP) | 1.2 ms | 4.5 ms | 800/s |
| ZAP (TCP) | 0.08 ms | 0.15 ms | 12,000/s |
| ZAP (Unix) | 0.04 ms | 0.09 ms | 25,000/s |
MCP is great for interoperability. But it's slow. ZAP gives you 1000x+ performance while maintaining full MCP compatibility via the gateway.
gRPC requires protobuf schemas, code generation, and HTTP/2. ZAP is simpler: just connect and call tools. No schema compilation, no HTTP overhead.
Cap'n Proto inspired ZAP's zero-copy design. But Cap'n Proto is a general-purpose serialization format. ZAP is purpose-built for AI agents, with built-in tool schemas, security policies, and MCP compatibility.
MIT License - Copyright 2025 Hanzo AI Inc.
ZAP: Because your AI agent shouldn't wait for JSON parsing.