| Crates.io | agent-primitives |
| lib.rs | agent-primitives |
| version | 0.3.2 |
| created_at | 2025-11-03 17:50:53.747549+00 |
| updated_at | 2025-12-21 19:42:51.53405+00 |
| description | Core primitives for MXP agent runtime: IDs, capabilities, manifests, and errors |
| homepage | https://mxpnexus.com |
| repository | https://github.com/yafatek/mxpnexus |
| max_upload_size | |
| id | 1915064 |
| size | 63,534 |
Production-grade Rust SDK for building autonomous AI agents that communicate over the MXP protocol.
Part of the MXP (Mesh eXchange Protocol) ecosystem, this SDK provides the runtime infrastructure for building, deploying, and operating AI agents that speak MXP natively. While the mxp crate handles wire protocol encoding/decoding and secure UDP transport, this SDK supplies the agent runtime layer: lifecycle management, LLM adapters, tools, and policy enforcement.
Install via the bundled facade crate:

```sh
cargo add mxp-agents
```
Basic LLM Usage
```rust
use mxp_agents::adapters::ollama::{OllamaAdapter, OllamaConfig};
use mxp_agents::adapters::traits::{InferenceRequest, MessageRole, ModelAdapter, PromptMessage};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create an adapter (works with OpenAI, Anthropic, Gemini, or Ollama)
    let adapter = OllamaAdapter::new(
        OllamaConfig::new("gemma2:2b")
            .with_stream(true), // Enable incremental token streaming
    )?;

    // Build a request with system prompt
    let request = InferenceRequest::new(vec![
        PromptMessage::new(MessageRole::User, "What is MXP?"),
    ])?
    .with_system_prompt("You are an expert on MXP protocol")
    .with_temperature(0.7);

    // Get streaming response
    let mut stream = adapter.infer(request).await?;

    // Process chunks as they arrive
    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        print!("{}", chunk.delta);
    }
    Ok(())
}
```
MXP Agent Setup
Agents communicate over the MXP protocol. Here's how to create an agent that handles MXP messages:
```rust
use mxp_agents::kernel::{
    AgentKernel, AgentMessageHandler, HandlerContext, HandlerResult,
    TaskScheduler, LifecycleEvent,
};
use mxp_agents::primitives::AgentId;
use async_trait::async_trait;
use std::sync::Arc;

// Define your agent's message handler
struct MyAgentHandler;

#[async_trait]
impl AgentMessageHandler for MyAgentHandler {
    async fn handle_call(&self, ctx: HandlerContext) -> HandlerResult {
        // Process incoming MXP Call messages
        let message = ctx.message();
        println!("Received MXP call with {} bytes", message.payload().len());
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let agent_id = AgentId::random();
    let handler = Arc::new(MyAgentHandler);
    let scheduler = TaskScheduler::default();

    // Create the agent kernel
    let mut kernel = AgentKernel::new(agent_id, handler, scheduler);

    // Boot and activate the agent
    kernel.transition(LifecycleEvent::Boot)?;
    kernel.transition(LifecycleEvent::Activate)?;

    println!("Agent {} is active and ready for MXP messages", agent_id);
    Ok(())
}
```
Production Setup with Resilience & Observability
```rust
use mxp_agents::adapters::ollama::{OllamaAdapter, OllamaConfig};
use mxp_agents::adapters::resilience::{
    CircuitBreakerConfig, RetryConfig, BackoffStrategy, ResilientAdapter,
};
use mxp_agents::telemetry::PrometheusExporter;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create resilient adapter with circuit breaker and retry
    let base_adapter = OllamaAdapter::new(OllamaConfig::new("gemma2:2b"))?;
    let resilient = ResilientAdapter::builder(base_adapter)
        .with_circuit_breaker(CircuitBreakerConfig {
            failure_threshold: 5,
            cooldown: Duration::from_secs(30),
            success_threshold: 2,
        })
        .with_retry(RetryConfig {
            max_attempts: 3,
            backoff: BackoffStrategy::Exponential {
                base: Duration::from_millis(100),
                max: Duration::from_secs(10),
                jitter: true,
            },
            ..Default::default()
        })
        .with_timeout_duration(Duration::from_secs(30))
        .build();

    // Set up metrics collection
    let exporter = PrometheusExporter::new();
    let _ = exporter.register_runtime();
    let _ = exporter.register_adapter("ollama");

    // Export Prometheus metrics
    println!("{}", exporter.export());
    Ok(())
}
```
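The retry schedule above can be pictured with a small, self-contained sketch of the deterministic part of exponential backoff (delay = base * 2^attempt, capped at max). The `backoff_delay` helper is illustrative, not an SDK API, and `jitter: true` would additionally randomize each delay:

```rust
use std::time::Duration;

/// Illustrative exponential backoff: base * 2^attempt, capped at `max`.
/// (A real `BackoffStrategy::Exponential` with jitter would also
/// randomize each delay; this sketch shows only the deterministic part.)
fn backoff_delay(base: Duration, max: Duration, attempt: u32) -> Duration {
    let exp = base.checked_mul(1u32 << attempt.min(31)).unwrap_or(max);
    exp.min(max)
}

fn main() {
    let base = Duration::from_millis(100);
    let max = Duration::from_secs(10);
    // attempts 0..8 -> 100ms, 200ms, 400ms, ... capped at 10s
    for attempt in 0..8 {
        println!("attempt {attempt}: {:?}", backoff_delay(base, max, attempt));
    }
}
```

With the config values shown above, the third retry would wait 800 ms and the schedule saturates at the 10-second cap.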
See examples/ for more complete examples including policy enforcement, memory integration, and graceful shutdown.
The SDK is production-hardened with features required for mission-critical deployments:
- Resilience & Reliability
- Observability & Monitoring
- Security & Compliance
- Operations & Configuration
- Core Runtime
- Enterprise Features
Out of scope: MXP Nexus deployment tooling, mesh scheduling, or any "deep agents" research-oriented SDK—handled by separate projects.
Custom LLM backends plug in through the ModelAdapter trait. The SDK is designed for production deployments: all code passes cargo fmt, cargo clippy --all-targets --all-features, and cargo test --all-features gates.
This SDK is part of the MXP protocol ecosystem. The mxp crate provides the transport primitives, while this SDK provides the agent runtime that speaks MXP natively.
Protocol Relationship
- mxp crate: Wire protocol, message encoding/decoding, UDP transport with ChaCha20-Poly1305 encryption
- mxp-agents crate: Agent runtime, lifecycle management, LLM adapters, tools, policy enforcement

MXP Message Types
Agents handle these MXP message types through the AgentMessageHandler trait:
- AgentRegister / AgentHeartbeat — Mesh registration and health
- Call / Response — Request-response communication
- Event — Fire-and-forget notifications
- StreamOpen / StreamChunk / StreamClose — Streaming data

Registry Integration Example
```rust
use mxp_agents::kernel::{
    AgentKernel, LifecycleEvent, MxpRegistryClient, RegistrationConfig, TaskScheduler,
};
use mxp_agents::primitives::{AgentId, AgentManifest, Capability, CapabilityId};
use std::net::SocketAddr;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let agent_id = AgentId::random();

    // Define agent capabilities
    let capability = Capability::builder(CapabilityId::new("chat.respond")?)
        .name("Chat Response")?
        .version("1.0.0")?
        .add_scope("chat:write")?
        .build()?;

    // Create agent manifest
    let manifest = AgentManifest::builder(agent_id)
        .name("my-chat-agent")?
        .version("0.1.0")?
        .capabilities(vec![capability])
        .build()?;

    // Connect to MXP registry for mesh discovery
    let agent_endpoint: SocketAddr = "127.0.0.1:50052".parse()?;
    let registry = Arc::new(MxpRegistryClient::connect(
        "127.0.0.1:50051", // Registry endpoint
        agent_endpoint,
        None,
    )?);

    // Create kernel with registry integration
    // (MyAgentHandler as defined in the MXP Agent Setup example above)
    let handler = Arc::new(MyAgentHandler);
    let mut kernel = AgentKernel::new(agent_id, handler, TaskScheduler::default());
    kernel.set_registry(registry, manifest, RegistrationConfig::default());

    // Agent will auto-register and send heartbeats
    kernel.transition(LifecycleEvent::Boot)?;
    kernel.transition(LifecycleEvent::Activate)?;
    Ok(())
}
```
Tools are declared with the #[tool] attribute; the SDK converts them into schemas consumable by LLMs and enforces capability scopes at runtime.

All adapters support system prompts with provider-native optimizations:
```rust
use mxp_agents::adapters::openai::{OpenAiAdapter, OpenAiConfig};
use mxp_agents::adapters::anthropic::{AnthropicAdapter, AnthropicConfig};
use mxp_agents::adapters::gemini::{GeminiAdapter, GeminiConfig};
use mxp_agents::adapters::traits::InferenceRequest;

// OpenAI/Ollama: Prepends as first message
let openai = OpenAiAdapter::new(OpenAiConfig::from_env("gpt-4"))?;

// Anthropic: Uses dedicated 'system' parameter
let anthropic = AnthropicAdapter::new(AnthropicConfig::from_env("claude-3-5-sonnet-20241022"))?;

// Gemini: Uses 'systemInstruction' field
let gemini = GeminiAdapter::new(GeminiConfig::from_env("gemini-1.5-pro"))?;

// Same API works across all providers
// (`messages` is a Vec<PromptMessage>, built as in the basic example)
let request = InferenceRequest::new(messages)?
    .with_system_prompt("You are a helpful assistant");
```
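The provider-specific placements described above can be pictured with a small illustrative sketch; the `Placement` enum and `place_system_prompt` function are my names for exposition, not SDK APIs:

```rust
/// Illustrative only: where a system prompt ends up per provider family.
#[derive(Debug, PartialEq)]
enum Placement {
    /// OpenAI/Ollama style: prepended as the first chat message.
    LeadingMessage(String),
    /// Anthropic `system` / Gemini `systemInstruction` style: separate field.
    DedicatedField(String),
}

fn place_system_prompt(provider: &str, prompt: &str) -> Placement {
    match provider {
        // OpenAI and Ollama prepend a system-role message.
        "openai" | "ollama" => Placement::LeadingMessage(prompt.to_string()),
        // Anthropic and Gemini carry it in a dedicated request field.
        _ => Placement::DedicatedField(prompt.to_string()),
    }
}

fn main() {
    println!("{:?}", place_system_prompt("openai", "You are a helpful assistant"));
    println!("{:?}", place_system_prompt("anthropic", "You are a helpful assistant"));
}
```

The point of the adapters is that callers never write this match themselves: `.with_system_prompt(...)` is uniform, and each adapter picks the native placement.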
For long conversations, enable automatic context management:
```rust
use mxp_agents::prompts::ContextWindowConfig;
use mxp_agents::adapters::ollama::{OllamaAdapter, OllamaConfig};

let adapter = OllamaAdapter::new(OllamaConfig::new("gemma2:2b"))?
    .with_context_config(ContextWindowConfig {
        max_tokens: 4096,
        recent_window_size: 10,
        ..Default::default()
    });

// SDK automatically manages conversation history within token budget
```
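To see what a budget like `max_tokens: 4096` with `recent_window_size: 10` implies, here is one plausible trimming strategy as a self-contained sketch; the naive whitespace token count and the `trim_history`/`estimate_tokens` names are mine, not the SDK's. The idea: keep the newest `recent` messages unconditionally, then admit older messages newest-first while the estimate stays within budget.

```rust
/// Rough token estimate: whitespace-separated words (illustrative only).
fn estimate_tokens(msg: &str) -> usize {
    msg.split_whitespace().count()
}

/// Keep the `recent` newest messages unconditionally, then admit older
/// messages (newest-first) while the total stays within `max_tokens`.
fn trim_history(history: &[String], max_tokens: usize, recent: usize) -> Vec<String> {
    let split = history.len().saturating_sub(recent);
    let (older, newest) = history.split_at(split);
    let mut budget =
        max_tokens.saturating_sub(newest.iter().map(|m| estimate_tokens(m)).sum());
    let mut kept_older: Vec<String> = Vec::new();
    for msg in older.iter().rev() {
        let cost = estimate_tokens(msg);
        if cost <= budget {
            budget -= cost;
            kept_older.push(msg.clone());
        } else {
            break; // older messages no longer fit the budget
        }
    }
    kept_older.reverse(); // restore chronological order
    kept_older.extend(newest.iter().cloned());
    kept_older
}

fn main() {
    let history: Vec<String> = (0..5).map(|i| format!("message number {i}")).collect();
    // Budget of 9 "tokens" with the 2 most recent messages always kept:
    // drops the two oldest messages, keeps the last three.
    println!("{:?}", trim_history(&history, 9, 2));
}
```

Production token counting would use the model's actual tokenizer; the shape of the policy is what matters here.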
- docs/overview.md — architectural overview and design principles
- docs/architecture.md — crate layout, component contracts, roadmap
- docs/features.md — complete feature set and facade feature flags
- docs/usage.md — end-to-end setup guide for building agents
- docs/enterprise.md — production hardening guide with resilience, observability, and security
- docs/errors.md — error surfaces and troubleshooting tips
- examples/basic-agent — simple agent with Ollama adapter and policy enforcement
- examples/enterprise-agent — production-grade agent demonstrating resilience, metrics, health checks, and graceful shutdown

Development: Start with examples/basic-agent to understand core concepts
Production: Review docs/enterprise.md and examples/enterprise-agent for hardening patterns
Integration: Wire MXP endpoints for discovery and message handling
Deployment: Use health checks and metrics for Kubernetes integration
- Circuit breaker opening too often: tune failure_threshold in CircuitBreakerConfig
- Memory usage growing: see docs/enterprise.md
- Inference slower than expected: check the request_latency_seconds metrics

See docs/enterprise.md for the comprehensive troubleshooting guide.
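The breaker thresholds follow conventional circuit-breaker semantics, which can be sketched as a small state machine (illustrative only, not the SDK's internal implementation): Closed opens after `failure_threshold` consecutive failures, Open half-opens once `cooldown` elapses, and HalfOpen closes again after `success_threshold` consecutive successes.

```rust
/// Illustrative circuit-breaker state machine (not the SDK's implementation).
#[derive(Debug, Clone, Copy, PartialEq)]
enum State { Closed, Open, HalfOpen }

struct Breaker {
    state: State,
    failures: u32,
    successes: u32,
    failure_threshold: u32,
    success_threshold: u32,
}

impl Breaker {
    fn new(failure_threshold: u32, success_threshold: u32) -> Self {
        Self { state: State::Closed, failures: 0, successes: 0, failure_threshold, success_threshold }
    }

    /// Record the outcome of a call and update the state.
    fn record(&mut self, ok: bool) {
        match (self.state, ok) {
            (State::Closed, true) => self.failures = 0,
            (State::Closed, false) => {
                self.failures += 1;
                if self.failures >= self.failure_threshold {
                    self.state = State::Open; // stop sending traffic
                }
            }
            (State::HalfOpen, true) => {
                self.successes += 1;
                if self.successes >= self.success_threshold {
                    self.state = State::Closed; // recovered
                    self.failures = 0;
                }
            }
            (State::HalfOpen, false) => {
                self.state = State::Open; // probe failed, back to cooldown
                self.successes = 0;
            }
            (State::Open, _) => {} // calls are rejected until cooldown elapses
        }
    }

    /// Once `cooldown` elapses, an open breaker lets probe traffic through.
    fn cooldown_elapsed(&mut self) {
        if self.state == State::Open {
            self.state = State::HalfOpen;
            self.successes = 0;
        }
    }
}

fn main() {
    let mut b = Breaker::new(5, 2); // matches the config shown earlier
    for _ in 0..5 { b.record(false); }
    println!("after 5 failures: {:?}", b.state);
    b.cooldown_elapsed(); // cooldown: 30s in the earlier config
    b.record(true);
    b.record(true);
    println!("after 2 successes: {:?}", b.state);
}
```

If the breaker trips constantly, either raise `failure_threshold` or address the underlying failure rate; a breaker that flaps is usually masking a downstream problem.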
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
```sh
# Build all crates
cargo build --all-features

# Run tests
cargo test --all-features

# Run linting
cargo clippy --all-targets --all-features -- -D warnings

# Format code
cargo fmt --check
```
Licensed under either of:

- Apache License, Version 2.0
- MIT License

at your option.