| Crate | turbomcp |
|---|---|
| Version | 1.1.2 |
| Created | 2025-08-26 |
| Updated | 2025-09-25 |
| Description | Rust SDK for Model Context Protocol (MCP) with OAuth 2.1 compliance, ergonomic macros, and SIMD acceleration |
| Homepage | https://turbomcp.org |
| Repository | https://github.com/Epistates/turbomcp |
| Size | 1,448,186 bytes |
Rust SDK for the Model Context Protocol (MCP) with comprehensive specification support and performance optimizations.
turbomcp is a Rust framework for implementing the Model Context Protocol. It provides tools, servers, clients, and transport layers with MCP specification compliance, security features, and performance optimizations.
- Supply-chain auditing via a cargo-deny policy
- Ergonomic macros: #[server], #[tool], #[resource], #[prompt]
All transports provide MCP protocol compliance with bidirectional communication, automatic reconnection, and session management.
⚠️ STDIO Protocol Compliance: When using STDIO transport (the default), avoid any logging or output to stdout. The MCP protocol requires stdout to contain only JSON-RPC messages. Any other output will break client communication. Use stderr for debugging if needed.
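A minimal sketch of the stream discipline this implies, using only the standard library (the frame below is illustrative, not a real server response): stdout stays reserved for JSON-RPC, while all diagnostics go to stderr.

```rust
use std::io::Write;

/// Print one JSON-RPC frame on stdout and a diagnostic on stderr,
/// keeping the two streams strictly separated.
fn emit_frame(frame: &str) {
    // stdout: protocol messages only — never logs.
    println!("{frame}");
    // stderr: safe for human-readable debugging; clients ignore it.
    eprintln!("debug: sent {} bytes", frame.len());
    let _ = std::io::stdout().flush();
}

fn main() {
    // Illustrative response frame; a real server serializes its own messages.
    emit_frame(r#"{"jsonrpc":"2.0","id":1,"result":{"ok":true}}"#);
}
```

If you use the tracing crate for structured logging, the same separation can be achieved by directing the subscriber at stderr, e.g. `tracing_subscriber::fmt().with_writer(std::io::stderr).init();`.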
TurboMCP is built as a layered architecture with clear separation of concerns:
┌─────────────────────────────────────────────────────────────┐
│ TurboMCP Framework │
│ Ergonomic APIs & Developer Experience │
├─────────────────────────────────────────────────────────────┤
│ Infrastructure Layer │
│ Server • Client • Transport • Protocol │
├─────────────────────────────────────────────────────────────┤
│ Foundation Layer │
│ Core Types • Messages • State │
└─────────────────────────────────────────────────────────────┘
Add TurboMCP to your Cargo.toml:
[dependencies]
turbomcp = "1.1.0"
tokio = { version = "1.0", features = ["full"] }
Create a simple calculator server:
use turbomcp::prelude::*;
#[derive(Clone)]
struct Calculator;
#[server]
impl Calculator {
#[tool("Add two numbers")]
async fn add(&self, a: i32, b: i32) -> McpResult<i32> {
Ok(a + b)
}
#[tool("Get server status")]
async fn status(&self, ctx: Context) -> McpResult<String> {
ctx.info("Status requested").await?;
Ok("Server running".to_string())
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
Calculator.run_stdio().await?;
Ok(())
}
# Build and run
cargo run
# Test with TurboMCP CLI
cargo install turbomcp-cli
# For HTTP server
turbomcp-cli tools-list --url http://localhost:8080/mcp
# For STDIO server
turbomcp-cli tools-list --command "./target/debug/my-server"
TurboMCP provides compile-time validated capability builders that ensure correct configuration at build time:
use turbomcp_protocol::capabilities::builders::{ServerCapabilitiesBuilder, ClientCapabilitiesBuilder};
// Server capabilities with compile-time validation
let server_caps = ServerCapabilitiesBuilder::new()
.enable_tools() // Enable tools capability
.enable_prompts() // Enable prompts capability
.enable_resources() // Enable resources capability
.enable_tool_list_changed() // ✅ Only available when tools enabled
.enable_resources_subscribe() // ✅ Only available when resources enabled
.build();
// Usage in server macro
#[server(
name = "my-server",
version = "1.1.0",
capabilities = ServerCapabilities::builder()
.enable_tools()
.enable_tool_list_changed()
.build()
)]
impl MyServer {
// Implementation...
}
// Client capabilities with type safety
let client_caps = ClientCapabilitiesBuilder::new()
.enable_roots() // Enable filesystem roots
.enable_sampling() // Enable LLM sampling
.enable_elicitation() // Enable interactive forms
.enable_roots_list_changed() // ✅ Only available when roots enabled
.build();
// Convenience builders for common patterns
let full_server = ServerCapabilitiesBuilder::full_featured().build();
let minimal_server = ServerCapabilitiesBuilder::minimal().build();
let sampling_client = ClientCapabilitiesBuilder::sampling_focused().build();
Use the #[server] macro to automatically implement the MCP server trait:
use turbomcp::prelude::*;
#[derive(Clone)]
struct MyServer {
database: Arc<Database>,
cache: Arc<Cache>,
}
#[server]
impl MyServer {
// Tools, resources, and prompts defined here
}
Transform functions into MCP tools with automatic parameter handling:
#[tool("Calculate expression")]
async fn calculate(
&self,
#[description("Mathematical expression")]
expression: String,
#[description("Precision for results")]
precision: Option<u32>,
ctx: Context
) -> McpResult<f64> {
let precision = precision.unwrap_or(2);
ctx.info(&format!("Calculating: {}", expression)).await?;
// Calculation logic
let result = evaluate_expression(&expression)?;
Ok(round_to_precision(result, precision))
}
Create URI template-based resource handlers:
#[resource("file://{path}")]
async fn read_file(
&self,
#[description("File path to read")]
path: String,
ctx: Context
) -> McpResult<String> {
ctx.info(&format!("Reading file: {}", path)).await?;
tokio::fs::read_to_string(&path).await
.map_err(|e| McpError::Resource(e.to_string()))
}
Generate dynamic prompts with parameter substitution:
#[prompt("code_review")]
async fn code_review_prompt(
&self,
#[description("Programming language")]
language: String,
#[description("Code to review")]
code: String,
ctx: Context
) -> McpResult<String> {
ctx.info(&format!("Generating {} code review", language)).await?;
Ok(format!(
"Please review the following {} code:\n\n```{}\n{}\n```",
language, language, code
))
}
#[server(
name = "filesystem-server",
version = "1.1.0",
root = "file:///workspace:Project Workspace",
root = "file:///tmp:Temporary Files"
)]
impl FileSystemServer {
#[tool("List files in directory")]
async fn list_files(&self, ctx: Context, path: String) -> McpResult<Vec<String>> {
ctx.info(&format!("Listing files in: {}", path)).await?;
// Operations are bounded by configured roots
Ok(vec!["file1.txt".to_string(), "file2.txt".to_string()])
}
}
use turbomcp::prelude::*;
use turbomcp_protocol::elicitation::ElicitationSchema;
#[tool("Configure application settings")]
async fn configure_app(&self, ctx: Context) -> McpResult<String> {
// Build elicitation schema with type safety
let schema = ElicitationSchema::new()
.add_string_property("theme", Some("Color theme preference"))
.add_boolean_property("notifications", Some("Enable push notifications"))
.add_required(["theme"]);
// Simple, elegant elicitation with type safety
let result = elicit("Configure your preferences")
.field("theme", text("UI theme preference")
.options(&["light", "dark"]))
.send(&ctx)
.await?;
// Process the structured response
if let Some(data) = result.content {
let theme = data.get("theme")
.and_then(|v| v.as_str())
.unwrap_or("default");
Ok(format!("Configured with {} theme", theme))
} else {
Err(McpError::Context("Configuration cancelled".to_string()))
}
}
use turbomcp::prelude::*;
#[tool("Get AI code review")]
async fn code_review(&self, ctx: Context, code: String) -> McpResult<String> {
// Log the request with user context
let user = ctx.user_id().unwrap_or("anonymous");
ctx.info(&format!("User {} requesting code review", user)).await?;
// Build sampling request with ergonomic JSON
let request = serde_json::json!({
"messages": [{
"role": "user",
"content": {
"type": "text",
"text": format!("Please review this code:\n\n{}", code)
}
}],
"maxTokens": 500,
"systemPrompt": "You are a senior code reviewer. Provide constructive feedback."
});
// Request LLM assistance through the client
match ctx.create_message(request).await {
Ok(response) => {
ctx.info("AI review completed successfully").await?;
Ok(format!("AI Review: {:?}", response))
}
Err(_) => {
// Graceful fallback if sampling unavailable
let issues = code.matches("TODO").count() + code.matches("FIXME").count();
Ok(format!("Static analysis: {} lines, {} issues found", code.lines().count(), issues))
}
}
}
#[completion("Complete file paths")]
async fn complete_file_path(&self, partial: String) -> McpResult<Vec<String>> {
let files = std::fs::read_dir(".")?
.filter_map(|e| e.ok())
.map(|e| e.file_name().to_string_lossy().to_string())
.filter(|name| name.starts_with(&partial))
.collect();
Ok(files)
}
#[template("users/{user_id}/posts/{post_id}")]
async fn get_user_post(&self, user_id: String, post_id: String) -> McpResult<Post> {
// RFC 6570 URI template with multiple parameters
let post = self.database.get_post(&user_id, &post_id).await?;
Ok(post)
}
#[ping("Health check")]
async fn health_check(&self, ctx: Context) -> McpResult<HealthStatus> {
let db_status = self.database.ping().await.is_ok();
let cache_status = self.cache.ping().await.is_ok();
Ok(HealthStatus {
healthy: db_status && cache_status,
database: db_status,
cache: cache_status,
timestamp: ctx.timestamp(),
})
}
The Context parameter provides request correlation, authentication, and observability:
#[tool("Authenticated operation")]
async fn secure_operation(&self, ctx: Context, data: String) -> McpResult<String> {
// Authentication
let user = ctx.authenticated_user()?;
// Logging with correlation
ctx.info(&format!("Processing request for user: {}", user.id)).await?;
// Request metadata
let request_id = ctx.request_id();
let start_time = ctx.start_time();
// Processing...
let result = process_data(&data).await?;
// Performance tracking
ctx.record_metric("processing_time", start_time.elapsed()).await?;
Ok(result)
}
TurboMCP provides built-in OAuth 2.0 support:
use turbomcp::prelude::*;
use turbomcp::auth::*;
#[derive(Clone)]
struct SecureServer {
oauth_providers: Arc<RwLock<HashMap<String, OAuth2Provider>>>,
}
#[server]
impl SecureServer {
#[tool("Get user profile")]
async fn get_user_profile(&self, ctx: Context) -> McpResult<UserProfile> {
let user = ctx.authenticated_user()
.ok_or_else(|| McpError::Unauthorized("Authentication required".to_string()))?;
Ok(UserProfile {
id: user.id,
name: user.name,
email: user.email,
})
}
#[tool("Start OAuth flow")]
async fn start_oauth_flow(&self, provider: String) -> McpResult<String> {
let providers = self.oauth_providers.read().await;
let oauth_provider = providers.get(&provider)
.ok_or_else(|| McpError::InvalidInput(format!("Unknown provider: {}", provider)))?;
let auth_result = oauth_provider.start_authorization().await?;
Ok(format!("Visit: {}", auth_result.auth_url))
}
}
Configure comprehensive security features:
use axum::{routing::get, Router};
use turbomcp_transport::{AxumMcpExt, McpServerConfig};
let config = McpServerConfig::production()
.with_cors_origins(vec!["https://app.example.com".to_string()])
.with_custom_csp("default-src 'self'; connect-src 'self' wss:")
.with_rate_limit(120, 20) // 120 req/min, 20 burst
.with_jwt_auth("your-secret-key".to_string());
let app = Router::new()
.route("/api/status", get(status_handler))
.merge(Router::<()>::turbo_mcp_routes_for_merge(mcp_service, config));
Perfect for Claude Desktop and local development:
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
MyServer::new().run_stdio().await?;
Ok(())
}
For web applications and browser integration:
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
MyServer::new().run_http("0.0.0.0:8080").await?;
Ok(())
}
For real-time bidirectional communication:
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
MyServer::new().run_websocket("0.0.0.0:8080").await?;
Ok(())
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let server = MyServer::new();
match std::env::var("TRANSPORT").as_deref() {
Ok("http") => server.run_http("0.0.0.0:8080").await?,
Ok("websocket") => server.run_websocket("0.0.0.0:8080").await?,
Ok("tcp") => server.run_tcp("0.0.0.0:8080").await?,
Ok("unix") => server.run_unix("/tmp/mcp.sock").await?,
_ => server.run_stdio().await?, // Default
}
Ok(())
}
TurboMCP v1.1.0 introduces a comprehensive shared-wrapper system that eliminates Arc/Mutex complexity from public APIs:
use turbomcp_client::{Client, SharedClient};
use turbomcp_transport::StdioTransport;
// Create shared client for concurrent access
let transport = StdioTransport::new();
let client = Client::new(transport);
let shared = SharedClient::new(client);
// Initialize once
shared.initialize().await?;
// Clone for concurrent usage
let shared1 = shared.clone();
let shared2 = shared.clone();
// Both tasks can access the client concurrently
let handle1 = tokio::spawn(async move {
shared1.list_tools().await
});
let handle2 = tokio::spawn(async move {
shared2.list_prompts().await
});
let (tools, prompts) = tokio::join!(handle1, handle2);
use turbomcp_transport::{StdioTransport, SharedTransport};
// Wrap any transport for sharing
let transport = StdioTransport::new();
let shared = SharedTransport::new(transport);
// Connect once
shared.connect().await?;
// Share across tasks
let shared1 = shared.clone();
let shared2 = shared.clone();
let handle1 = tokio::spawn(async move {
shared1.send(message).await
});
let handle2 = tokio::spawn(async move {
shared2.receive().await
});
use turbomcp_core::shared::{Shared, ConsumableShared};
// Any type can be made shareable
let counter = MyCounter::new();
let shared = Shared::new(counter);
// Use with closures for fine-grained control
shared.with_mut(|c| c.increment()).await;
let value = shared.with(|c| c.get()).await;
// Consumable variant for one-time use
let server = MyServer::new();
let shared = ConsumableShared::new(server);
let server = shared.consume().await?; // Extracts the value
Use the mcp_error! macro for easy error creation:
#[tool("Divide numbers")]
async fn divide(&self, a: f64, b: f64) -> McpResult<f64> {
if b == 0.0 {
return Err(mcp_error!("Division by zero: {} / {}", a, b));
}
Ok(a / b)
}
#[tool("Read file")]
async fn read_file(&self, path: String) -> McpResult<String> {
tokio::fs::read_to_string(&path).await
.map_err(|e| mcp_error!("Failed to read file {}: {}", path, e))
}
TurboMCP provides comprehensive error types:
use turbomcp::McpError;
match result {
Err(McpError::InvalidInput(msg)) => {
// Handle validation errors
},
Err(McpError::Unauthorized(msg)) => {
// Handle authentication errors
},
Err(McpError::Resource(msg)) => {
// Handle resource access errors
},
Err(McpError::Transport(msg)) => {
// Handle transport errors
},
Ok(value) => {
// Process success case
}
}
TurboMCP automatically generates JSON schemas for custom types:
use serde::{Serialize, Deserialize};
#[derive(Serialize, Deserialize)]
struct CreateUserRequest {
name: String,
email: String,
age: Option<u32>,
}
#[derive(Serialize, Deserialize)]
struct User {
id: u64,
name: String,
email: String,
created_at: chrono::DateTime<chrono::Utc>,
}
#[tool("Create a new user")]
async fn create_user(&self, request: CreateUserRequest) -> McpResult<User> {
// Schema automatically generated for both types
let user = User {
id: generate_id(),
name: request.name,
email: request.email,
created_at: chrono::Utc::now(),
};
// Save to database
self.database.save_user(&user).await?;
Ok(user)
}
Handle shutdown signals gracefully:
use tokio::signal;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let server = MyServer::new();
let (server, shutdown_handle) = server.into_server_with_shutdown()?;
let server_task = tokio::spawn(async move {
server.run_stdio().await
});
signal::ctrl_c().await?;
// NOTE: For STDIO transport, avoid logging to prevent JSON-RPC pollution
// For other transports, you could use: tracing::info!("Shutdown signal received");
shutdown_handle.shutdown().await;
server_task.await??;
Ok(())
}
Enable SIMD acceleration for maximum performance:
[dependencies]
turbomcp = { version = "1.1.0", features = ["simd"] }
Configure performance settings:
use turbomcp_core::{SessionManager, SessionConfig};
let config = SessionConfig::high_performance()
.with_simd_acceleration(true)
.with_connection_pooling(true)
.with_circuit_breakers(true);
let server = MyServer::new()
.with_session_config(config)
.with_compression(true);
Test your tools directly:
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_calculator() {
let calc = Calculator;
let result = calc.test_tool_call("add", serde_json::json!({
"a": 5,
"b": 3
})).await.unwrap();
assert_eq!(result, serde_json::json!(8));
}
}
Use the TurboMCP CLI for integration testing:
# Install CLI
cargo install turbomcp-cli
# Test server functionality
turbomcp-cli tools-list --url http://localhost:8080/mcp
turbomcp-cli tools-call --url http://localhost:8080/mcp --name add --arguments '{"a": 5, "b": 3}'
turbomcp-cli schema-export --url http://localhost:8080/mcp --output schemas.json
Add to your Claude Desktop configuration:
{
"mcpServers": {
"my-turbomcp-server": {
"command": "/path/to/your/server/binary",
"args": []
}
}
}
Use the TurboMCP client:
use turbomcp_client::{ClientBuilder, Transport};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = ClientBuilder::new()
.transport(Transport::stdio_with_command("./my-server"))
.connect().await?;
let tools = client.list_tools().await?;
println!("Available tools: {:?}", tools);
let result = client.call_tool("add", serde_json::json!({
"a": 5,
"b": 3
})).await?;
println!("Result: {:?}", result);
Ok(())
}
Explore comprehensive examples in the examples/ directory:
# Basic calculator server
cargo run --example 01_basic_calculator
# File system tools
cargo run --example 02_file_tools
# Database integration
cargo run --example 03_database_server
# Web scraping tools
cargo run --example 04_web_tools
# Authentication with OAuth 2.0
cargo run --example 09_oauth_authentication
# HTTP server with advanced features
cargo run --example 10_http_server
| Feature | Description | Default |
|---|---|---|
| simd | Enable SIMD acceleration for JSON processing | ❌ |
| oauth | Enable OAuth 2.0 authentication | ✅ |
| metrics | Enable metrics collection and endpoints | ✅ |
| compression | Enable response compression | ✅ |
| all-transports | Enable all transport protocols | ✅ |
| minimal | Minimal build (STDIO only) | ❌ |
# Build with all features
cargo build --all-features
# Build optimized for production
cargo build --release --features simd
# Run tests
cargo test --workspace
To contribute, create a feature branch (git checkout -b feature-name) and run the test suite (make test) before opening a pull request.
TurboMCP's compile-time first design philosophy delivers measurable performance advantages over traditional MCP implementations:
- The #[server], #[tool], and #[resource] macros pre-compute all metadata at build time
- serde_json with SIMD acceleration

Compile-Time vs Runtime Trade-offs:
// Other frameworks: Runtime overhead
fn handle_tool(name: &str, args: Value) {
let schema = generate_schema(name); // Runtime cost
let handler = lookup_handler(name); // HashMap lookup
validate_parameters(args, schema); // Runtime validation
// ...
}
// TurboMCP: Compile-time optimization
#[tool("Add numbers")] // Pre-computed at build
async fn add(&self, a: i32, b: i32) -> McpResult<i32> {
Ok(a + b) // Direct dispatch, no lookup
}
Key Differentiators:
While runtime-based frameworks offer flexibility, TurboMCP's compile-time approach favors predictable, low-overhead dispatch.
The TurboMCP Philosophy: "Handle complexity at compile time so runtime can be blazingly fast."
# Run performance benchmarks
cargo bench
# Test SIMD acceleration
cargo run --example simd_performance --features simd
# Profile memory usage
cargo run --example memory_profile
Licensed under the MIT License.
Built with ❤️ by the TurboMCP team. Ready for production, optimized for performance, designed for developers.