| Crates.io | cnctd_ai |
| lib.rs | cnctd_ai |
| version | 0.1.13 |
| created_at | 2025-10-01 16:57:41.051846+00 |
| updated_at | 2026-01-09 16:32:53.937416+00 |
| description | AI and LLM utilities |
| homepage | |
| repository | https://github.com/Connected-Dot/cnctd_server |
| max_upload_size | |
| id | 1862990 |
| size | 530,687 |
A Rust abstraction layer for AI/LLM providers (Anthropic Claude, OpenAI) with integrated MCP (Model Context Protocol) support and autonomous agent framework.
Add to your Cargo.toml:
[dependencies]
cnctd_ai = "0.1.5"
The easiest way to build autonomous AI applications:
use cnctd_ai::{Agent, Client, AnthropicConfig, McpGateway};
// Setup client and gateway
let client = Client::anthropic(
AnthropicConfig {
api_key: "your-key".into(),
model: "claude-sonnet-4-20250514".into(),
version: None,
},
None,
)?;
let gateway = McpGateway::new("https://mcp.cnctd.world");
// Create agent with default settings
let agent = Agent::new(&client).with_gateway(&gateway);
// Run an autonomous task - the agent will use tools as needed
let trace = agent.run_simple(
"Research the latest Rust async trends and summarize key findings"
).await?;
// View results
trace.print_summary();
For advanced configuration:
let agent = Agent::builder(&client)
.max_iterations(10)
.max_duration(Duration::from_secs(300))
.system_prompt("You are a helpful research assistant.")
.gateway(&gateway)
.build();
See Agent Framework Documentation for more details.
For one-off completions without the agent loop:
use cnctd_ai::{Client, AnthropicConfig, Message, CompletionRequest};
let client = Client::anthropic(
AnthropicConfig {
api_key: "your-api-key".into(),
model: "claude-sonnet-4-20250514".into(),
version: None,
},
None,
)?;
let request = CompletionRequest {
messages: vec![Message::user("Hello, how are you?")],
tools: None,
options: None,
};
let response = client.complete(request).await?;
println!("Response: {}", response.text());
Streaming responses use the same request type:
use cnctd_ai::{Client, AnthropicConfig, Message, CompletionRequest};
use futures::StreamExt; // for `.next()`, assuming the stream implements futures::Stream
// Reuses the `client` and `request` from the completion example above
let mut stream = client.complete_stream(request).await?;
while let Some(chunk) = stream.next().await {
let chunk = chunk?;
if let Some(text) = chunk.text() {
print!("{}", text);
}
}
Tool (function) calling:
use cnctd_ai::{Client, Message, CompletionRequest, create_tool};
use serde_json::json;
// Create a tool using the helper function
let weather_tool = create_tool(
"get_weather",
"Get the current weather for a location",
json!({
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
})
)?;
let mut request = CompletionRequest {
messages: vec![Message::user("What's the weather in SF?")],
tools: None,
options: None,
};
request.add_tool(weather_tool);
let response = client.complete(request).await?;
// Check if model wants to use a tool
if let Some(tool_use) = response.tool_use() {
println!("Tool: {}", tool_use.name);
println!("Arguments: {}", tool_use.input);
// Execute tool and continue conversation
// See examples/tool_calling.rs for full implementation
}
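To close the loop, the tool's output goes back to the model as a tool result. A hedged sketch using ToolResult::new and Message::tool_results from the multi-turn notes further down; run_weather_tool is a hypothetical local executor, and ToolResult is assumed to be exported at the crate root like the other types:
use cnctd_ai::ToolResult;

if let Some(tool_use) = response.tool_use() {
    // Hypothetical local execution of the requested tool
    let output = run_weather_tool(&tool_use.input);
    let mut messages = vec![Message::user("What's the weather in SF?")];
    // Include the assistant message that requested the tool call
    messages.push(response.message.clone());
    // ID handling mirrors the OpenAI Responses API notes later in this README
    let result = ToolResult::new(tool_use.call_id.clone().unwrap_or(tool_use.id.clone()), output);
    messages.push(Message::tool_results(vec![result]));
    let followup = client.complete(CompletionRequest { messages, tools: None, options: None }).await?;
    println!("Final: {}", followup.text());
}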
Talk to an MCP gateway directly:
use cnctd_ai::McpGateway;
use serde_json::json;
let gateway = McpGateway::new("https://mcp.cnctd.world");
// List available servers
let servers = gateway.list_servers().await?;
// List tools from a specific server
let tools = gateway.list_tools("brave-search").await?;
// Execute a tool
let result = gateway.call_tool(
"brave-search",
"brave_web_search",
Some(json!({"query": "Rust programming"})),
).await?;
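A gateway tool can also back a model tool call. A hedged bridge sketch; the parameter and return types of call_tool are assumptions here, and tool_use.input is assumed to be a serde_json::Value:
// If the model asked for a web search, route it through the gateway
if let Some(tool_use) = response.tool_use() {
    let result = gateway.call_tool(
        "brave-search",
        &tool_use.name,
        Some(tool_use.input.clone()),
    ).await?;
    // Feed the output back as a tool result, as in the tool calling section
    let tool_result = ToolResult::new(
        tool_use.call_id.clone().unwrap_or(tool_use.id.clone()),
        result.to_string(),
    );
    // ...push into messages and continue the conversation
}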
The repository includes several examples:
Agent Framework:
- agent_simple.rs - Minimal agent setup
- agent_basic.rs - Full-featured agent with configuration

Core Functionality:
- basic_completion.rs - Simple completion example
- streaming.rs - Streaming responses
- tool_calling.rs - Function/tool calling
- tool_calling_streaming.rs - Tool calling with streaming
- conversation.rs - Multi-turn conversations
- error_handling.rs - Error handling patterns
- mcp_gateway.rs - MCP gateway integration

Run examples with:
cargo run --example agent_simple
cargo run --example basic_completion
Set these environment variables for the examples:
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
GATEWAY_URL=https://mcp.cnctd.world # Optional
GATEWAY_TOKEN=your-token # Optional
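A hedged sketch of wiring these variables into a client, reusing the config shown earlier (assumes an error type that can absorb std::env::VarError, e.g. anyhow):
use std::env;
use cnctd_ai::{Client, AnthropicConfig};

let client = Client::anthropic(
    AnthropicConfig {
        api_key: env::var("ANTHROPIC_API_KEY")?, // VarError must convert into your error type
        model: "claude-sonnet-4-20250514".into(),
        version: None,
    },
    None,
)?;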
The library provides helper functions for easier tool creation:
use cnctd_ai::{create_tool, create_tool_borrowed};
// For owned strings (runtime data)
let tool = create_tool(name, description, schema)?;
// For static strings (compile-time constants)
let tool = create_tool_borrowed(name, description, schema)?;
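For instance, the weather tool from earlier fits the borrowed variant; a hedged sketch assuming it accepts &'static str for name and description:
use serde_json::json;

let tool = create_tool_borrowed(
    "get_weather",
    "Get the current weather for a location",
    json!({
        "type": "object",
        "properties": { "location": { "type": "string" } },
        "required": ["location"]
    }),
)?;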
cnctd_ai uses OpenAI's newer Responses API (/v1/responses) for GPT-4, GPT-4.1, GPT-5, and reasoning models (o1, o3). This provides better tool calling support but has specific requirements for multi-turn conversations:
- call_id: OpenAI uses call_id (format: call_...) to match function_call items with their function_call_output responses
- ToolUse.call_id: The library captures this from API responses and stores it in ToolUse.call_id
- ToolResult.effective_call_id(): Returns the correct ID to use when sending tool results back

Reasoning models require special handling for multi-turn tool calls:
- Responses include reasoning.encrypted_content items for reasoning models
- The library carries these on Message.reasoning_items

The library handles this automatically - just ensure you preserve reasoning_items when building continuation messages:
// After getting a response with tool calls
let response = client.complete(request).await?;
// The response.message includes reasoning_items if present
// When building the next request, include the full message
messages.push(response.message.clone());
// Add tool results
let tool_result = ToolResult::new(tool_use.call_id.unwrap_or(tool_use.id), output);
messages.push(Message::tool_results(vec![tool_result]));
When persisting and reconstructing conversations from a database:
- call_id: Save both tool_use_id and call_id from tool calls
- Pairing: every function_call must have a matching function_call_output
- Preserve reasoning_items for reasoning models (a persistence sketch follows)
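A hedged sketch of a record you might persist per tool call; the struct name and exact field types are assumptions, but the field set mirrors the bullets above:
use serde::{Deserialize, Serialize};
use serde_json::Value;

// Hypothetical persistence record (not part of cnctd_ai)
#[derive(Serialize, Deserialize)]
struct StoredToolCall {
    tool_use_id: String,            // Anthropic-style tool use ID
    call_id: Option<String>,        // OpenAI Responses API "call_..." ID, when present
    output: String,                 // the matching function_call_output payload
    reasoning_items: Option<Value>, // opaque reasoning items, preserved verbatim
}

The library provides comprehensive error types: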
use cnctd_ai::Error;
match client.complete(request).await {
Ok(response) => { /* handle success */ },
Err(Error::AuthenticationFailed(msg)) => { /* handle auth */ },
Err(Error::RateLimited { retry_after }) => { /* handle rate limit */ },
Err(Error::ProviderError { provider, message, status_code }) => { /* handle provider error */ },
Err(e) => { /* handle other errors */ },
}
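The RateLimited variant can drive a simple backoff. A hedged sketch assuming retry_after is an Option<Duration> and a tokio runtime; neither is confirmed by the crate docs:
use std::time::Duration;
use cnctd_ai::Error;

match client.complete(request).await {
    Err(Error::RateLimited { retry_after }) => {
        // Assumed Option<Duration>; fall back to one second before retrying
        tokio::time::sleep(retry_after.unwrap_or(Duration::from_secs(1))).await;
        // ...retry the request here
    }
    Ok(response) => println!("{}", response.text()),
    Err(e) => { /* handle other errors */ }
}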
MIT License - see LICENSE file for details.