| Crates.io | oxify-connect-llm |
| lib.rs | oxify-connect-llm |
| version | 0.1.0 |
| created_at | 2026-01-19 05:03:13.815932+00 |
| updated_at | 2026-01-19 05:03:13.815932+00 |
| description | LLM provider connectors for OxiFY - OpenAI, Anthropic, Cohere, Ollama support |
| homepage | |
| repository | https://github.com/cool-japan/oxify |
| max_upload_size | |
| id | 2053732 |
| size | 647,904 |
The Ecosystem - LLM Provider Integrations for OxiFY
oxify-connect-llm provides a unified, type-safe interface for multiple LLM providers. Part of OxiFY's Connector Strategy for mass-producing integrations via macros and trait abstractions.
Status: ✅ Phase 2 Complete - OpenAI & Anthropic Production Ready
Roadmap: AWS Bedrock, Google Gemini, Cohere, Mistral, Ollama (Phase 3)
Part of: OxiFY Enterprise Architecture (Codename: Absolute Zero)
use async_trait::async_trait;

/// Unified interface implemented by every LLM connector.
#[async_trait]
pub trait LlmProvider: Send + Sync {
    async fn complete(&self, request: LlmRequest) -> Result<LlmResponse>;
}
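Because every connector implements this one trait, application code can be written once against LlmProvider and handed any backend. A minimal sketch (the ask helper below is hypothetical and assumes the crate's Result alias resolves to Result<T, LlmError>):

use oxify_connect_llm::{LlmError, LlmProvider, LlmRequest};

// Hypothetical provider-agnostic helper: accepts any connector.
async fn ask(provider: &dyn LlmProvider, question: &str) -> Result<String, LlmError> {
    let response = provider.complete(LlmRequest {
        prompt: question.to_string(),
        system_prompt: None,
        temperature: Some(0.2),
        max_tokens: Some(128),
        top_p: None,
        frequency_penalty: None,
        presence_penalty: None,
        stop: None,
    }).await?;
    Ok(response.content)
}

Either of the concrete providers from the examples below can be passed in unchanged.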
use oxify_connect_llm::{LlmProvider, LlmRequest, OpenAIProvider};

let provider = OpenAIProvider::new(
    api_key.to_string(),
    "gpt-4".to_string(),
);

let response = provider.complete(LlmRequest {
    prompt: "What is the capital of France?".to_string(),
    system_prompt: Some("You are a geography expert.".to_string()),
    temperature: Some(0.7),
    max_tokens: Some(100),
    top_p: None,
    frequency_penalty: None,
    presence_penalty: None,
    stop: None,
}).await?;

println!("Response: {}", response.content);
use oxify_connect_llm::{AnthropicProvider, LlmProvider, LlmRequest};

let provider = AnthropicProvider::new(
    api_key.to_string(),
    "claude-3-opus-20240229".to_string(),
);

let response = provider.complete(LlmRequest {
    prompt: "What is the capital of France?".to_string(),
    system_prompt: Some("You are a geography expert.".to_string()),
    temperature: Some(0.7),
    max_tokens: Some(100),
    top_p: None,
    frequency_penalty: None,
    presence_penalty: None,
    stop: None,
}).await?;

println!("Response: {}", response.content);
let provider = OllamaProvider::new(
    "http://localhost:11434".to_string(),
    "llama2".to_string(),
);
Transient failures (for example, rate limiting and network errors) are retried automatically with exponential backoff.
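If you want the same pattern around your own call sites, a wrapper along these lines works. This is a sketch only: it assumes a Tokio runtime, that LlmRequest implements Clone, and that the crate's Result alias is Result<T, LlmError>.

use std::time::Duration;
use oxify_connect_llm::{LlmError, LlmProvider, LlmRequest, LlmResponse};

// Retries rate-limit and network failures with exponentially growing delays.
async fn complete_with_retry(
    provider: &dyn LlmProvider,
    request: LlmRequest, // assumes LlmRequest: Clone
    max_retries: u32,
) -> Result<LlmResponse, LlmError> {
    let mut delay = Duration::from_millis(500);
    for attempt in 0..=max_retries {
        match provider.complete(request.clone()).await {
            Err(LlmError::RateLimited) | Err(LlmError::NetworkError(_))
                if attempt < max_retries =>
            {
                tokio::time::sleep(delay).await;
                delay *= 2; // exponential backoff
            }
            result => return result,
        }
    }
    unreachable!("the final attempt always returns above")
}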
Stream tokens as they arrive:
use futures::StreamExt; // provides .next() on the stream

let mut stream = provider.complete_stream(request).await?;
while let Some(chunk) = stream.next().await {
    print!("{}", chunk.content);
}
Inspect token usage after a completion:
let usage = response.usage.unwrap();
println!("Prompt tokens: {}", usage.prompt_tokens);
println!("Completion tokens: {}", usage.completion_tokens);
println!("Total tokens: {}", usage.total_tokens);
Cache responses to reduce costs:
let provider = OpenAIProvider::new(api_key, model)
    .with_cache(RedisCache::new(redis_url));
pub struct OpenAIProvider {
    api_key: String,
    model: String,
    base_url: Option<String>,     // For Azure OpenAI
    organization: Option<String>,
}
pub struct LlmRequest {
    pub prompt: String,
    pub system_prompt: Option<String>,
    pub temperature: Option<f64>,        // 0.0 - 2.0
    pub max_tokens: Option<u32>,
    pub top_p: Option<f64>,
    pub frequency_penalty: Option<f64>,
    pub presence_penalty: Option<f64>,
    pub stop: Option<Vec<String>>,
}
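Only the prompt is required; everything else is optional. For reference, a fully populated request (values are illustrative) looks like:

let request = LlmRequest {
    prompt: "List three uses of Rust.".to_string(),
    system_prompt: Some("Answer concisely.".to_string()),
    temperature: Some(0.3),              // 0.0 - 2.0
    max_tokens: Some(256),
    top_p: Some(0.9),
    frequency_penalty: Some(0.0),
    presence_penalty: Some(0.0),
    stop: Some(vec!["\n\n".to_string()]),
};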
pub enum LlmError {
    ApiError(String),             // Provider API error
    ConfigError(String),          // Invalid configuration
    SerializationError(String),   // JSON serialization error
    NetworkError(reqwest::Error), // Network error
    RateLimited,                  // Rate limit exceeded
    InvalidRequest(String),       // Invalid request parameters
}
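Callers can branch on these variants, for example to treat rate limiting differently from hard failures. A sketch, reusing the provider and request from the examples above (the {other:?} formatting assumes LlmError derives Debug):

use oxify_connect_llm::LlmError;

match provider.complete(request).await {
    Ok(response) => println!("{}", response.content),
    Err(LlmError::RateLimited) => eprintln!("rate limited - retry later"),
    Err(LlmError::NetworkError(e)) => eprintln!("network failure: {e}"),
    Err(other) => eprintln!("LLM call failed: {other:?}"), // assumes LlmError: Debug
}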
Mock provider for testing:
use oxify_connect_llm::{LlmProvider, MockProvider};

let provider = MockProvider::new()
    .with_response("Paris is the capital of France.");
let response = provider.complete(request).await?;
assert_eq!(response.content, "Paris is the capital of France.");
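In a test suite the mock slots into an ordinary async test. A minimal sketch, assuming a Tokio test runtime:

#[tokio::test]
async fn mock_provider_returns_canned_answer() {
    let provider = MockProvider::new()
        .with_response("Paris is the capital of France.");

    let request = LlmRequest {
        prompt: "What is the capital of France?".to_string(),
        system_prompt: None,
        temperature: None,
        max_tokens: None,
        top_p: None,
        frequency_penalty: None,
        presence_penalty: None,
        stop: None,
    };

    let response = provider.complete(request).await.unwrap();
    assert_eq!(response.content, "Paris is the capital of France.");
}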
Related crates:
- oxify-model: LlmConfig definition
- oxify-engine: Workflow execution