| Crates.io | callix |
| lib.rs | callix |
| version | 0.1.1 |
| created_at | 2025-12-15 20:32:45.069713+00 |
| updated_at | 2025-12-29 18:03:13.909301+00 |
| description | A flexible HTTP client library for calling various AI APIs, with configuration and templating support |
| homepage | |
| repository | https://github.com/naseridev/callix |
| max_upload_size | |
| id | 1986686 |
| size | 185,276 |
A flexible, configuration-driven HTTP client library for Rust, designed for seamless integration with AI APIs and RESTful services.
Add Callix to your Cargo.toml:
[dependencies]
callix = "0.1.0"
tokio = { version = "1", features = ["full"] }
serde_json = "1.0"
Create a new Rust project:
cargo new my_callix_project
cd my_callix_project
Add dependencies using cargo add (Rust 1.62+):
cargo add callix
cargo add tokio --features full
cargo add serde_json
Or manually in Cargo.toml:
[dependencies]
callix = "0.1.0"
tokio = { version = "1", features = ["full"] }
serde_json = "1.0"
Write your first program in src/main.rs:
use callix::CallixBuilder;
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let callix = CallixBuilder::new()
        .timeout(Duration::from_secs(60))
        .build()?;

    println!("Callix is ready!");
    Ok(())
}
Run your project:
cargo run
For secure API key management, create a .env file:
# .env
OPENAI_API_KEY=sk-your-key-here
GEMINI_API_KEY=your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
Add the dotenv crate:
cargo add dotenv
Load environment variables in your code:
use dotenv::dotenv;
use std::env;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();

    let api_key = env::var("OPENAI_API_KEY")?;
    // Use api_key...

    Ok(())
}
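Putting the two together, the key loaded from the environment can be passed straight into a request as a template variable. A minimal sketch, assuming the request/var/send API shown in the examples below:
use callix::CallixBuilder;
use dotenv::dotenv;
use std::env;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load .env before reading any keys
    dotenv().ok();
    let api_key = env::var("OPENAI_API_KEY")?;

    let callix = CallixBuilder::new()
        .timeout(Duration::from_secs(60))
        .build()?;

    // The key from the environment replaces a hard-coded string
    let response = callix
        .request("openai", "chat")?
        .var("API_KEY", api_key)
        .var("model", "gpt-4")
        .var("messages", serde_json::json!([
            {"role": "user", "content": "Hello"}
        ]))
        .send()
        .await?;

    println!("Status: {}", response.status());
    Ok(())
}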
use callix::CallixBuilder;
use serde_json::json;
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a client
    let callix = CallixBuilder::new()
        .timeout(Duration::from_secs(60))
        .retries(3)
        .retry_delay(Duration::from_secs(1))
        .build()?;

    // Make a request to Gemini
    let response = callix
        .request("gemini", "generate")?
        .var("API_KEY", "your-api-key")
        .var("model", "gemini-2.0-flash-exp")
        .var("prompt", "Hello, world!")
        .send()
        .await?;

    // Handle the response
    if response.is_success() {
        let body = response.text().await?;
        println!("Response: {}", body);
    }

    Ok(())
}
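The success check above silently ignores failures; a slightly fuller variant (a sketch, using the status() method shown later in this README) also reports non-2xx responses:
if response.is_success() {
    let body = response.text().await?;
    println!("Response: {}", body);
} else {
    // Non-2xx: report the status code and whatever error body the API sent
    eprintln!("Request failed with status {}", response.status());
    eprintln!("{}", response.text().await?);
}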
use serde_json::json;
let response = callix
    .request("openai", "chat")?
    .var("API_KEY", "sk-...")
    .var("model", "gpt-4")
    .var("messages", json!([
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Rust ownership in simple terms"}
    ]))
    .send()
    .await?;

let json: serde_json::Value = response.json().await?;
println!("{}", json["choices"][0]["message"]["content"]);
let response = callix
    .request("anthropic", "messages")?
    .var("API_KEY", "sk-ant-...")
    .var("model", "claude-3-5-sonnet-20241022")
    .var("max_tokens", 1024)
    .var("messages", json!([
        {"role": "user", "content": "Explain quantum computing"}
    ]))
    .send()
    .await?;
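Claude's reply lives under a content array rather than OpenAI-style choices; a short sketch of pulling the text out, assuming the standard Anthropic Messages response shape:
let json: serde_json::Value = response.json().await?;

// Anthropic nests the generated text under content[0].text
if let Some(text) = json["content"][0]["text"].as_str() {
    println!("{}", text);
}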
Callix includes pre-configured settings for popular AI providers, including the OpenAI, Gemini, and Anthropic endpoints used in the examples above.
Create a config.yaml file to define your own API endpoints:
providers:
  my_api:
    base_url: "https://api.example.com"
    headers:
      Authorization: "Bearer {{API_KEY}}"
      Content-Type: "application/json"
    timeout: 30  # seconds (optional)
    endpoints:
      predict:
        path: "/v1/predict"
        method: "POST"
        body_template: |
          {
            "input": "{{text}}",
            "model": "{{model}}",
            "temperature": {{temperature}}
          }
        query_params:
          version: "{{api_version}}"
Use your custom configuration:
let callix = CallixBuilder::new()
    .config("config.yaml")
    .build()?;

let response = callix
    .request("my_api", "predict")?
    .var("API_KEY", "secret")
    .var("text", "Hello")
    .var("model", "v2")
    .var("temperature", 0.7)
    .var("api_version", "latest")
    .send()
    .await?;
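With those variables, the body_template from config.yaml renders to plain JSON; quoted placeholders become strings, while the bare {{temperature}} stays numeric:
{
  "input": "Hello",
  "model": "v2",
  "temperature": 0.7
}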
let response = callix
    .request("openai", "chat")?
    .var("API_KEY", "sk-...")
    .var("model", "gpt-4")
    .var("messages", json!([
        {"role": "user", "content": "Hello"}
    ]))
    .header("X-Custom-Header", "value")
    .header("X-Request-ID", "12345")
    .send()
    .await?;
let callix = CallixBuilder::new()
    .retries(5)
    .retry_delay(Duration::from_secs(2))
    .timeout(Duration::from_secs(120))
    .build()?;
match response.status() {
    200..=299 => {
        let json: serde_json::Value = response.json().await?;
        println!("Success: {:#?}", json);
    }
    400 => println!("Bad Request - check your input"),
    401 => println!("Unauthorized - verify your API key"),
    429 => println!("Rate Limited - please retry later"),
    500..=599 => println!("Server Error - try again"),
    _ => println!("Unexpected status: {}", response.status()),
}
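The builder's retries already cover transient failures; for explicit rate-limit handling, a manual loop is one option. A hedged sketch with a fixed exponential backoff (the Callix and CallixResponse type paths are assumptions based on the architecture overview below, and real APIs often send a Retry-After header a production client should honor instead):
use std::time::Duration;

// Sketch: retry a rate-limited call with fixed exponential backoff
async fn send_with_backoff(
    callix: &callix::Callix,
    api_key: &str,
) -> Result<callix::CallixResponse, Box<dyn std::error::Error>> {
    let mut attempt = 0u32;
    loop {
        let response = callix
            .request("openai", "chat")?
            .var("API_KEY", api_key.to_string())
            .var("model", "gpt-3.5-turbo")
            .var("messages", serde_json::json!([
                {"role": "user", "content": "Hello"}
            ]))
            .send()
            .await?;

        // Stop retrying on anything other than 429, or after three attempts
        if response.status() != 429 || attempt >= 3 {
            return Ok(response);
        }
        attempt += 1;
        tokio::time::sleep(Duration::from_secs(2u64.pow(attempt))).await;
    }
}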
let prompts = vec![
    "Explain machine learning",
    "What is quantum computing?",
    "Describe neural networks",
];

for prompt in prompts {
    let response = callix
        .request("openai", "chat")?
        // clone so the key is still available on the next iteration
        .var("API_KEY", api_key.clone())
        .var("model", "gpt-3.5-turbo")
        .var("messages", json!([
            {"role": "user", "content": prompt}
        ]))
        .send()
        .await?;

    // Process the response
    let json: serde_json::Value = response.json().await?;
    println!("{}: {}", prompt, json["choices"][0]["message"]["content"]);

    // Simple rate limiting between calls
    tokio::time::sleep(Duration::from_millis(500)).await;
}
use futures::future::join_all;
let prompts = vec!["Prompt 1", "Prompt 2", "Prompt 3"];

let futures: Vec<_> = prompts.iter().map(|&prompt| {
    let callix = callix.clone();
    let api_key = api_key.clone();
    async move {
        callix
            .request("openai", "chat")?
            .var("API_KEY", api_key)
            .var("model", "gpt-3.5-turbo")
            .var("messages", json!([
                {"role": "user", "content": prompt}
            ]))
            .send()
            .await
    }
}).collect();

let results = join_all(futures).await;
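join_all comes from the futures crate (add it with cargo add futures) and preserves order, so the results line up with the prompts and can be handled individually, e.g.:
for (prompt, result) in prompts.iter().zip(results) {
    match result {
        Ok(response) if response.is_success() => {
            println!("{}: status {}", prompt, response.status());
        }
        Ok(response) => eprintln!("{}: failed with status {}", prompt, response.status()),
        Err(e) => eprintln!("{}: request error: {}", prompt, e),
    }
}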
The repository includes runnable examples:
openai.rs - OpenAI ChatGPT integration
gemini.rs - Google Gemini API
anthropic.rs - Anthropic Claude API
Run them with:
cargo run --example openai
cargo run --example gemini
cargo run --example anthropic
Customize Callix with feature flags:
[dependencies]
callix = { version = "0.1", features = ["rustls-tls", "gzip"] }
| Feature | Description | Default |
|---|---|---|
| native-tls | Use system's native TLS | ✓ |
| rustls-tls | Use Rustls (pure Rust TLS) | ✗ |
| blocking | Blocking HTTP client support | ✗ |
| cookies | Cookie store support | ✗ |
| gzip | Gzip compression | ✗ |
| brotli | Brotli compression | ✗ |
| stream | Streaming response support | ✗ |
CallixBuilder → Callix → RequestBuilder → HTTP Request → CallixResponse
      ↓                        ↓
    Config               TemplateEngine
client - Main client implementation and HTTP method parsing
config - Configuration management and provider definitions
request - Request building and execution with retry logic
response - Response handling and parsing utilities
template - Variable substitution and template rendering
error - Comprehensive error types and conversions
Callix requires Rust 1.75 or higher.
Contributions are welcome! Here's how to get started:
Create a feature branch, commit your changes, and push:
git checkout -b feature/amazing-feature
git commit -m 'Add amazing feature'
git push origin feature/amazing-feature
To build and test locally, clone the repository:
git clone https://github.com/naseridev/callix.git
cd callix
cargo build
cargo test
Run cargo fmt and cargo clippy before committing.
Built with these amazing Rust crates: