| Crates.io | edgee |
| lib.rs | edgee |
| version | 2.0.1 |
| created_at | 2024-06-15 23:09:38.22934+00 |
| updated_at | 2026-01-14 06:46:26.018377+00 |
| description | Rust SDK for the Edgee AI Gateway |
| homepage | https://www.edgee.cloud |
| repository | https://github.com/edgee-cloud/rust-sdk |
| max_upload_size | |
| id | 1273181 |
| size | 101,476 |
Modern, type-safe Rust SDK for the Edgee AI Gateway.
Add this to your `Cargo.toml`:

```toml
[dependencies]
edgee = "2.0"
tokio = { version = "1", features = ["full"] }
```
```rust
use edgee::Edgee;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Edgee::from_env()?;
    let response = client.send("gpt-4o", "What is the capital of France?").await?;
    println!("{}", response.text().unwrap_or(""));
    // "The capital of France is Paris."
    Ok(())
}
```
The `send()` method makes non-streaming chat completion requests:

```rust
let response = client.send("gpt-4o", "Hello, world!").await?;

// Access the response
println!("{}", response.text().unwrap_or(""));  // text content
println!("{:?}", response.finish_reason());     // finish reason
if let Some(tool_calls) = response.tool_calls() {
    // tool calls (if any)
    println!("{:?}", tool_calls);
}
```
The `stream()` method enables real-time streaming responses:

```rust
use tokio_stream::StreamExt;

let mut stream = client.stream("gpt-4o", "Tell me a story").await?;

while let Some(result) = stream.next().await {
    match result {
        Ok(chunk) => {
            if let Some(text) = chunk.text() {
                print!("{}", text);
            }
            if let Some(reason) = chunk.finish_reason() {
                println!("\nFinished: {}", reason);
            }
        }
        Err(e) => eprintln!("Error: {}", e),
    }
}
```
The returned stream implements the `Stream` trait. For complete documentation, examples, and API reference, visit:

👉 Official Rust SDK Documentation
Run the examples to see the SDK in action:

```bash
export EDGEE_API_KEY="your-api-key"

cargo run --example simple
cargo run --example streaming
cargo run --example tools
```
Licensed under the Apache License, Version 2.0. See LICENSE for details.