| Crates.io | synqly |
| lib.rs | synqly |
| version | 0.1.0 |
| created_at | 2026-01-08 03:05:11.356333+00 |
| updated_at | 2026-01-08 03:05:11.356333+00 |
| description | Official Rust client for the Synqly API - a unified LLM gateway |
| homepage | |
| repository | https://github.com/onoja123/synqly-rust |
| max_upload_size | |
| id | 2029452 |
| size | 75,033 |
One API for Every AI Model
Official Rust client for the Synqly API — a unified LLM gateway that lets you interact with multiple AI providers (OpenAI, Anthropic, Google, and more) using a single, consistent interface.
Add this to your Cargo.toml:
[dependencies]
synqly = "0.1.0"
tokio = { version = "1.0", features = ["full"] }
Quick start:

use synqly::{Client, Config, ChatCreateParams, Message};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a client with your Synqly API key
    let client = Client::new(Config {
        api_key: "YOUR_API_KEY".to_string(),
        base_url: None,
    });

    // Send a chat completion request through the gateway
    let response = client.chat().create(ChatCreateParams {
        provider: Some("openai".to_string()),
        model: "gpt-4".to_string(),
        messages: vec![
            Message {
                role: "user".to_string(),
                content: "Hello!".to_string(),
            }
        ],
        temperature: None,
        max_tokens: None,
        top_p: None,
    }).await?;

    println!("{}", response.content);
    Ok(())
}
A basic chat completion:

let response = client.chat().create(ChatCreateParams {
    provider: Some("openai".to_string()),
    model: "gpt-4".to_string(),
    messages: vec![
        Message {
            role: "user".to_string(),
            content: "What is the capital of France?".to_string(),
        }
    ],
    temperature: None,
    max_tokens: None,
    top_p: None,
}).await?;

println!("{}", response.content);
A request with a system message and explicit sampling parameters:

let response = client.chat().create(ChatCreateParams {
    provider: Some("anthropic".to_string()),
    model: "claude-sonnet-4".to_string(),
    messages: vec![
        Message {
            role: "system".to_string(),
            content: "You are a helpful assistant.".to_string(),
        },
        Message {
            role: "user".to_string(),
            content: "Explain quantum computing in simple terms.".to_string(),
        }
    ],
    temperature: Some(0.7),
    max_tokens: Some(500),
    top_p: None,
}).await?;

println!("{}", response.content);
println!("Tokens used: {}", response.usage.total_tokens);
Switching providers only requires changing the provider and model fields; the rest of the request stays the same:

// A shared conversation reused across all three providers
let messages = vec![
    Message {
        role: "user".to_string(),
        content: "Hello!".to_string(),
    }
];

// OpenAI
let response = client.chat().create(ChatCreateParams {
    provider: Some("openai".to_string()),
    model: "gpt-4".to_string(),
    messages: messages.clone(),
    ..Default::default()
}).await?;

// Anthropic
let response = client.chat().create(ChatCreateParams {
    provider: Some("anthropic".to_string()),
    model: "claude-sonnet-4".to_string(),
    messages: messages.clone(),
    ..Default::default()
}).await?;

// Google
let response = client.chat().create(ChatCreateParams {
    provider: Some("google".to_string()),
    model: "gemini-pro".to_string(),
    messages: messages,
    ..Default::default()
}).await?;
Client::new(config: Config)

Creates a new Synqly client.
| Field | Type | Required | Description |
|---|---|---|---|
| `api_key` | `String` | Yes | Your Synqly API key |
| `base_url` | `Option<String>` | No | Custom base URL (defaults to production) |
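For example, a client aimed at a non-default endpoint. This is only a sketch: the URL is a placeholder, and the SYNQLY_API_KEY environment variable matches the one used in the examples below.

use synqly::{Client, Config};

// Point the client at a custom gateway endpoint instead of production.
// The URL is a placeholder; leave base_url as None to use the default.
let client = Client::new(Config {
    api_key: std::env::var("SYNQLY_API_KEY").expect("SYNQLY_API_KEY not set"),
    base_url: Some("https://your-gateway.example.com".to_string()),
});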
client.chat().create(params: ChatCreateParams)

Creates a chat completion.
| Field | Type | Required | Description |
|---|---|---|---|
| `provider` | `Option<String>` | No | AI provider (`openai`, `anthropic`, `google`) |
| `model` | `String` | Yes | Model name |
| `messages` | `Vec<Message>` | Yes | Conversation messages |
| `temperature` | `Option<f64>` | No | Sampling temperature (0.0-2.0) |
| `max_tokens` | `Option<i32>` | No | Max tokens in response |
| `top_p` | `Option<f64>` | No | Nucleus sampling |
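As a reference sketch, a request that fills in all of the optional sampling fields might look like this (the values are only illustrative):

let response = client.chat().create(ChatCreateParams {
    provider: Some("openai".to_string()),
    model: "gpt-4".to_string(),
    messages: vec![
        Message {
            role: "user".to_string(),
            content: "Summarize the plot of Hamlet in two sentences.".to_string(),
        }
    ],
    temperature: Some(0.2), // lower values give more deterministic output
    max_tokens: Some(200),  // cap the length of the reply
    top_p: Some(0.9),       // nucleus sampling threshold
}).await?;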
Chat completions return a ChatResponse:

pub struct ChatResponse {
    pub id: String,
    pub provider: String,
    pub model_type: String,
    pub content: String,
    pub usage: Usage,
    pub finish_reason: String,
    pub created_at: String,
}

impl ChatResponse {
    pub fn get_content(&self) -> &str;
}
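Given a response from create, the fields can be read directly (total_tokens on Usage is the field used in the example above):

// Inspect the response metadata alongside the generated text
println!("Provider:      {}", response.provider);
println!("Model:         {}", response.model_type);
println!("Finish reason: {}", response.finish_reason);
println!("Total tokens:  {}", response.usage.total_tokens);
println!("Content:       {}", response.get_content());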
| Provider | Models |
|---|---|
| `openai` | gpt-4, gpt-4-turbo, gpt-3.5-turbo |
| `anthropic` | claude-sonnet-4, claude-3-opus, claude-3-haiku |
| `google` | gemini-pro, gemini-ultra |
The SDK uses thiserror for structured error handling:
use synqly::Error;

match client.chat().create(params).await {
    Ok(response) => {
        println!("Success: {}", response.content);
    }
    Err(Error::ApiError { status_code, message }) => {
        eprintln!("API Error {}: {}", status_code, message);
    }
    Err(Error::ValidationError(msg)) => {
        eprintln!("Validation Error: {}", msg);
    }
    Err(e) => {
        eprintln!("Error: {}", e);
    }
}
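Errors can also be propagated with `?` from a function that returns the SDK's Error type. A minimal sketch, assuming chat() borrows the client and create returns Result<ChatResponse, Error>:

use synqly::{ChatCreateParams, Client, Error, Message};

// Propagate any SDK error to the caller instead of handling it inline
async fn ask(client: &Client, question: &str) -> Result<String, Error> {
    let response = client.chat().create(ChatCreateParams {
        provider: Some("openai".to_string()),
        model: "gpt-4".to_string(),
        messages: vec![
            Message {
                role: "user".to_string(),
                content: question.to_string(),
            }
        ],
        temperature: None,
        max_tokens: None,
        top_p: None,
    }).await?;
    Ok(response.content)
}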
Run the examples with:
export SYNQLY_API_KEY=your_api_key_here
cargo run --example basic
Contributions are welcome! Feel free to open an issue or submit a pull request.
MIT
synqly-rust/
├── Cargo.toml
├── README.md
├── src/
│ ├── lib.rs # Main library entry point
│ ├── client.rs # HTTP client implementation
│ ├── chat.rs # Chat service
│ ├── types.rs # Type definitions
│ └── error.rs # Error types
├── examples/
│ └── basic.rs # Basic usage example
└── tests/
└── integration_test.rs
For questions and support, please visit synqly.xyz or open an issue on GitHub.