synqly 0.1.0

Official Rust client for the Synqly API - a unified LLM gateway
Repository: https://github.com/onoja123/synqly-rust
Author: Okpe Onoja (onoja123)

README

Synqly Rust SDK

One API for Every AI Model

Official Rust client for the Synqly API — a unified LLM gateway that lets you interact with multiple AI providers (OpenAI, Anthropic, Google, and more) using a single, consistent interface.

✨ Features

  • Unified access to OpenAI, Anthropic, Google, and more
  • Simple, idiomatic Rust API
  • Multi-turn conversations
  • Configurable parameters (temperature, max tokens, etc.)
  • Built-in error handling with thiserror
  • Uses Synqly production endpoints by default
  • No vendor lock-in
  • Async/await with Tokio

Installation

Add this to your Cargo.toml:

[dependencies]
synqly = "0.1.0"
tokio = { version = "1.0", features = ["full"] }

Quick Start

use synqly::{Client, Config, ChatCreateParams, Message};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(Config {
        api_key: "YOUR_API_KEY".to_string(),
        base_url: None,
    });

    let response = client.chat().create(ChatCreateParams {
        provider: Some("openai".to_string()),
        model: "gpt-4".to_string(),
        messages: vec![
            Message {
                role: "user".to_string(),
                content: "Hello!".to_string(),
            }
        ],
        temperature: None,
        max_tokens: None,
        top_p: None,
    }).await?;

    println!("{}", response.content);
    Ok(())
}

Usage

Basic Chat Completion

let response = client.chat().create(ChatCreateParams {
    provider: Some("openai".to_string()),
    model: "gpt-4".to_string(),
    messages: vec![
        Message {
            role: "user".to_string(),
            content: "What is the capital of France?".to_string(),
        }
    ],
    temperature: None,
    max_tokens: None,
    top_p: None,
}).await?;

println!("{}", response.content);

Chat with Parameters

let response = client.chat().create(ChatCreateParams {
    provider: Some("anthropic".to_string()),
    model: "claude-sonnet-4".to_string(),
    messages: vec![
        Message {
            role: "system".to_string(),
            content: "You are a helpful assistant.".to_string(),
        },
        Message {
            role: "user".to_string(),
            content: "Explain quantum computing in simple terms.".to_string(),
        }
    ],
    temperature: Some(0.7),
    max_tokens: Some(500),
    top_p: None,
}).await?;

println!("{}", response.content);
println!("Tokens used: {}", response.usage.total_tokens);

Switching Providers

let messages = vec![
    Message {
        role: "user".to_string(),
        content: "Hello!".to_string(),
    }
];

// OpenAI
let response = client.chat().create(ChatCreateParams {
    provider: Some("openai".to_string()),
    model: "gpt-4".to_string(),
    messages: messages.clone(),
    ..Default::default()
}).await?;

// Anthropic
let response = client.chat().create(ChatCreateParams {
    provider: Some("anthropic".to_string()),
    model: "claude-sonnet-4".to_string(),
    messages: messages.clone(),
    ..Default::default()
}).await?;

// Google
let response = client.chat().create(ChatCreateParams {
    provider: Some("google".to_string()),
    model: "gemini-pro".to_string(),
    messages: messages,
    ..Default::default()
}).await?;

API Reference

Client::new(config: Config)

Creates a new Synqly client.

Field     Type            Required  Description
api_key   String          Yes       Your Synqly API key
base_url  Option<String>  No        Custom base URL (defaults to production)
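
For example, reading the key from the environment (SYNQLY_API_KEY is the variable this README's Examples section uses) and leaving the endpoint at its production default:

use synqly::{Client, Config};

let client = Client::new(Config {
    api_key: std::env::var("SYNQLY_API_KEY").expect("SYNQLY_API_KEY not set"),
    base_url: None, // or Some("https://...".to_string()) to target a custom endpoint
});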

client.chat().create(params: ChatCreateParams)

Creates a chat completion.

Field        Type            Required  Description
provider     Option<String>  No        AI provider (openai, anthropic, google)
model        String          Yes       Model name
messages     Vec<Message>    Yes       Conversation messages
temperature  Option<f64>     No        Sampling temperature (0.0-2.0)
max_tokens   Option<i32>     No        Max tokens in response
top_p        Option<f64>     No        Nucleus sampling
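
As the Switching Providers examples show, ChatCreateParams implements Default, so optional fields can be elided with struct-update syntax. For instance, setting only top_p:

let params = ChatCreateParams {
    model: "gpt-4".to_string(),
    messages: vec![Message {
        role: "user".to_string(),
        content: "Summarize Rust's ownership model.".to_string(),
    }],
    top_p: Some(0.9),    // nucleus sampling: keep the top 90% probability mass
    ..Default::default() // provider, temperature, max_tokens stay None
};
let response = client.chat().create(params).await?;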

Response

pub struct ChatResponse {
    pub id: String,
    pub provider: String,
    pub model_type: String,
    pub content: String,
    pub usage: Usage,
    pub finish_reason: String,
    pub created_at: String,
}

impl ChatResponse {
    pub fn get_content(&self) -> &str;
}
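
For example, inspecting the fields of a completed request:

let response = client.chat().create(params).await?;

println!("id:            {}", response.id);
println!("provider:      {}", response.provider);
println!("model:         {}", response.model_type);
println!("finish reason: {}", response.finish_reason);
println!("tokens used:   {}", response.usage.total_tokens);
println!("{}", response.get_content()); // equivalent to &response.content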

Supported Providers

Provider   Models
openai     gpt-4, gpt-4-turbo, gpt-3.5-turbo
anthropic  claude-sonnet-4, claude-3-opus, claude-3-haiku
google     gemini-pro, gemini-ultra
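
Since provider and model are plain strings, a small application-side helper (not part of the SDK) can keep provider/model pairs in one place:

// Hypothetical application-side helper; model names taken from the table above.
fn default_model(provider: &str) -> &'static str {
    match provider {
        "anthropic" => "claude-sonnet-4",
        "google" => "gemini-pro",
        _ => "gpt-4", // openai and fallback
    }
}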

Error Handling

The SDK uses thiserror for structured error handling:

use synqly::Error;

match client.chat().create(params).await {
    Ok(response) => {
        println!("Success: {}", response.content);
    }
    Err(Error::ApiError { status_code, message }) => {
        eprintln!("API Error {}: {}", status_code, message);
    }
    Err(Error::ValidationError(msg)) => {
        eprintln!("Validation Error: {}", msg);
    }
    Err(e) => {
        eprintln!("Error: {}", e);
    }
}
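
Based on the variants matched above, the Error enum is presumably shaped roughly like this (a sketch inferred from the match arms, not the crate's exact definition):

use thiserror::Error;

#[derive(Debug, Error)]
pub enum Error {
    #[error("API error {status_code}: {message}")]
    ApiError { status_code: u16, message: String },

    #[error("validation error: {0}")]
    ValidationError(String),
    // ...plus variants for transport and deserialization failures
}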

Examples

Run the examples with:

export SYNQLY_API_KEY=your_api_key_here
cargo run --example basic

Get an API Key

  1. Visit synqly.xyz
  2. Sign up and create an API key
  3. Use the key in your application

Contributing

Contributions are welcome! Feel free to open an issue or submit a pull request.

License

MIT

Project Structure

synqly-rust/
├── Cargo.toml
├── README.md
├── src/
│   ├── lib.rs          # Main library entry point
│   ├── client.rs       # HTTP client implementation
│   ├── chat.rs         # Chat service
│   ├── types.rs        # Type definitions
│   └── error.rs        # Error types
├── examples/
│   └── basic.rs        # Basic usage example
└── tests/
    └── integration_test.rs

Requirements

  • Rust 1.70 or higher
  • Tokio runtime for async operations

Support

For questions and support, please visit synqly.xyz or open an issue on GitHub.
