| Crates.io | rainy-sdk |
| lib.rs | rainy-sdk |
| version | 0.6.1 |
| created_at | 2025-09-05 05:47:29.565844+00 |
| updated_at | 2026-01-24 06:48:12.942165+00 |
| description | Official Rust SDK for Rainy API by Enosis Labs v0.5.3 - Cowork API key validation, Gemini 3 models with advanced thinking capabilities, and full OpenAI compatibility |
| homepage | https://github.com/enosislabs/rainy-sdk |
| repository | https://github.com/enosislabs/rainy-sdk |
| max_upload_size | |
| id | 1825126 |
| size | 345,939 |
The official Rust SDK for the Rainy API by Enosis Labs - a unified interface for multiple AI providers including OpenAI, Google Gemini, Groq, Cerebras, and Enosis Labs' own Astronomer models. Features advanced thinking capabilities, multimodal support, thought signatures, and full OpenAI compatibility.
Add this to your Cargo.toml:
[dependencies]
rainy-sdk = "0.5.3"
tokio = { version = "1.47", features = ["full"] }
Or install with cargo:
cargo add rainy-sdk
Enable additional features as needed:
[dependencies]
rainy-sdk = { version = "0.6.1", features = ["rate-limiting", "tracing", "cowork"] }
Available features:
- rate-limiting: Built-in rate limiting with the governor crate
- tracing: Request/response logging with the tracing crate
- cowork: Cowork integration for tier-based feature gating (enabled by default)

Rainy SDK provides 100% OpenAI API compatibility while extending support to additional providers. Use Rainy SDK as a drop-in replacement for the official OpenAI SDK:
use rainy_sdk::{models, ChatCompletionRequest, ChatMessage, RainyClient};
// Works exactly like OpenAI SDK
let client = RainyClient::with_api_key("your-rainy-api-key")?;
let request = ChatCompletionRequest::new(
models::model_constants::OPENAI_GPT_4O, // or GOOGLE_GEMINI_2_5_PRO
vec![ChatMessage::user("Hello!")]
)
.with_temperature(0.7)
.with_response_format(models::ResponseFormat::JsonObject);
let (response, metadata) = client.chat_completion(request).await?;
| Provider | Models | Features |
|---|---|---|
| OpenAI | gpt-4o, gpt-5, gpt-5-pro, o3, o4-mini | ✅ Native OpenAI API |
| Google Gemini 3 | gemini-3-pro-preview, gemini-3-flash-preview, gemini-3-pro-image-preview | ✅ Thinking, Thought Signatures, Multimodal |
| Google Gemini 2.5 | gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite | ✅ Thinking, Dynamic Reasoning |
| Groq | llama-3.1-8b-instant, llama-3.3-70b-versatile | ✅ OpenAI-compatible API |
| Cerebras | llama3.1-8b | ✅ OpenAI-compatible API |
| Enosis Labs | astronomer-1, astronomer-1-max, astronomer-1.5, astronomer-2, astronomer-2-pro | ✅ Native Rainy API |
Supported OpenAI-compatible parameters include:

- tools and tool_choice
- response_format
- reasoning_effort parameter for Gemini models
- logprobs and top_logprobs support

Rainy SDK supports advanced thinking capabilities for Google Gemini 3 and 2.5 series models, enabling deeper reasoning and thought preservation across conversations.
use rainy_sdk::{models, ChatCompletionRequest, ChatMessage, RainyClient, ThinkingConfig};
let request = ChatCompletionRequest::new(
models::model_constants::GOOGLE_GEMINI_3_PRO,
vec![ChatMessage::user("Solve this complex optimization problem step by step.")]
)
.with_thinking_config(ThinkingConfig::gemini_3(
models::ThinkingLevel::High, // High reasoning for complex tasks
true // Include thought summaries in response
));
let (response, metadata) = client.chat_completion(request).await?;
println!("Response: {}", response.choices[0].message.content);
// Access thinking token usage
if let Some(thinking_tokens) = metadata.thoughts_token_count {
println!("Thinking tokens used: {}", thinking_tokens);
}
Preserve reasoning context across conversation turns with encrypted thought signatures:
use rainy_sdk::{models::*, ChatMessage, EnhancedChatMessage};
let mut conversation = vec![
// Previous messages with thought signatures...
];
// New message with preserved reasoning context
let enhanced_message = EnhancedChatMessage::with_parts(
MessageRole::User,
vec![
ContentPart::text("Now apply this reasoning to the next problem..."),
// Include thought signature from previous response
ContentPart::with_thought_signature("encrypted_signature_here".to_string())
]
);
conversation.push(enhanced_message);
let config = ThinkingConfig::gemini_2_5(
-1, // Dynamic thinking budget
true // Include thoughts
);
let request = ChatCompletionRequest::new(
models::model_constants::GOOGLE_GEMINI_2_5_PRO,
conversation
)
.with_thinking_config(config);
Built-in web search powered by Tavily for real-time information retrieval:
use rainy_sdk::search::{SearchOptions, SearchResponse};
let search_options = SearchOptions {
query: "latest developments in Rust programming".to_string(),
max_results: Some(10),
..Default::default()
};
let search_results = client.search_web(search_options).await?;
for result in search_results.results {
println!("{}: {}", result.title, result.url);
}
// Extract content from specific URLs
let extracted = client.extract_content(vec!["https://example.com/article".to_string()]).await?;
println!("Content: {}", extracted.content);
Tier-based feature gating with Free/GoPlus/Plus/Pro/ProPlus plans:
use rainy_sdk::{CoworkStatus, CoworkClient};
let cowork_client = CoworkClient::new(client);
let status = cowork_client.get_cowork_status().await?;
println!("Plan: {:?}", status.plan);
println!("Remaining uses: {}", status.usage.remaining_uses);
// Check feature availability
if status.can_use_web_research() {
// Enable web search features
}
if status.can_use_document_export() {
// Enable document generation
}
use rainy_sdk::{models, ChatCompletionRequest, ChatMessage, RainyClient};
use std::error::Error;
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
// Initialize client with your API key from environment variables
let api_key = std::env::var("RAINY_API_KEY").expect("RAINY_API_KEY not set");
let client = RainyClient::with_api_key(api_key)?;
// Simple chat completion
let response = client
.simple_chat(
models::model_constants::GPT_4O,
"Hello! Tell me a short story.",
)
.await?;
println!("Simple response: {}", response);
// Advanced usage with metadata
let request = ChatCompletionRequest::new(
models::model_constants::CLAUDE_SONNET_4,
vec![ChatMessage::user("Explain quantum computing in one sentence")],
)
.with_temperature(0.7)
.with_max_tokens(100);
let (response, metadata) = client.chat_completion(request).await?;
println!("\nAdvanced response: {}", response.choices[0].message.content);
println!("Provider: {:?}", metadata.provider.unwrap_or_default());
println!("Response time: {}ms", metadata.response_time.unwrap_or_default());
Ok(())
}
The SDK uses API key authentication. It's recommended to load the key from an environment variable.
use rainy_sdk::RainyClient;
// Load API key from environment and create client
let api_key = std::env::var("RAINY_API_KEY").expect("RAINY_API_KEY not set");
let client = RainyClient::with_api_key(api_key)?;
Verify the API status.
# use rainy_sdk::RainyClient;
# async fn example() -> Result<(), Box<dyn std::error::Error>> {
# let client = RainyClient::with_api_key("dummy")?;
let health = client.health_check().await?;
println!("API Status: {}", health.status);
# Ok(())
# }
Create a standard chat completion.
# use rainy_sdk::{RainyClient, ChatCompletionRequest, ChatMessage, models};
# async fn example() -> Result<(), Box<dyn std::error::Error>> {
# let client = RainyClient::with_api_key("dummy")?;
let messages = vec![
ChatMessage::system("You are a helpful assistant."),
ChatMessage::user("Explain quantum computing in simple terms."),
];
let request = ChatCompletionRequest::new(models::model_constants::GPT_4O, messages)
.with_max_tokens(500)
.with_temperature(0.7);
let (response, metadata) = client.chat_completion(request).await?;
if let Some(choice) = response.choices.first() {
println!("Response: {}", choice.message.content);
}
# Ok(())
# }
Receive the response as a stream of events.
# use rainy_sdk::{RainyClient, ChatCompletionRequest, ChatMessage, models};
# use futures::StreamExt;
# async fn example() -> Result<(), Box<dyn std::error::Error>> {
# let client = RainyClient::with_api_key("dummy")?;
let request = ChatCompletionRequest::new(
models::model_constants::LLAMA_3_1_8B_INSTANT,
vec![ChatMessage::user("Write a haiku about Rust programming")],
)
.with_stream(true);
let mut stream = client.create_chat_completion_stream(request).await?;
while let Some(chunk) = stream.next().await {
match chunk {
Ok(response) => {
if let Some(choice) = response.choices.first() {
print!("{}", choice.message.content);
}
}
Err(e) => eprintln!("\nError in stream: {}", e),
}
}
# Ok(())
# }
Get credit and usage statistics.
# use rainy_sdk::RainyClient;
# async fn example() -> Result<(), Box<dyn std::error::Error>> {
# let client = RainyClient::with_api_key("dummy")?;
// Get credit stats
let credits = client.get_credit_stats(None).await?;
println!("Current credits: {}", credits.current_credits);
// Get usage stats for the last 7 days
let usage = client.get_usage_stats(Some(7)).await?;
println!("Total requests (last 7 days): {}", usage.total_requests);
# Ok(())
# }
Manage API keys programmatically.
# use rainy_sdk::RainyClient;
# async fn example() -> Result<(), Box<dyn std::error::Error>> {
# let client = RainyClient::with_api_key("dummy")?;
// List all API keys
let keys = client.list_api_keys().await?;
for key in keys {
println!("Key ID: {} - Active: {}", key.id, key.is_active);
}
// Create a new API key
let new_key = client.create_api_key("My new key", Some(30)).await?;
println!("Created key: {}", new_key.key);
// Delete the API key
client.delete_api_key(&new_key.id.to_string()).await?;
# Ok(())
# }
Explore the examples/ directory for comprehensive usage examples:
- examples/basic_usage.rs: Complete walkthrough of all SDK features.
- examples/chat_completion.rs: Advanced chat completion patterns.
- examples/error_handling.rs: Demonstrates how to handle different error types.

Run examples with:
# Set your API key
export RAINY_API_KEY="your-api-key-here"
# Run basic usage example
cargo run --example basic_usage
# Run chat completion example
cargo run --example chat_completion
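For a sense of what examples/error_handling.rs covers, here is a minimal sketch. It deliberately relies only on the error type's Display output rather than assuming specific error variants, since those are SDK-internal:
use rainy_sdk::RainyClient;
async fn checked_health(client: &RainyClient) {
    match client.health_check().await {
        Ok(health) => println!("API Status: {}", health.status),
        // Log and move on; a real application might branch on the error
        // kind to decide between retrying, backing off, or aborting.
        Err(e) => eprintln!("Health check failed: {}", e),
    }
}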
API Key Management: This SDK utilizes the secrecy crate to handle the API key, ensuring it is securely stored in memory and zeroed out upon being dropped. However, it is still crucial to manage the RainyClient's lifecycle carefully within your application to minimize exposure.
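For background, a minimal standalone sketch of how the secrecy crate behaves (constructor shown is from secrecy 0.8; newer releases changed it, e.g. to SecretString::from):
use secrecy::{ExposeSecret, SecretString};
fn demo() {
    // The wrapped secret is zeroized in memory when dropped.
    let api_key = SecretString::new("super-secret-key".to_string());
    // Reading the value requires an explicit, easily auditable call:
    let len = api_key.expose_secret().len();
    println!("key length: {}", len);
    // Debug prints a redacted placeholder, never the key itself.
    println!("{:?}", api_key);
}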
Rate Limiting: The optional rate-limiting feature is intended as a client-side safeguard to prevent accidental overuse and to act as a "good citizen" towards the API. It is not a security mechanism and can be bypassed by a malicious actor. For robust abuse prevention, you must implement server-side monitoring, usage quotas, and API key management through your Enosis Labs dashboard.
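To make the "good citizen" point concrete, here is a standalone sketch of client-side throttling with the governor crate. It is illustrative only and is not how the SDK wires its rate-limiting feature internally:
use std::num::NonZeroU32;
use governor::{Quota, RateLimiter};
#[tokio::main]
async fn main() {
    // Allow at most 5 requests per second from this process.
    let limiter = RateLimiter::direct(Quota::per_second(NonZeroU32::new(5).unwrap()));
    for i in 0..10 {
        // Waits until the quota permits the next request.
        limiter.until_ready().await;
        println!("request {} dispatched", i);
    }
}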
TLS Configuration: The client is hardened to use modern, secure TLS settings (TLS 1.2+ via the rustls backend) and to only allow HTTPS connections, providing strong protection against network interception.
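For reference, the hardening described corresponds roughly to the following reqwest builder settings (an illustrative sketch assuming reqwest with the rustls-tls feature enabled; the SDK configures its HTTP client internally):
use reqwest::tls::Version;
fn build_hardened_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .use_rustls_tls()                  // rustls backend instead of native-tls
        .min_tls_version(Version::TLS_1_2) // refuse anything older than TLS 1.2
        .https_only(true)                  // reject plain-HTTP URLs outright
        .build()
}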
The SDK is built with a modular architecture:
src/
├── auth.rs       # Authentication and API key management
├── client.rs     # Main API client with request handling
├── cowork.rs     # Tier-based feature gating and capabilities
├── endpoints/    # API endpoint implementations (internal)
├── error.rs      # Comprehensive error handling
├── models.rs     # Data structures and type definitions
├── retry.rs      # Retry logic with exponential backoff
├── search.rs     # Web search and content extraction
└── lib.rs        # Public API and module exports
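To illustrate the pattern retry.rs implements, here is a generic exponential-backoff loop (a sketch of the general technique, not the SDK's actual code):
use std::time::Duration;
// Retry an async operation up to `max_attempts`, doubling the delay each time.
async fn retry_with_backoff<T, E, F, Fut>(mut op: F, max_attempts: u32) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    let mut delay = Duration::from_millis(250);
    let mut attempt = 1;
    loop {
        match op().await {
            Ok(value) => return Ok(value),
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                tokio::time::sleep(delay).await;
                delay *= 2; // exponential growth: 250ms, 500ms, 1s, ...
                attempt += 1;
            }
        }
    }
}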
- client.rs: Core RainyClient with async HTTP handling and response processing
- models.rs: Complete type system including ChatCompletionRequest, ThinkingConfig, EnhancedChatMessage
- auth.rs: Secure authentication with the secrecy crate for API key management
- cowork.rs: Integration with Enosis Labs' tier system (Free/GoPlus/Plus/Pro/ProPlus)
- search.rs: Tavily-powered web search with content extraction capabilities
- endpoints/: Internal API endpoint implementations (chat, health, keys, usage, user)

We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
This SDK is developed by Enosis Labs and is not officially affiliated with any AI provider mentioned (OpenAI, Anthropic, Google, etc.). The Rainy API serves as an independent gateway service that provides unified access to multiple AI providers.
Made with ❤️ by Enosis Labs