| Crates.io | models-dev |
| lib.rs | models-dev |
| version | 0.1.1 |
| created_at | 2025-09-23 18:02:58.191706+00 |
| updated_at | 2025-09-23 18:12:42.787898+00 |
| description | Simple Rust client for the models.dev API |
| homepage | https://github.com/verbalshadow/models.dev |
| repository | https://github.com/verbalshadow/models.dev |
| max_upload_size | |
| id | 1851869 |
| size | 444,593 |
A smart Rust client library for the models.dev API with intelligent caching capabilities.
models-dev is a high-performance Rust client for the models.dev API that provides comprehensive information about AI model providers and their available models. The library features intelligent caching with ETag-based conditional requests, ensuring optimal performance for repeated API calls while always serving fresh data when available.
Add this to your Cargo.toml:
[dependencies]
models-dev = "0.1.1"
The library supports different TLS backends:
# Default (native TLS)
models-dev = { version = "0.1.1" }
# Rustls TLS
models-dev = { version = "0.1.1", features = ["rustls-tls"] }
# Native TLS
models-dev = { version = "0.1.1", features = ["native-tls"] }
This library requires Rust 1.70+.
Here's a simple example to get you started:
use models_dev::ModelsDevClient;
#[tokio::main]
async fn main() -> Result<(), models_dev::ModelsDevError> {
    // Create a client with default settings
    let client = ModelsDevClient::new();

    // Fetch providers (first call hits the API)
    let response = client.fetch_providers().await?;
    println!("Found {} providers", response.providers.len());

    // Second call uses conditional request (much faster)
    let response2 = client.fetch_providers().await?;
    println!("Still {} providers", response2.providers.len());
    Ok(())
}
The library automatically handles caching with conditional HTTP requests. When you call fetch_providers(), it stores the response's ETag, sends it back as If-None-Match on subsequent calls, and reuses the cached data whenever the server replies 304 Not Modified; a 200 response replaces the cache with fresh data.
use models_dev::ModelsDevClient;
use std::time::Instant;
#[tokio::main]
async fn main() -> Result<(), models_dev::ModelsDevError> {
    let client = ModelsDevClient::new();

    // First call - hits API
    let start = Instant::now();
    let response1 = client.fetch_providers().await?;
    let duration1 = start.elapsed();

    // Second call - uses conditional request
    let start = Instant::now();
    let response2 = client.fetch_providers().await?;
    let duration2 = start.elapsed();

    println!("First call: {:?}", duration1);
    println!("Second call: {:?}", duration2);

    if duration2 < duration1 {
        // Compare as f64 seconds: as_millis() can round to 0 for a fast
        // cache hit, which would divide by zero.
        let speedup = duration1.as_secs_f64() / duration2.as_secs_f64();
        println!("Speedup: {:.2}x faster!", speedup);
    }
    Ok(())
}
You can manually manage the cache:
use models_dev::ModelsDevClient;

#[tokio::main]
async fn main() -> Result<(), models_dev::ModelsDevError> {
    let client = ModelsDevClient::new();
    // Get cache information
    let cache_info = client.cache_info();
    println!("Cache has metadata: {}", cache_info.has_metadata);
    // Clear cache (forces fresh API request)
    client.clear_cache()?;
    // Fetch fresh data
    let response = client.fetch_providers().await?;
    println!("Refetched {} providers", response.providers.len());
    Ok(())
}
Create a client with custom settings:
use models_dev::ModelsDevClient;
use std::time::Duration;
// Custom API base URL
let client = ModelsDevClient::with_base_url("https://custom.api.models.dev");
// The client uses a 30-second timeout by default
println!("Timeout: {:?}", client.timeout());
The library provides comprehensive error handling:
use models_dev::{ModelsDevClient, ModelsDevError};
#[tokio::main]
async fn main() {
    let client = ModelsDevClient::new();
    match client.fetch_providers().await {
        Ok(response) => {
            println!("Success: {} providers", response.providers.len());
        }
        Err(ModelsDevError::HttpError(e)) => {
            eprintln!("HTTP error: {}", e);
        }
        Err(ModelsDevError::JsonError(e)) => {
            eprintln!("JSON parsing error: {}", e);
        }
        Err(ModelsDevError::ApiError(msg)) => {
            eprintln!("API error: {}", msg);
        }
        Err(ModelsDevError::Timeout) => {
            eprintln!("Request timed out");
        }
        Err(ModelsDevError::CacheError(msg)) => {
            eprintln!("Cache error: {}", msg);
        }
        Err(e) => {
            eprintln!("Other error: {}", e);
        }
    }
}
The main client struct for interacting with the models.dev API.
- new() -> Self: Creates a new client with default settings.
- with_base_url(api_base_url: impl Into<String>) -> Self: Creates a client with a custom API base URL.
- fetch_providers() -> Result<ModelsDevResponse, ModelsDevError>: Fetches provider information with smart caching.
- clear_cache() -> Result<(), ModelsDevError>: Clears the cache metadata.
- cache_info() -> CacheInfo: Returns information about the current cache state.
- api_base_url() -> &str: Returns the API base URL.
- timeout() -> Duration: Returns the request timeout.
Top-level response containing provider information.
pub struct ModelsDevResponse {
    pub providers: HashMap<String, Provider>,
}
Information about an AI model provider.
pub struct Provider {
    pub id: String,
    pub name: String,
    pub npm: String,
    pub env: Vec<String>,
    pub doc: String,
    pub api: Option<String>,
    pub models: HashMap<String, Model>,
}
Information about a specific AI model.
pub struct Model {
    pub id: String,
    pub name: String,
    pub attachment: bool,
    pub reasoning: bool,
    pub temperature: bool,
    pub tool_call: bool,
    pub knowledge: Option<String>,
    pub release_date: Option<String>,
    pub last_updated: Option<String>,
    pub modalities: Modalities,
    pub open_weights: bool,
    pub cost: Option<ModelCost>,
    pub limit: ModelLimit,
}
Comprehensive error enum for all possible failures:
- HttpError(reqwest::Error): HTTP request failures
- JsonError(serde_json::Error): JSON parsing errors
- ApiError(String): API error responses
- Timeout: Network timeout
- InvalidUrl(String): Invalid URL configuration
- CacheError(String): Cache operation failures

The library includes comprehensive examples:
cargo run --example basic_usage
Demonstrates basic client usage and caching benefits.
cargo run --example smart_caching_example
Shows advanced caching functionality and performance comparisons.
cargo run --example integration_example
Demonstrates integration patterns and error handling.
Run the test suite:
# Run all tests
cargo test
# Run unit tests only
cargo test --lib
# Run integration tests
cargo test --test integration
# Run with output
cargo test -- --nocapture
The test suite includes unit tests and integration tests.
We welcome contributions! To gather input, open a new issue to discuss the feature or fix you plan to work on. Please follow these guidelines:
Development setup:
- git clone https://github.com/your-username/models-dev
- git checkout -b feature-name
- cargo build
- cargo test

Code style:
- Use thiserror for error types; prefer Result<T, ModelsDevError> over generic errors
- Use the ? operator for early returns; avoid nested conditionals
- Use pub(crate) for implementation details

Before submitting, run:
- cargo test
- cargo fmt
- cargo clippy -- -D warnings

Follow the conventional commit format:
type(scope): description
# Examples
feat(client): add custom timeout configuration
fix(caching): resolve etag comparison issue
docs(readme): update installation instructions
test(examples): add integration test coverage
This project is licensed under the MIT License - see the LICENSE file for details.
Initial Release
Cache management (clear_cache(), cache_info())

Cognitive load is what matters. This library is designed so that a single fetch_providers() method works for both fresh and cached data: callers get one entry point, and caching stays invisible.
Built with ❤️ for the Rust community
Rust's type system should reduce cognitive load, not increase it. If you're fighting the compiler, redesign.