| Field | Value |
|---|---|
| Crates.io | tanukie |
| lib.rs | tanukie |
| version | 0.1.0 |
| created_at | 2026-01-08 02:01:56.318974+00 |
| updated_at | 2026-01-08 02:01:56.318974+00 |
| description | 🦀 Lightweight, blazing-fast LLM client |
| homepage | |
| repository | https://github.com/chonkie-inc/tanukie |
| max_upload_size | |
| id | 2029400 |
| size | 690,456 |
Tanukie is a unified, lightweight LLM client.
Add to your `Cargo.toml`:

```toml
[dependencies]
tanukie = "0.1"
tokio = { version = "1", features = ["full"] }
```
```rust
use tanukie::{Client, messages};

#[tokio::main]
async fn main() -> tanukie::Result<()> {
    let client = Client::new();

    let msgs = messages![
        system: "You are a helpful assistant.",
        user: "What is the capital of France?",
    ];

    let response = client.agenerate("gpt-4o-mini", msgs).await?;
    println!("{}", response.text);

    Ok(())
}
```
```rust
use tanukie::{Client, messages, options};

let response = client.agenerate_with(
    "llama-3.1-8b-instant", // Auto-detects Groq
    messages![user: "Write a haiku about Rust."],
    options![
        temperature: 0.9,
        max_tokens: 100u32,
    ],
).await?;
```
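The provider is inferred from the model name, so no explicit provider argument is needed. As a rough illustration of how such routing can work (this is a standalone sketch with assumed prefixes, not tanukie's actual implementation), routing can be a pure function over the model string:

```rust
/// Illustrative model-name routing: map a model string to a provider label.
/// The prefixes below are assumptions based on the supported-models table.
fn detect_provider(model: &str) -> &'static str {
    if model.starts_with("gpt-") || model.starts_with("o1") {
        "openai"
    } else if model.starts_with("llama-")
        || model.starts_with("qwen")
        || model.starts_with("kimi")
    {
        "groq"
    } else {
        "unknown"
    }
}

fn main() {
    assert_eq!(detect_provider("gpt-4o-mini"), "openai");
    assert_eq!(detect_provider("llama-3.1-8b-instant"), "groq");
    assert_eq!(detect_provider("mystery-model"), "unknown");
}
```

A prefix match like this keeps the call site simple: you pass only the model name, and the client picks the backend.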
```rust
// Blocking/sync version
let response = client.generate("gpt-4o-mini", msgs)?;
```
| Provider | Status | Models |
|---|---|---|
| OpenAI | ✅ | gpt-4o, gpt-4o-mini, o1, etc. |
| Groq | ✅ | llama-3.1-8b, llama-3.3-70b, qwen3-32b, kimi-k2, etc. |
| Anthropic | 🔜 | Coming soon |
Set your API keys:

```bash
export OPENAI_API_KEY="sk-..."
export GROQ_API_KEY="gsk_..."
```
Can't find your favorite model or provider? Open an issue and we'll add it!
If you found this helpful, consider giving it a ⭐!
made with ❤️ by chonkie, inc.