| Crates.io | inklings |
| lib.rs | inklings |
| version | 0.1.0 |
| created_at | 2024-12-28 00:16:27.746237+00 |
| updated_at | 2024-12-28 00:16:27.746237+00 |
| description | A unified Rust API for various Large Language Model (LLM) providers |
| homepage | |
| repository | https://github.com/Octoponder/Inklings |
| max_upload_size | |
| id | 1496757 |
| size | 70,981 |
A unified Rust API for various Large Language Model (LLM) providers. It currently supports the OpenAI and Anthropic APIs through a consistent interface; support for all common LLM providers is planned.
The goal of this library is to make it as easy as possible to use multiple LLM providers while remaining very easy to get started with. It is intended to be usable on all platforms and from all common programming languages; to that end, thin language bindings for Python and JavaScript are planned.
Add this to your Cargo.toml:
[dependencies]
inklings-lib = "0.1.0"
The library provides two main ways to interact with LLMs: simple completions and chat-based interactions.
Use complete() for quick, single-prompt interactions:
use inklings_lib::{Client, provider::OpenAIProvider};
#[tokio::main]
async fn main() {
// Create a provider (OpenAI in this example)
let provider = OpenAIProvider::new(
std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY not set"),
None, // Uses default model (gpt-4o-mini)
);
let client = Client::new(Box::new(provider));
let response = client.complete("Tell me a joke").await.unwrap();
println!("Response: {}", response);
}
Use chat() when you need more control over the conversation flow, including system prompts and message history:
use inklings_lib::{Client, provider::OpenAIProvider, types::{Message, Role}};
#[tokio::main]
async fn main() {
let provider = OpenAIProvider::new(
std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY not set"),
None,
);
let client = Client::new(Box::new(provider));
let messages = vec![
Message {
role: Role::System,
content: "You are a helpful assistant who speaks like Shakespeare.".to_string(),
},
Message {
role: Role::User,
content: "Tell me a joke".to_string(),
},
];
let response = client.chat(messages).await.unwrap();
println!("Response: {}", response);
}
The chat interface gives you more flexibility: you can set a system prompt and supply the full message history yourself.
The repository includes a simple CLI example demonstrating the library usage:
# Set your API keys
export OPENAI_API_KEY=your_openai_key
export ANTHROPIC_API_KEY=your_anthropic_key
# Run with a custom prompt
cargo run -p inklings-cli -- "What is the meaning of life? Answer briefly"
This will query all available LLM providers and show their responses.
Responses can also be streamed token by token with stream_chat():
use futures::StreamExt;
use inklings_lib::{Client, provider::OpenAIProvider, types::{Message, Role}};
#[tokio::main]
async fn main() {
let provider = OpenAIProvider::new(
std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY not set"),
None,
);
let client = Client::new(Box::new(provider));
let messages = vec![Message {
role: Role::User,
content: "Tell me a story.".to_string(),
}];
let mut stream = client.stream_chat(messages).await.unwrap();
while let Some(Ok(chunk)) = stream.next().await {
print!("{}", chunk);
}
}
A complete chat example combining a system prompt with a user message:
use inklings_lib::{
Client,
provider::OpenAIProvider,
types::{Message, Role},
};
#[tokio::main]
async fn main() {
let provider = OpenAIProvider::new(
std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY not set"),
None,
);
let client = Client::new(Box::new(provider));
let messages = vec![
Message {
role: Role::System,
content: "You are a helpful assistant.".to_string(),
},
Message {
role: Role::User,
content: "What's the weather like?".to_string(),
},
];
let response = client.chat(messages).await.unwrap();
println!("Response: {}", response);
}
// OpenAI
let openai = OpenAIProvider::new(api_key, Some("gpt-4o".to_string()));
// Anthropic
let anthropic = AnthropicProvider::new(api_key, Some("claude-3-5-sonnet-20241022".to_string()));
// Mock (for testing)
let mock = MockProvider::new("Mocked response".to_string());
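Because every provider is handed to the same Client, switching backends is a one-line change. A minimal sketch, assuming AnthropicProvider::new accepts an optional model the same way OpenAIProvider does:

```rust
use inklings_lib::{Client, provider::AnthropicProvider};

#[tokio::main]
async fn main() {
    let api_key = std::env::var("ANTHROPIC_API_KEY").expect("ANTHROPIC_API_KEY not set");
    // Only the provider construction changes; everything below is
    // identical to the OpenAI examples above.
    let client = Client::new(Box::new(AnthropicProvider::new(api_key, None)));
    let response = client.complete("Tell me a joke").await.unwrap();
    println!("Response: {}", response);
}
```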
The library includes unit tests, mock-based tests, and integration tests against the live APIs. Run the default suite with:
cargo test
Integration tests against the live provider APIs are marked with #[ignore] and require API keys:
# Run ignored tests (requires API keys)
cargo test -- --ignored
# Run all tests including ignored ones
cargo test -- --include-ignored
The MockProvider allows testing without real API calls:
let provider = MockProvider::new("Expected response".to_string());
// or
let provider = MockProvider::with_error("Error message".to_string());
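As a sketch of how the mock can back a unit test (assuming MockProvider lives in the provider module and implements the same trait as the real providers):

```rust
use inklings_lib::{Client, provider::MockProvider};

#[tokio::test]
async fn mock_provider_round_trip() {
    // Success path: the mock returns its configured response verbatim.
    let client = Client::new(Box::new(MockProvider::new("Expected response".to_string())));
    let response = client.complete("any prompt").await.unwrap();
    assert_eq!(response, "Expected response");

    // Error path: with_error makes calls fail with the given message.
    let client = Client::new(Box::new(MockProvider::with_error("Error message".to_string())));
    assert!(client.complete("any prompt").await.is_err());
}
```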