| Crates.io | llm-kit-togetherai |
| lib.rs | llm-kit-togetherai |
| version | 0.1.0 |
| created_at | 2026-01-18 20:01:22.332968+00 |
| updated_at | 2026-01-18 20:01:22.332968+00 |
| description | Together AI provider for LLM Kit |
| homepage | |
| repository | |
| max_upload_size | |
| id | 2052984 |
| size | 169,168 |
Together AI provider for LLM Kit - Complete integration with Together AI's extensive collection of open-source models.
**Note:** This provider uses the standardized builder pattern. See the Quick Start section for the recommended usage.
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
llm-kit-togetherai = "0.1"
llm-kit-core = "0.1"
llm-kit-provider = "0.1"
tokio = { version = "1", features = ["full"] }
```
## Quick Start

```rust
use llm_kit_togetherai::TogetherAIClient;
use llm_kit_provider::language_model::LanguageModel;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create the provider using the client builder
    let provider = TogetherAIClient::new()
        .api_key("your-api-key") // Or use the TOGETHER_AI_API_KEY env var
        .build();

    // Create a language model
    let model = provider.chat_model("meta-llama/Llama-3.3-70B-Instruct-Turbo");
    println!("Model: {}", model.model_id());
    println!("Provider: {}", model.provider());

    Ok(())
}
```
Alternatively, construct the provider from explicit settings:

```rust
use llm_kit_togetherai::{TogetherAIProvider, TogetherAIProviderSettings};
use llm_kit_provider::language_model::LanguageModel;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create the provider with explicit settings
    let provider = TogetherAIProvider::new(
        TogetherAIProviderSettings::new()
            .with_api_key("your-api-key"),
    );

    let model = provider.chat_model("meta-llama/Llama-3.3-70B-Instruct-Turbo");
    println!("Model: {}", model.model_id());

    Ok(())
}
```
## Configuration

Set your Together AI API key as an environment variable:

```sh
export TOGETHER_AI_API_KEY=your-api-key
```
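An explicit `.api_key(...)` overrides the environment variable. The precedence rule can be sketched in plain `std` as follows; this mirrors the documented behavior but is not the crate's actual implementation:

```rust
use std::env;

// Sketch of the documented precedence rule: an explicit key wins over the
// TOGETHER_AI_API_KEY environment variable. Illustration only; the crate's
// internal lookup logic may differ.
fn resolve_api_key(explicit: Option<&str>) -> Option<String> {
    explicit
        .map(str::to_owned)
        .or_else(|| env::var("TOGETHER_AI_API_KEY").ok())
}

fn main() {
    env::set_var("TOGETHER_AI_API_KEY", "key-from-env");

    // An explicit key takes precedence over the environment variable.
    assert_eq!(
        resolve_api_key(Some("explicit-key")).as_deref(),
        Some("explicit-key")
    );
    // With no explicit key, the environment variable is used.
    assert_eq!(resolve_api_key(None).as_deref(), Some("key-from-env"));
    println!("ok");
}
```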
You can further customize the client through the builder:

```rust
use llm_kit_togetherai::TogetherAIClient;

let provider = TogetherAIClient::new()
    .api_key("your-api-key")
    .base_url("https://api.together.xyz/v1")
    .header("Custom-Header", "value")
    .name("my-togetherai")
    .build();
```
The `TogetherAIClient` builder supports:

- `.api_key(key)` - Set the API key (overrides the `TOGETHER_AI_API_KEY` environment variable)
- `.base_url(url)` - Set a custom base URL (default: `https://api.together.xyz/v1`)
- `.name(name)` - Set the provider name (optional)
- `.header(key, value)` - Add a single custom header
- `.headers(map)` - Add multiple custom headers
- `.load_api_key_from_env()` - Explicitly load the API key from the environment variable
- `.build()` - Build the provider

## Supported Models

All Together AI chat models are supported, along with embedding, image, and reranking models, including:
**Chat models:**

- meta-llama/Llama-3.3-70B-Instruct-Turbo
- meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo
- meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
- meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo
- Qwen/Qwen2.5-Coder-32B-Instruct
- Qwen/Qwen2.5-7B-Instruct-Turbo
- Qwen/Qwen2.5-72B-Instruct-Turbo
- deepseek-ai/DeepSeek-V3
- mistralai/Mistral-7B-Instruct-v0.3
- mistralai/Mixtral-8x7B-Instruct-v0.1
- mistralai/Mixtral-8x22B-Instruct-v0.1
- google/gemma-2-9b-it
- google/gemma-2-27b-it

**Embedding models:**

- WhereIsAI/UAE-Large-V1
- BAAI/bge-large-en-v1.5
- BAAI/bge-base-en-v1.5
- sentence-transformers/msmarco-bert-base-dot-v5

**Image models:**

- black-forest-labs/FLUX.1-schnell
- black-forest-labs/FLUX.1-dev
- black-forest-labs/FLUX.1.1-pro
- stabilityai/stable-diffusion-xl-base-1.0
- stabilityai/stable-diffusion-2-1

**Reranking models:**

- Salesforce/Llama-Rank-v1
- mixedbread-ai/Mxbai-Rerank-Large-V2

For a complete list of available models, see the Together AI Models documentation.
The Together AI provider supports convenient chained model creation:

```rust
use llm_kit_togetherai::TogetherAIClient;

// Create a model directly from the builder
let model = TogetherAIClient::new()
    .api_key("your-api-key")
    .build()
    .chat_model("meta-llama/Llama-3.3-70B-Instruct-Turbo");
```
Together AI supports multiple model types in a single provider:

```rust
use llm_kit_togetherai::TogetherAIClient;

let provider = TogetherAIClient::new()
    .api_key("your-api-key")
    .build();

// Chat models
let chat_model = provider.chat_model("meta-llama/Llama-3.3-70B-Instruct-Turbo");

// Embedding models
let embedding_model = provider.text_embedding_model("WhereIsAI/UAE-Large-V1");

// Image models
let image_model = provider.image_model("black-forest-labs/FLUX.1-schnell");

// Reranking models
let reranking_model = provider.reranking_model("Salesforce/Llama-Rank-v1");
```
## Examples

See the `examples/` directory for complete examples:

- `chat.rs` - Basic chat completion
- `stream.rs` - Streaming responses
- `chat_tool_calling.rs` - Tool calling with function definitions
- `stream_tool_calling.rs` - Streaming with tool calls
- `text_embedding.rs` - Text embeddings for semantic search
- `image_generation.rs` - Image generation with FLUX and Stable Diffusion
- `reranking.rs` - Document reranking for improved search

Run the examples with:
```sh
export TOGETHER_AI_API_KEY=your-api-key
cargo run --example chat
cargo run --example stream
cargo run --example chat_tool_calling
cargo run --example stream_tool_calling
cargo run --example text_embedding
cargo run --example image_generation
cargo run --example reranking
```
## License

Apache-2.0
## Contributing

Contributions are welcome! Please see the Contributing Guide for more details.