| Crates.io | llm-toolkit |
| lib.rs | llm-toolkit |
| version | 0.6.1 |
| created_at | 2025-09-16 16:58:48.505985+00 |
| updated_at | 2025-09-25 12:17:46.505907+00 |
| description | A low-level, unopinionated Rust toolkit for the LLM last mile problem. |
| homepage | |
| repository | https://github.com/ynishi/llm-toolkit |
| max_upload_size | |
| id | 1842056 |
| size | 116,040 |
Basic LLM tools for Rust
High-level LLM frameworks like LangChain, while powerful, can be problematic in Rust. Their heavy abstractions and complex type systems often conflict with Rust's strengths, imposing significant constraints and learning curves on developers.
There is a clear need for a different kind of tool: a low-level, unopinionated, and minimalist toolkit that provides robust "last mile" utilities for LLM integration, much like how candle provides core building blocks for ML without dictating the entire application architecture.
This document proposes the creation of llm-toolkit, a new library crate designed to be the professional's choice for building reliable, high-performance LLM-powered applications in Rust.
- **Minimalist & Unopinionated:** The toolkit will NOT impose any specific application architecture. Developers are free to design their own UseCases and Services; llm-toolkit simply provides a set of sharp, reliable "tools" to be called when needed.
- **Focused on the "Last Mile Problem":** The toolkit focuses on solving the most common and frustrating problems that occur at the boundary between a strongly-typed Rust application and the unstructured, often unpredictable string-based responses from LLM APIs.
- **Minimal Dependencies:** The toolkit will have minimal dependencies (primarily `serde` and `minijinja`) to ensure it can be added to any Rust project with negligible overhead and maximum compatibility.
| Feature Area | Description | Key Components | Status |
|---|---|---|---|
| Content Extraction | Safely extracting structured data (like JSON) from unstructured LLM responses. | `extract` module (`FlexibleExtractor`, `extract_json`) | Implemented |
| Prompt Generation | Building complex prompts from Rust data structures with a powerful templating engine. | `prompt!` macro, `#[derive(ToPrompt)]`, `#[derive(ToPromptSet)]` | Implemented |
| Multi-Target Prompts | Generating multiple prompt formats from a single data structure for different contexts. | `ToPromptSet` trait, `#[prompt_for(...)]` attributes | Implemented |
| Intent Extraction | Extracting structured intents (e.g., enums) from LLM responses. | `intent` module (`IntentExtractor`, `PromptBasedExtractor`) | Implemented |
| Resilient Deserialization | Deserializing LLM responses into Rust types, handling schema variations. | (Planned) | Planned |
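For instance, the `extract` module is what you would reach for when a model wraps its JSON answer in conversational prose. A minimal sketch follows, with the caveat that the exact signature of `extract_json` (taking the raw response, returning the embedded payload as an `Option`) is an assumption here, not the crate's confirmed API:

```rust
use llm_toolkit::extract::extract_json;

let response = "Sure! Here is the profile: {\"name\": \"Mai\", \"role\": \"UX Engineer\"} Let me know if you need more.";

// Assumption: extract_json scans the raw response and yields the first
// embedded JSON payload; the real return type may be Option or Result.
if let Some(json) = extract_json(response) {
    println!("recovered JSON: {json}");
}
```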
llm-toolkit offers three powerful and convenient ways to generate prompts, powered by the minijinja templating engine.
### `prompt!` macro

For quick prototyping and flexible prompt creation, the `prompt!` macro provides a `println!`-like experience. You can pass any `serde::Serialize`-able data as context.
```rust
use llm_toolkit::prompt::prompt;
use serde::Serialize;

#[derive(Serialize)]
struct User {
    name: &'static str,
    role: &'static str,
}

let user = User { name: "Mai", role: "UX Engineer" };
let task = "designing a new macro";

let p = prompt!(
    "User {{user.name}} ({{user.role}}) is currently {{task}}.",
    user = user,
    task = task
).unwrap();

assert_eq!(p, "User Mai (UX Engineer) is currently designing a new macro.");
```
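Because rendering is delegated to minijinja, its built-in filters should also be usable inside templates. A small sketch, assuming the bundled minijinja has its default filters enabled:

```rust
use llm_toolkit::prompt::prompt;

// Assumes minijinja's built-in `upper` filter is reachable through prompt!.
let p = prompt!("Shout: {{ name | upper }}", name = "mai").unwrap();
assert_eq!(p, "Shout: MAI");
```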
### `#[derive(ToPrompt)]` for structs

For core application logic, you can derive the `ToPrompt` trait on your structs to generate prompts in a type-safe way.
Setup:
First, enable the derive feature in your Cargo.toml:
```toml
[dependencies]
llm-toolkit = { version = "0.6", features = ["derive"] }
serde = { version = "1.0", features = ["derive"] }
```
Usage:
Then, use the #[derive(ToPrompt)] and #[prompt(...)] attributes on your struct. The struct must also derive serde::Serialize.
```rust
use llm_toolkit::ToPrompt;
use serde::Serialize;

#[derive(ToPrompt, Serialize)]
#[prompt(template = "USER PROFILE:\nName: {{name}}\nRole: {{role}}")]
struct UserProfile {
    name: &'static str,
    role: &'static str,
}

let user = UserProfile {
    name: "Yui",
    role: "World-Class Pro Engineer",
};
let p = user.to_prompt();
// The following would be generated:
// USER PROFILE:
// Name: Yui
// Role: World-Class Pro Engineer
```
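Since `to_prompt()` yields a plain `String`, it composes naturally with the `prompt!` macro when you need to embed a structured profile inside a larger instruction. Continuing the example above (the wrapper template is purely illustrative):

```rust
use llm_toolkit::prompt::prompt;

// Embed the derived prompt inside a larger instruction.
let full = prompt!(
    "{{profile}}\n\nWrite a one-paragraph bio for the user above.",
    profile = user.to_prompt()
).unwrap();
```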
If you omit the `#[prompt(template = "...")]` attribute on a struct, `ToPrompt` will automatically generate a key-value representation of the struct's fields. You can control this output with field-level attributes:

| Attribute | Description |
|---|---|
| `#[prompt(rename = "new_name")]` | Overrides the key with `"new_name"`. |
| `#[prompt(skip)]` | Excludes the field from the output. |
| `#[prompt(format_with = "path::to::func")]` | Uses a custom function to format the field's value. |
The key for each field is determined with the following priority:

1. The `#[prompt(rename = "...")]` attribute.
2. The doc comment (`/// ...`) on the field.
3. The field name itself.

Comprehensive Example:
```rust
use llm_toolkit::ToPrompt;
use serde::Serialize;

// A custom formatting function
fn format_id(id: &u64) -> String {
    format!("user-{}", id)
}

#[derive(ToPrompt, Serialize)]
struct AdvancedUser {
    /// The user's unique identifier
    id: u64,
    #[prompt(rename = "full_name")]
    name: String,
    // This field will not be included in the prompt
    #[prompt(skip)]
    internal_hash: String,
    // This field will use a custom formatting function for its value
    #[prompt(format_with = "format_id")]
    formatted_id: u64,
}

let user = AdvancedUser {
    id: 123,
    name: "Mai".to_string(),
    internal_hash: "abcdef".to_string(),
    formatted_id: 123,
};

let p = user.to_prompt();
// The following would be generated:
// The user's unique identifier: 123
// full_name: Mai
// formatted_id: user-123
```
When using raw string literals (`r#"..."#`) for your templates, be aware of a potential parsing issue if your template content includes the `#` character (e.g., in a hex color code like `"#FFFFFF"`).
The macro parser can sometimes get confused by the inner `#`. To avoid this, you can use a different number of `#` symbols for the raw string delimiter.
Problematic Example:
```rust
// This might fail to parse correctly
#[prompt(template = r#"{"color": "#FFFFFF"}"#)]
struct Color { /* ... */ }
```
Solution:
```rust
// Use r##"..."## to avoid ambiguity
#[prompt(template = r##"{"color": "#FFFFFF"}"##)]
struct Color { /* ... */ }
```
### `#[derive(ToPrompt)]` for enums

For enums, the `ToPrompt` derive macro provides flexible ways to generate prompts that describe your enum variants for LLM consumption. You can use doc comments, custom descriptions, or exclude variants entirely.
By default, the macro extracts documentation from Rust doc comments (///) on both the enum and its variants:
```rust
use llm_toolkit::ToPrompt;

/// Represents different user intents for a chatbot
#[derive(ToPrompt)]
pub enum BasicIntent {
    /// User wants to greet or say hello
    Greeting,
    /// User is asking for help or assistance
    Help,
}
```
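Judging by the output format of the comprehensive example below, the generated prompt for `BasicIntent` should look roughly like this (the layout is inferred from that example, not quoted from the crate):

```rust
let p = BasicIntent::Greeting.to_prompt();
// Expected output (format inferred from the UserAction example below):
// BasicIntent: Represents different user intents for a chatbot
//
// Possible values:
// - Greeting: User wants to greet or say hello
// - Help: User is asking for help or assistance
```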
The ToPrompt derive macro supports powerful attribute-based controls for fine-tuning the generated prompts:
- `#[prompt("...")]` - Provide a custom description that overrides the doc comment
- `#[prompt(skip)]` - Exclude a variant from the prompt entirely (useful for internal-only variants)

Here's a comprehensive example showcasing all features:
```rust
use llm_toolkit::ToPrompt;

/// Represents different actions a user can take in the system
#[derive(ToPrompt)]
pub enum UserAction {
    /// User wants to create a new document
    CreateDocument,
    /// User is searching for existing content
    Search { query: String },
    #[prompt("Custom: User is updating their profile settings and preferences")]
    UpdateProfile,
    #[prompt(skip)]
    InternalDebugAction,
    DeleteItem,
}

let action = UserAction::CreateDocument;
let p = action.to_prompt();
// The following would be generated:
// UserAction: Represents different actions a user can take in the system
//
// Possible values:
// - CreateDocument: User wants to create a new document
// - Search: User is searching for existing content
// - UpdateProfile: Custom: User is updating their profile settings and preferences
// - DeleteItem
```
Note how in the output:
- `CreateDocument` and `Search` use their doc comments
- `UpdateProfile` uses the custom description from `#[prompt("...")]`
- `InternalDebugAction` is completely excluded due to `#[prompt(skip)]`
- `DeleteItem` appears with just its name since it has no documentation

### `#[derive(ToPromptSet)]`

For applications that need to generate different prompt formats from the same data structure for various contexts (e.g., human-readable vs. machine-parsable, or different LLM models), the `ToPromptSet` derive macro enables powerful multi-target prompt generation.
```rust
use llm_toolkit::ToPromptSet;
use serde::Serialize;

#[derive(ToPromptSet, Serialize)]
#[prompt_for(name = "Visual", template = "## {{title}}\n\n> {{description}}")]
struct Task {
    title: String,
    description: String,
    #[prompt_for(name = "Agent")]
    priority: u8,
    #[prompt_for(name = "Agent", rename = "internal_id")]
    id: u64,
    #[prompt_for(skip)]
    is_dirty: bool,
}

let task = Task {
    title: "Implement feature".to_string(),
    description: "Add new functionality".to_string(),
    priority: 1,
    id: 42,
    is_dirty: false,
};

// Generate a visual-friendly prompt using the template
let visual_prompt = task.to_prompt_for("Visual")?;
// Output: "## Implement feature\n\n> Add new functionality"

// Generate an agent-friendly prompt with key-value format
let agent_prompt = task.to_prompt_for("Agent")?;
// Output: "title: Implement feature\ndescription: Add new functionality\npriority: 1\ninternal_id: 42"
```
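Note that `to_prompt_for` returns a `Result`, so a target name with no matching configuration can presumably be handled rather than panicking. Continuing the example above (the `"Nope"` target is hypothetical, and the exact error behavior is an assumption):

```rust
// Hypothetical unknown target; assumed to surface as an Err value.
if let Err(e) = task.to_prompt_for("Nope") {
    eprintln!("prompt generation failed: {e}");
}
```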
Custom Formatting Functions:
```rust
use llm_toolkit::ToPromptSet;
use serde::Serialize;

fn format_priority(priority: &u8) -> String {
    match priority {
        1 => "Low".to_string(),
        2 => "Medium".to_string(),
        3 => "High".to_string(),
        _ => "Unknown".to_string(),
    }
}

#[derive(ToPromptSet, Serialize)]
struct FormattedTask {
    title: String,
    #[prompt_for(name = "Human", format_with = "format_priority")]
    priority: u8,
}
```
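Fields without a `#[prompt_for]` attribute appear in every target (as `title` and `description` did for the `Agent` target above), so a `Human` rendering of this struct would presumably look like the following sketch:

```rust
let task = FormattedTask {
    title: "Ship release".to_string(),
    priority: 1,
};
let human_prompt = task.to_prompt_for("Human")?;
// Expected key-value output (inferred): "title: Ship release\npriority: Low"
```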
Multimodal Support:
```rust
use llm_toolkit::prompt::{PromptPart, ToPrompt};

#[derive(ToPromptSet, Serialize)]
#[prompt_for(name = "Multimodal", template = "Analyzing image: {{caption}}")]
struct ImageTask {
    caption: String,
    #[prompt_for(name = "Multimodal", image)]
    image: ImageData,
}

// Generate a multimodal prompt with both text and image parts
let parts = task.to_prompt_parts_for("Multimodal")?;
// Returns Vec<PromptPart> with both Image and Text parts
```
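Downstream code can then dispatch on the part type before handing the pieces to a provider client. The variant shape below is an assumption for illustration; check the crate's `PromptPart` definition for the real payload types:

```rust
use llm_toolkit::prompt::PromptPart;

// Assumption: a Text variant carrying a String; other variants (e.g. Image)
// are matched generically since their payloads are not documented here.
for part in &parts {
    match part {
        PromptPart::Text(text) => println!("text part: {text}"),
        _ => println!("(non-text part, e.g. image)"),
    }
}
```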
| Attribute | Description | Example |
|---|---|---|
| `#[prompt_for(name = "TargetName")]` | Include field in a specific target | `#[prompt_for(name = "Debug")]` |
| `#[prompt_for(name = "Target", template = "...")]` | Use a template for the target (struct-level) | `#[prompt_for(name = "Visual", template = "{{title}}")]` |
| `#[prompt_for(name = "Target", rename = "new_name")]` | Rename the field for a specific target | `#[prompt_for(name = "API", rename = "task_id")]` |
| `#[prompt_for(name = "Target", format_with = "func")]` | Custom formatting function | `#[prompt_for(name = "Human", format_with = "format_date")]` |
| `#[prompt_for(name = "Target", image)]` | Mark field as image content | `#[prompt_for(name = "Vision", image)]` |
| `#[prompt_for(skip)]` | Exclude field from all targets | `#[prompt_for(skip)]` |
When to use `ToPromptSet` vs `ToPrompt`:

- `ToPrompt`: A single, consistent prompt format across your application
- `ToPromptSet`: Multiple prompt formats needed for different contexts (human vs. machine, different LLM models, etc.)

A planned feature is to introduce a unified interface for handling image inputs across different LLM providers. This would abstract away the complexities of dealing with various data formats (e.g., Base64, URLs, local file paths) and model-specific requirements, providing a simple and consistent API for multimodal applications.