| Crates.io | language-model-token-expander |
| lib.rs | language-model-token-expander |
| version | 0.1.2 |
| created_at | 2025-03-31 22:33:25.969379+00 |
| updated_at | 2025-07-13 11:39:22.311843+00 |
| description | A crate for token expansion, leveraging batch workflows and language model APIs. |
| homepage | |
| repository | https://github.com/klebs6/klebs-general |
| max_upload_size | |
| id | 1614168 |
| size | 151,728 |
This crate provides a high-level, batch-oriented token expansion system. By integrating with the batch-mode-batch-workflow tooling and a language model client (e.g., OpenAI), it streamlines batch token processing, request generation, and output reconciliation.
LanguageModelTokenExpander Struct

- Uses `#[derive(LanguageModelBatchWorkflow)]` to implement a comprehensive batch workflow.
- Relies on the `CreateLanguageModelRequestsAtAgentCoordinate` trait to define how requests are formed.

Modular Error Type

- `TokenExpanderError` consolidates a range of possible error variants (e.g., file I/O, reconciliation errors) into one convenient enum.

ComputeLanguageModelRequests Integration
```rust
use language_model_token_expander::*;
use batch_mode_token_expander::CreateLanguageModelRequestsAtAgentCoordinate;
use std::sync::Arc;
use agent_coordinate::AgentCoordinate;

#[tokio::main]
async fn main() -> Result<(), TokenExpanderError> {
    // Provide an implementation for CreateLanguageModelRequestsAtAgentCoordinate
    struct MyRequestCreator;

    impl CreateLanguageModelRequestsAtAgentCoordinate for MyRequestCreator {
        fn create_language_model_requests_at_agent_coordinate<X: IntoLanguageModelQueryString>(
            &self,
            model: &LanguageModelType,
            coord: &AgentCoordinate,
            inputs: &[X],
        ) -> Vec<LanguageModelBatchAPIRequest> {
            // custom request creation logic goes here
            vec![]
        }
    }

    let my_expander = LanguageModelTokenExpander::new(
        "/path/to/batch_workspace",
        Arc::new(MyRequestCreator),
        AgentCoordinate::default(),
        LanguageModelType::default(),
        ExpectedContentType::Json,
    ).await?;

    // Provide seeds (tokens) to expand
    let seeds = vec![];
    my_expander.plant_seed_and_wait(&seeds).await?;

    Ok(())
}
```
In this example, LanguageModelTokenExpander automatically handles workspace management and organizes the batch flow from seed input to final JSON output. You need only define how to convert your tokens into LanguageModelBatchAPIRequest structures.
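To make the token-to-request conversion concrete, here is a minimal, self-contained sketch of the pattern. `MockRequest` and `Token` are hypothetical stand-ins invented for this example; they are not the crate's `LanguageModelBatchAPIRequest` or token types, and the real trait method additionally receives an `AgentCoordinate`.

```rust
// Stand-in for the crate's LanguageModelBatchAPIRequest (assumption, for illustration only).
#[derive(Debug, PartialEq)]
struct MockRequest {
    prompt: String,
}

// Stand-in for a seed token type.
struct Token(String);

/// Turn each seed token into one prompt request, mirroring the role a
/// `CreateLanguageModelRequestsAtAgentCoordinate` implementation plays.
fn create_requests(model: &str, tokens: &[Token]) -> Vec<MockRequest> {
    tokens
        .iter()
        .map(|t| MockRequest {
            prompt: format!("[model={model}] Expand the token: {}", t.0),
        })
        .collect()
}

fn main() {
    let seeds = vec![Token("gravity".into()), Token("entropy".into())];
    let requests = create_requests("gpt-4o", &seeds);
    assert_eq!(requests.len(), 2);
    assert!(requests[0].prompt.contains("gravity"));
    println!("{}", requests[0].prompt);
}
```

The key design point is that request construction is pure and per-token: the expander owns batching, submission, and reconciliation, while your creator only maps inputs to requests.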