| Crates.io | intent-classifier |
| lib.rs | intent-classifier |
| version | 0.1.0 |
| created_at | 2025-07-08 12:38:00.54959+00 |
| updated_at | 2025-07-08 12:38:00.54959+00 |
| description | A flexible few-shot intent classification library for natural language processing |
| homepage | |
| repository | https://github.com/ciresnave/intent-classifier |
| max_upload_size | |
| id | 1742613 |
| size | 132,141 |
A flexible few-shot intent classification library for natural language processing in Rust. This library provides a simple API for classifying user intents from text using machine learning and rule-based approaches.
Add this to your Cargo.toml:
```toml
[dependencies]
intent-classifier = "0.1.0"
```
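The examples below use the async API, so a Tokio runtime is needed alongside the crate. A typical setup (assuming the multi-threaded runtime with macros; adjust the feature list to your project):

```toml
[dependencies]
intent-classifier = "0.1.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```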
Basic usage:

```rust
use intent_classifier::{IntentClassifier, TrainingExample, TrainingSource, IntentId};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a new classifier
    let classifier = IntentClassifier::new().await?;

    // Predict an intent
    let prediction = classifier.predict_intent("merge these JSON files together").await?;
    println!(
        "Intent: {}, Confidence: {:.3}",
        prediction.intent,
        prediction.confidence.value()
    );

    // Add custom training data
    let example = TrainingExample {
        text: "calculate the sum of these numbers".to_string(),
        intent: IntentId::from("math_operation"),
        confidence: 1.0,
        source: TrainingSource::Programmatic,
    };
    classifier.add_training_example(example).await?;

    // Get statistics
    let stats = classifier.get_stats().await;
    println!("Training examples: {}", stats.training_examples);

    Ok(())
}
```
The library comes with bootstrap training data for common intent categories:
- `data_merge` - Combining multiple data files
- `data_split` - Splitting large files into smaller ones
- `data_transform` - Converting between data formats
- `data_analyze` - Analyzing datasets for patterns
- `file_read` - Reading file contents
- `file_write` - Writing data to files
- `file_convert` - Converting file formats
- `file_compare` - Comparing files
- `network_request` - Making HTTP/API requests
- `network_download` - Downloading files from URLs
- `network_monitor` - Monitoring network services
- `extraction` - Extracting information from documents
- `validation` - Validating data against schemas
- `generation` - Generating reports or documentation
- `classification` - Categorizing content
- `code_analyze` - Analyzing source code
- `text_process` - Processing text documents

Customize the classifier behavior:
```rust
use intent_classifier::{IntentClassifier, ClassifierConfig};

let config = ClassifierConfig {
    feature_dimensions: 1000,
    max_vocabulary_size: 15000,
    min_confidence_threshold: 0.4,
    retraining_threshold: 5,
    debug_mode: true,
};

let classifier = IntentClassifier::with_config(config).await?;
```
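Predictions whose confidence falls below the configured `min_confidence_threshold` can be treated as uncertain before acting on them. A minimal sketch of that gating (the fallback behavior here is illustrative, not part of the library API):

```rust
let prediction = classifier.predict_intent("summarize this document").await?;

if prediction.confidence.value() >= 0.4 {
    println!("Acting on intent: {}", prediction.intent);
} else {
    // Below the configured threshold: ask for clarification instead of guessing
    println!(
        "Low confidence ({:.3}), please rephrase the request",
        prediction.confidence.value()
    );
}
```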
Add custom training examples programmatically:

```rust
let example = TrainingExample {
    text: "solve this mathematical equation".to_string(),
    intent: IntentId::from("math_operation"),
    confidence: 1.0,
    source: TrainingSource::Programmatic,
};
classifier.add_training_example(example).await?;
```
Provide feedback on predictions so the classifier can learn from corrections:

```rust
use intent_classifier::IntentFeedback;

let feedback = IntentFeedback {
    text: "combine these files".to_string(),
    predicted_intent: IntentId::from("file_write"),
    actual_intent: IntentId::from("data_merge"),
    satisfaction_score: 4.0,
    notes: Some("Should be classified as data merge".to_string()),
    timestamp: chrono::Utc::now(),
};
classifier.add_feedback(feedback).await?;
```
For more control over the result, build a `ClassificationRequest` directly:

```rust
use intent_classifier::ClassificationRequest;

let request = ClassificationRequest {
    text: "analyze this dataset".to_string(),
    context: None,
    include_alternatives: true,
    include_reasoning: true,
};

let response = classifier.classify(request).await?;
println!("Processing time: {:.2}ms", response.processing_time_ms);
```
Training data can be exported as JSON and imported into another classifier:

```rust
// Export training data
let exported = classifier.export_training_data().await?;
std::fs::write("training_data.json", exported)?;

// Import training data
let imported = std::fs::read_to_string("training_data.json")?;
let new_classifier = IntentClassifier::new().await?;
new_classifier.import_training_data(&imported).await?;
```
Route different types of tasks to specialized language models:
```rust
async fn route_task(
    classifier: &IntentClassifier,
    task: &str,
) -> Result<String, Box<dyn std::error::Error>> {
    let prediction = classifier.predict_intent(task).await?;

    let model = match prediction.intent.0.as_str() {
        "code_analyze" => "code-specialist-llm",
        "data_analyze" => "data-science-llm",
        "writing_creative" => "creative-writing-llm",
        _ => "general-purpose-llm",
    };

    Ok(model.to_string())
}
```
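The chatbot example below maps predicted intents to an application-defined response type. A hypothetical minimal definition (not provided by the library) might look like:

```rust
// Hypothetical application-side type, not part of intent-classifier
enum Response {
    Greeting,
    Answer,
    Support,
    Default,
}
```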
```rust
async fn handle_user_message(
    classifier: &IntentClassifier,
    message: &str,
) -> Result<Response, Box<dyn std::error::Error>> {
    let prediction = classifier.predict_intent(message).await?;

    Ok(match prediction.intent.0.as_str() {
        "greeting" => Response::Greeting,
        "question" => Response::Answer,
        "complaint" => Response::Support,
        _ => Response::Default,
    })
}
```
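Similarly, a command handler can map intents to tool actions. `ToolAction` is again a hypothetical application-defined enum:

```rust
// Hypothetical application-side type, not part of intent-classifier
enum ToolAction {
    ReadFile,
    MergeData,
    HttpRequest,
    Help,
}
```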
```rust
async fn classify_command(
    classifier: &IntentClassifier,
    command: &str,
) -> Result<ToolAction, Box<dyn std::error::Error>> {
    let prediction = classifier.predict_intent(command).await?;

    Ok(match prediction.intent.0.as_str() {
        "file_read" => ToolAction::ReadFile,
        "data_merge" => ToolAction::MergeData,
        "network_request" => ToolAction::HttpRequest,
        _ => ToolAction::Help,
    })
}
```
Run the examples to see the library in action:
```bash
# Basic usage example
cargo run --example basic_usage

# Multi-LLM orchestration example
cargo run --example multi_llm_orchestration
```
The library is designed for high performance:

- DashMap for thread-safe concurrent access

The library consists of several key components.
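Because the examples above call the classifier's async methods through shared references, a single instance can plausibly be shared across tasks. A minimal sketch of concurrent use, assuming `IntentClassifier` is `Send + Sync` (as its DashMap-backed internals suggest):

```rust
use std::sync::Arc;
use intent_classifier::IntentClassifier;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One classifier shared by several concurrent tasks
    let classifier = Arc::new(IntentClassifier::new().await?);

    let mut handles = Vec::new();
    for text in ["merge these JSON files", "download the report", "compare these files"] {
        let classifier = Arc::clone(&classifier);
        handles.push(tokio::spawn(async move {
            // Discard error details to keep the spawned task's output simple
            classifier.predict_intent(text).await.ok().map(|p| (text, p.intent))
        }));
    }

    for handle in handles {
        if let Ok(Some((text, intent))) = handle.await {
            println!("{text} -> {intent}");
        }
    }

    Ok(())
}
```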
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.