| Crates.io | llm-toolkit-expertise |
| lib.rs | llm-toolkit-expertise |
| version | 0.2.1 |
| created_at | 2025-11-20 16:27:55.881408+00 |
| updated_at | 2025-11-23 15:26:05.148735+00 |
| description | ⚠️ DEPRECATED: Use llm-toolkit::agent::expertise instead. This crate is archived and will not receive updates. |
| homepage | |
| repository | https://github.com/ynishi/llm-toolkit |
| max_upload_size | |
| id | 1942242 |
| size | 165,348 |
This crate has been archived and integrated into `llm-toolkit` core.

Migration: use `llm-toolkit::agent::expertise` instead.

All functionality is now available in the main `llm-toolkit` crate under the `agent` feature. This crate will not receive further updates.
Agent as Code: Graph-based composition system for LLM agent capabilities.
Replace imports:

```diff
- use llm_toolkit_expertise::{Expertise, WeightedFragment, KnowledgeFragment};
- use llm_toolkit_expertise::{Priority, TaskHealth, ContextProfile};
+ use llm_toolkit::agent::expertise::{Expertise, WeightedFragment, KnowledgeFragment};
+ use llm_toolkit::context::{Priority, TaskHealth, ContextProfile};
```

All APIs remain identical; only the import paths have changed.
llm-toolkit-expertise provides a flexible, composition-based approach to defining LLM agent expertise through weighted knowledge fragments. Instead of rigid inheritance hierarchies, expertise is built by composing independent fragments with priorities and contextual activation rules.
Add to your `Cargo.toml`:

```toml
[dependencies]
llm-toolkit-expertise = "0.2.1"
```
```rust
use llm_toolkit_expertise::{
    ContextProfile, Expertise, KnowledgeFragment, Priority, TaskHealth, WeightedFragment,
};

fn main() {
    // Create a code-reviewer expertise
    let expertise = Expertise::new("rust-reviewer", "1.0")
        .with_tag("lang:rust")
        .with_tag("role:reviewer")
        .with_fragment(
            WeightedFragment::new(KnowledgeFragment::Text(
                "Always run cargo check before reviewing".to_string(),
            ))
            .with_priority(Priority::Critical),
        )
        .with_fragment(
            WeightedFragment::new(KnowledgeFragment::Logic {
                instruction: "Check for security issues".to_string(),
                steps: vec![
                    "Scan for unsafe code".to_string(),
                    "Verify input validation".to_string(),
                ],
            })
            .with_priority(Priority::High)
            .with_context(ContextProfile::Conditional {
                task_types: vec!["security-review".to_string()],
                user_states: vec![],
                task_health: Some(TaskHealth::AtRisk),
            }),
        );

    // Generate the prompt
    println!("{}", expertise.to_prompt());

    // Generate visualizations
    println!("{}", expertise.to_tree());
    println!("{}", expertise.to_mermaid());
}
```
Control how strongly knowledge should be enforced:

```rust
WeightedFragment::new(fragment)
    .with_priority(Priority::Critical)
```
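For example, here is a minimal sketch (using only the constructors shown above) contrasting two enforcement levels; exactly how the renderer orders and labels fragments by priority is up to the crate:

```rust
use llm_toolkit_expertise::{Expertise, KnowledgeFragment, Priority, WeightedFragment};

fn main() {
    let expertise = Expertise::new("style-guide", "1.0")
        .with_fragment(
            WeightedFragment::new(KnowledgeFragment::Text(
                "Never log secrets".to_string(),
            ))
            .with_priority(Priority::Critical),
        )
        .with_fragment(
            WeightedFragment::new(KnowledgeFragment::Text(
                "Prefer iterators over index loops".to_string(),
            ))
            .with_priority(Priority::High),
        );

    // Higher-priority guidance is enforced more strongly in the rendered prompt.
    println!("{}", expertise.to_prompt());
}
```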
Fragments can be conditionally activated based on:

- Task types: `"debug"`, `"security-review"`, `"refactor"`, etc.
- User states: `"beginner"`, `"expert"`, `"confused"`, etc.
- Task health: `OnTrack`, `AtRisk`, `OffTrack`

```rust
ContextProfile::Conditional {
    task_types: vec!["debug".to_string()],
    user_states: vec!["beginner".to_string()],
    task_health: Some(TaskHealth::AtRisk),
}
```
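A minimal sketch of what such a profile does at render time, using the `RenderContext` API covered in the context-aware section later in this README:

```rust
use llm_toolkit_expertise::{
    ContextProfile, Expertise, KnowledgeFragment, RenderContext, WeightedFragment,
};

fn main() {
    let expertise = Expertise::new("tutor", "1.0").with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "Explain each step in detail".to_string(),
        ))
        .with_context(ContextProfile::Conditional {
            task_types: vec![],
            user_states: vec!["beginner".to_string()],
            task_health: None,
        }),
    );

    // Matching user state: the conditional fragment is included in the output.
    let beginner = RenderContext::new().with_user_state("beginner");
    println!("{}", expertise.to_prompt_with_render_context(&beginner));

    // No matching state: the conditional fragment is filtered out.
    let other = RenderContext::new();
    println!("{}", expertise.to_prompt_with_render_context(&other));
}
```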
Five types of knowledge representation are supported; Logic and Guideline are shown here:
```rust
// Logic fragment
KnowledgeFragment::Logic {
    instruction: "Systematic debugging".to_string(),
    steps: ["Reproduce", "Isolate", "Fix", "Verify"]
        .iter()
        .map(|s| s.to_string())
        .collect(),
}

// Guideline with anchoring
KnowledgeFragment::Guideline {
    rule: "Prefer explicit error handling".to_string(),
    anchors: vec![Anchor {
        context: "Parsing user input".to_string(),
        positive: "parse().map_err(|e| Error::Parse(e))?".to_string(),
        negative: "parse().unwrap()".to_string(),
        reason: "Unwrap can panic on bad input".to_string(),
    }],
}
```
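These fragments compose like any other; a brief sketch, assuming `Anchor` is exported at the crate root alongside the other types:

```rust
use llm_toolkit_expertise::{Anchor, Expertise, KnowledgeFragment, WeightedFragment};

fn main() {
    let guideline = KnowledgeFragment::Guideline {
        rule: "Prefer explicit error handling".to_string(),
        anchors: vec![Anchor {
            context: "Parsing user input".to_string(),
            positive: "parse().map_err(|e| Error::Parse(e))?".to_string(),
            negative: "parse().unwrap()".to_string(),
            reason: "Unwrap can panic on bad input".to_string(),
        }],
    };

    // Any fragment variant wraps into a WeightedFragment the same way.
    let expertise = Expertise::new("error-handling", "1.0")
        .with_fragment(WeightedFragment::new(guideline));

    println!("{}", expertise.to_prompt());
}
```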
Generate multiple visualization formats:

Tree View:

```rust
let tree = expertise.to_tree();
// Output:
// Expertise: rust-reviewer (v1.0)
// ├─ Tags: lang:rust, role:reviewer
// └─ Content:
//    ├─ [CRITICAL] Text: Always run cargo check...
//    └─ [HIGH] Logic: Check for security issues
//       └─ Health: ⚠️ At Risk
```

Mermaid Graph:

```rust
let mermaid = expertise.to_mermaid();
// Generates Mermaid syntax with color-coded priority nodes
```
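The Mermaid source can be written to disk and pasted into any Mermaid renderer (for example mermaid.live); a minimal sketch:

```rust
use std::fs;
use llm_toolkit_expertise::Expertise;

fn main() -> std::io::Result<()> {
    // A trivial expertise; in practice, build it as in the quick start.
    let expertise = Expertise::new("rust-reviewer", "1.0").with_tag("lang:rust");

    // Save the generated Mermaid source for external rendering.
    fs::write("expertise.mmd", expertise.to_mermaid())?;
    Ok(())
}
```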
Enable the `integration` feature to use the `ToPrompt` trait:

```toml
[dependencies]
llm-toolkit-expertise = { version = "0.2.1", features = ["integration"] }
```

```rust
use llm_toolkit::ToPrompt;

let expertise = Expertise::new("test", "1.0")
    .with_fragment(/* ... */);
let prompt_part = expertise.to_prompt()?;
```
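Note that `to_prompt()` is fallible here (hence the `?` above); a sketch of propagating the error, assuming the error type converts into `Box<dyn std::error::Error>`:

```rust
use llm_toolkit::ToPrompt;
use llm_toolkit_expertise::{Expertise, KnowledgeFragment, WeightedFragment};

fn render() -> Result<String, Box<dyn std::error::Error>> {
    let expertise = Expertise::new("test", "1.0").with_fragment(WeightedFragment::new(
        KnowledgeFragment::Text("Be concise".to_string()),
    ));
    // Fallible under the integration feature; `?` converts the error.
    Ok(expertise.to_prompt()?)
}
```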
Dynamically filter and render expertise based on runtime context:

```rust
use serde::Serialize;
use llm_toolkit::ToPrompt; // ToPrompt derive assumes the `integration` feature
use llm_toolkit_expertise::{
    ContextProfile, ContextualPrompt, Expertise, KnowledgeFragment, Priority,
    RenderContext, WeightedFragment,
};

// Create expertise with conditional fragments
let expertise = Expertise::new("rust-tutor", "1.0")
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "You are a Rust programming tutor".to_string(),
        ))
        .with_priority(Priority::High)
        .with_context(ContextProfile::Always),
    )
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "Provide detailed explanations with examples".to_string(),
        ))
        .with_context(ContextProfile::Conditional {
            task_types: vec![],
            user_states: vec!["beginner".to_string()],
            task_health: None,
        }),
    )
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "Focus on advanced patterns and optimizations".to_string(),
        ))
        .with_context(ContextProfile::Conditional {
            task_types: vec![],
            user_states: vec!["expert".to_string()],
            task_health: None,
        }),
    );

// Method 1: Direct rendering with context
let beginner_context = RenderContext::new().with_user_state("beginner");
let beginner_prompt = expertise.to_prompt_with_render_context(&beginner_context);
// Contains: base fragment + beginner-specific guidance

// Method 2: ContextualPrompt wrapper (for DTO integration)
let expert_prompt = ContextualPrompt::from_expertise(&expertise, RenderContext::new())
    .with_user_state("expert")
    .to_prompt();
// Contains: base fragment + expert-specific guidance

// Method 3: DTO pattern integration
#[derive(Serialize, ToPrompt)]
#[prompt(template = "# Expertise\n{{expertise}}\n\n# Task\n{{task}}")]
struct AgentRequest {
    expertise: String, // ContextualPrompt.to_prompt() result
    task: String,
}

let request = AgentRequest {
    expertise: ContextualPrompt::from_expertise(
        &expertise,
        RenderContext::new().with_user_state("beginner"),
    )
    .to_prompt(),
    task: "Explain ownership and borrowing".to_string(),
};

let final_prompt = request.to_prompt();
```
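In practice the `RenderContext` is usually derived from request-time data; a hypothetical sketch (the `Session` type below is illustrative, not part of the crate):

```rust
use llm_toolkit_expertise::RenderContext;

// Illustrative request-time data; not part of the crate.
struct Session {
    skill_level: String, // e.g. "beginner" or "expert"
}

// Map session data to a render context for prompt generation.
fn context_for(session: &Session) -> RenderContext {
    RenderContext::new().with_user_state(session.skill_level.as_str())
}
```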
Key Features:

- `ContextualPrompt` implements `to_prompt()` for seamless template usage
- Plain `to_prompt()` still works (uses an empty context)

Generate a JSON Schema for validation and tooling:
```rust
use llm_toolkit_expertise::{dump_expertise_schema, save_expertise_schema};

// Get the schema as JSON
let schema = dump_expertise_schema();
println!("{}", serde_json::to_string_pretty(&schema)?);

// Save to a file
save_expertise_schema("expertise-schema.json")?;
```
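The schema pairs naturally with serialized expertise definitions; a sketch that assumes `Expertise` implements `serde::Serialize` (implied by the schema support, but an assumption here):

```rust
use llm_toolkit_expertise::{save_expertise_schema, Expertise};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumption: Expertise implements serde::Serialize.
    let expertise = Expertise::new("rust-reviewer", "1.0").with_tag("lang:rust");
    std::fs::write(
        "rust-reviewer.expertise.json",
        serde_json::to_string_pretty(&expertise)?,
    )?;

    // External tools can validate such files against the saved schema.
    save_expertise_schema("expertise-schema.json")?;
    Ok(())
}
```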
The crate includes several examples:

```sh
# Basic expertise creation and usage
cargo run --example basic_expertise

# Generate JSON Schema
cargo run --example generate_schema

# Context-aware prompt generation
cargo run --example prompt_generation
```
Unlike traditional inheritance-based systems, llm-toolkit-expertise uses graph composition: independent fragments are combined with priorities and contextual activation rules rather than locked into a rigid hierarchy.

The `TaskHealth` enum (`OnTrack`, `AtRisk`, `OffTrack`) enables "gear shifting" based on task status. This mirrors how senior engineers adjust their approach based on project health.
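A sketch of that gear shifting using the construction APIs shown above (the `health_fragment` helper is local to this example, not part of the crate):

```rust
use llm_toolkit_expertise::{
    ContextProfile, Expertise, KnowledgeFragment, TaskHealth, WeightedFragment,
};

// Local helper: a fragment that activates only for the given health state.
fn health_fragment(text: &str, health: TaskHealth) -> WeightedFragment {
    WeightedFragment::new(KnowledgeFragment::Text(text.to_string())).with_context(
        ContextProfile::Conditional {
            task_types: vec![],
            user_states: vec![],
            task_health: Some(health),
        },
    )
}

fn main() {
    let expertise = Expertise::new("project-coach", "1.0")
        .with_fragment(health_fragment(
            "Keep momentum; ship incrementally",
            TaskHealth::OnTrack,
        ))
        .with_fragment(health_fragment(
            "Cut scope and stabilize",
            TaskHealth::AtRisk,
        ))
        .with_fragment(health_fragment(
            "Stop; re-plan before writing more code",
            TaskHealth::OffTrack,
        ));

    println!("{}", expertise.to_tree());
}
```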
Recent additions include:

- `RenderContext` for runtime context management
- `ContextualPrompt` wrapper for DTO integration
- Context-aware `to_prompt()` rendering

Contributions welcome! This is an early-stage project exploring new patterns for agent capability composition.
MIT License - see LICENSE for details.