| Crates.io | llm-memory-graph |
| lib.rs | llm-memory-graph |
| version | 0.1.0 |
| created_at | 2025-11-07 06:01:14.593969+00 |
| updated_at | 2025-11-07 06:01:14.593969+00 |
| description | Graph-based context-tracking and prompt-lineage database for LLM systems |
| homepage | https://github.com/globalbusinessadvisors/llm-memory-graph |
| repository | https://github.com/globalbusinessadvisors/llm-memory-graph |
| max_upload_size | |
| id | 1921200 |
| size | 1,209,738 |
Graph-based context-tracking and prompt-lineage database for LLM systems.
llm-memory-graph provides a persistent, queryable graph database specifically designed for tracking LLM interactions, managing conversation contexts, and tracing prompt lineage through complex multi-agent systems.
Key node types:
- PromptNode: track prompts and their metadata
- CompletionNode: store LLM responses
- ConversationNode: organize multi-turn dialogues
- ToolInvocationNode: track tool/function calls
- AgentNode: coordinate multi-agent systems
- DocumentNode, ContextNode, FeedbackNode, and more

Add this to your Cargo.toml:
[dependencies]
llm-memory-graph = "0.1.0"
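A basic synchronous example: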
use llm_memory_graph::{MemoryGraph, NodeType, EdgeType, CreateNodeRequest};
use std::collections::HashMap;

fn main() -> anyhow::Result<()> {
    // Initialize the graph database
    let graph = MemoryGraph::new("./data/memory_graph")?;

    // Create a prompt node
    let prompt_id = graph.create_node(CreateNodeRequest {
        node_type: NodeType::Prompt,
        content: "What is the capital of France?".to_string(),
        metadata: HashMap::new(),
    })?;

    // Create a completion node
    let completion_id = graph.create_node(CreateNodeRequest {
        node_type: NodeType::Completion,
        content: "The capital of France is Paris.".to_string(),
        metadata: HashMap::new(),
    })?;

    // Link them with an edge
    graph.create_edge(
        prompt_id,
        completion_id,
        EdgeType::Generates,
        HashMap::new(),
    )?;

    // Query the graph
    let nodes = graph.get_neighbors(prompt_id, Some(EdgeType::Generates))?;
    println!("Found {} completion nodes", nodes.len());
    Ok(())
}
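For async workloads, AsyncMemoryGraph offers streaming queries: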
use llm_memory_graph::{AsyncMemoryGraph, NodeType, QueryBuilder};
use futures::StreamExt;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let graph = AsyncMemoryGraph::new("./data/memory_graph").await?;

    let query = QueryBuilder::new()
        .node_type(NodeType::Prompt)
        .limit(100)
        .build();

    let mut stream = graph.query_stream(query).await?;
    while let Some(node) = stream.next().await {
        println!("Node: {:?}", node?);
    }
    Ok(())
}
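Reusable prompt templates can be defined and rendered with typed variables: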
use llm_memory_graph::{PromptTemplate, TemplateVariable};
use std::collections::HashMap;

// Create a reusable prompt template
let template = PromptTemplate::new(
    "summarization",
    "Summarize the following text:\n\n{{text}}\n\nSummary:",
    vec![TemplateVariable::new("text", "string", true)],
);

// Render with variables
let mut vars = HashMap::new();
vars.insert("text".to_string(), "Long text to summarize...".to_string());
let rendered = template.render(&vars)?;
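A rendered template can be stored like any other prompt. The sketch below combines the two APIs shown above; it assumes render yields a String that create_node accepts directly:

use llm_memory_graph::{CreateNodeRequest, MemoryGraph, NodeType, PromptTemplate, TemplateVariable};
use std::collections::HashMap;

fn main() -> anyhow::Result<()> {
    let graph = MemoryGraph::new("./data/memory_graph")?;

    // Define and render the template as above
    let template = PromptTemplate::new(
        "summarization",
        "Summarize the following text:\n\n{{text}}\n\nSummary:",
        vec![TemplateVariable::new("text", "string", true)],
    );
    let mut vars = HashMap::new();
    vars.insert("text".to_string(), "Long text to summarize...".to_string());
    let rendered = template.render(&vars)?; // assumed to return a String

    // Persist the rendered prompt so its lineage can be traced later
    let _prompt_id = graph.create_node(CreateNodeRequest {
        node_type: NodeType::Prompt,
        content: rendered,
        metadata: HashMap::new(),
    })?;
    Ok(())
}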
The graph supports multiple specialized node types for different use cases (see the list above), and relationships between nodes are typed, such as EdgeType::Generates in the quick-start example. The query interface supports filtering by node type, limiting result counts, and streaming results asynchronously, as the QueryBuilder example shows. The sketch below ties node types and typed edges together.
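As a hedged sketch of how typed edges might organize a dialogue: the example assumes NodeType::Conversation exists alongside NodeType::Prompt, and uses a hypothetical EdgeType::PartOf; the actual variant names may differ.

use llm_memory_graph::{CreateNodeRequest, EdgeType, MemoryGraph, NodeType};
use std::collections::HashMap;

fn main() -> anyhow::Result<()> {
    let graph = MemoryGraph::new("./data/memory_graph")?;

    // NodeType::Conversation is assumed from the ConversationNode type listed above
    let conversation_id = graph.create_node(CreateNodeRequest {
        node_type: NodeType::Conversation,
        content: "Geography Q&A session".to_string(),
        metadata: HashMap::new(),
    })?;

    let prompt_id = graph.create_node(CreateNodeRequest {
        node_type: NodeType::Prompt,
        content: "What is the capital of France?".to_string(),
        metadata: HashMap::new(),
    })?;

    // EdgeType::PartOf is hypothetical; substitute the crate's real edge type
    graph.create_edge(conversation_id, prompt_id, EdgeType::PartOf, HashMap::new())?;

    // Traverse by edge type to recover the conversation's turns
    let turns = graph.get_neighbors(conversation_id, Some(EdgeType::PartOf))?;
    println!("Conversation has {} linked nodes", turns.len());
    Ok(())
}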
Edges can carry rich metadata:
let mut edge_metadata = HashMap::new();
edge_metadata.insert("model".to_string(), "gpt-4".to_string());
edge_metadata.insert("temperature".to_string(), "0.7".to_string());
edge_metadata.insert("tokens".to_string(), "150".to_string());

graph.create_edge_with_properties(
    prompt_id,
    completion_id,
    EdgeType::Generates,
    edge_metadata,
)?;
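Properties here are plain string key/value pairs, so generation parameters such as model, temperature, and token counts stay attached to the prompt-completion relationship itself rather than to either node.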
Built-in migration system for schema evolution:
use llm_memory_graph::migration::{MigrationEngine, Migration};

let mut engine = MigrationEngine::new(graph);

engine.add_migration(Migration::new(
    "001",
    "add_timestamps",
    |graph| {
        // Migration logic
        Ok(())
    },
))?;

engine.run_migrations()?;
Export metrics to Prometheus:
use llm_memory_graph::observatory::{Observatory, PrometheusExporter};
let observatory = Observatory::new();
let exporter = PrometheusExporter::new("localhost:9090")?;
observatory.add_exporter(exporter);
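With the exporter registered, a Prometheus server can then scrape graph metrics from the configured address.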
Built on proven technologies.
The repository includes comprehensive examples:
- simple_chatbot.rs: basic chatbot with conversation tracking
- async_streaming_queries.rs: async query patterns
- edge_properties.rs: working with edge metadata
- prompt_templates.rs: template system usage
- tool_invocations.rs: tool call tracking
- observatory_demo.rs: observability integration
- migration_guide.rs: schema migration patterns

Run an example:
cargo run --example simple_chatbot
The crate is optimized for production use and designed to integrate with the broader LLM DevOps ecosystem.
Contributions are welcome! Please ensure:
- cargo test
- cargo fmt
- cargo clippy

Licensed under either of:

- Apache License, Version 2.0
- MIT License
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.