| Crates.io | langsmith-rust |
| lib.rs | langsmith-rust |
| version | 0.1.3 |
| created_at | 2025-12-26 23:05:24.509931+00 |
| updated_at | 2025-12-28 21:27:11.84443+00 |
| description | Rust crate for manual tracing to LangSmith, providing similar ergonomics to the Python and TypeScript SDKs |
| homepage | |
| repository | https://github.com/teachmewow/langsmith-rust |
| max_upload_size | |
| id | 2006382 |
| size | 167,300 |
A production-ready Rust crate for manual tracing to LangSmith, providing similar ergonomics to the Python and TypeScript SDKs. Designed for building observable AI agent systems with LangGraph-like architectures.
Features include .env support and a trace_node helper.

Add to your Cargo.toml:
[dependencies]
langsmith-rust = "0.1.3"
Or from a git repository:
[dependencies]
langsmith-rust = { git = "https://github.com/your-org/langsmith-rust" }
Create a .env file in your project root:
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
LANGSMITH_API_KEY=<your-api-key>
LANGSMITH_PROJECT=<your-project-name>
LANGSMITH_TENANT_ID=<workspace-id> # Optional
use langsmith_rust;
// Initialize (loads .env automatically)
langsmith_rust::init();
use langsmith_rust::{Tracer, RunType};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    langsmith_rust::init();

    // Create root tracer
    let mut tracer = Tracer::new(
        "Agent Pipeline",
        RunType::Chain,
        json!({"question": "What is Rust?"})
    ).with_thread_id("thread-123".to_string());

    // Post initial run
    tracer.post().await?;

    // Create child run for LLM call
    let mut llm_run = tracer.create_child(
        "OpenAI Call",
        RunType::Llm,
        json!({"messages": [{"role": "user", "content": "What is Rust?"}]})
    );
    llm_run.post().await?;

    // ... execute LLM call ...
    let completion = "Rust is a systems programming language...";

    // Update run with outputs
    llm_run.end(json!({"completion": completion}));
    llm_run.patch().await?;

    // End parent run
    tracer.end(json!({"answer": completion}));
    tracer.patch().await?;

    Ok(())
}
use langsmith_rust::{trace_node, RunType};
use serde_json::json;

async fn llm_node(messages: Vec<String>) -> langsmith_rust::Result<String> {
    // Your node logic here
    let response = call_openai(&messages).await?;
    Ok(response)
}

// Wrap your node function with trace_node
let result = trace_node(
    "llm_node",
    RunType::Llm,
    vec!["Hello".to_string(), "How are you?".to_string()],
    llm_node
).await?;
How it works:
- Captures the inputs and posts a run to LangSmith
- On completion, captures the outputs and patches the run

This crate follows SOLID principles and uses several design patterns.
See ARCHITECTURE.md for detailed architecture documentation.
This crate is designed to integrate seamlessly with LangGraph-style node execution:
use langsmith_rust::{trace_node, RunType, TracerFactory};
use serde_json::json;

// In your graph node implementation
pub struct GraphNode {
    name: String,
    run_type: RunType,
}

impl GraphNode {
    pub async fn execute(&self, state: State) -> Result<State> {
        trace_node(
            &self.name,
            self.run_type.clone(),
            json!(state),
            |state| async move {
                // Your node logic
                process_state(state).await
            }
        ).await
    }
}
See INTEGRATION.md for detailed integration guide.
GraphTrace + RunScope are designed for LangGraph-like executors where each node iteration is a child run:
use langsmith_rust::GraphTrace;
use serde_json::json;
// Start root run
let trace = GraphTrace::start_root(json!({"messages": []}), Some("thread-123".to_string())).await?;
// Each node iteration
let scope = trace.start_node_iteration("chatbot", json!({"messages": []})).await?;
// ... execute node ...
scope.end_ok(json!({"messages": []})).await?;
// Close root run
trace.end_root(json!({"finish_reason": "stop"})).await?;
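The same calls extend naturally to a multi-node loop. The following is a minimal sketch of a LangGraph-style executor that assumes a hypothetical fixed node order; it uses only the GraphTrace and RunScope calls shown above, with placeholder inputs and outputs:

use langsmith_rust::GraphTrace;
use serde_json::json;

async fn run_graph() -> langsmith_rust::Result<()> {
    // One root run covers the whole graph execution
    let trace = GraphTrace::start_root(json!({"messages": []}), Some("thread-123".to_string())).await?;

    // Hypothetical node order; each iteration becomes a child run
    for node_name in ["chatbot", "tools", "chatbot"] {
        let scope = trace.start_node_iteration(node_name, json!({"messages": []})).await?;
        // ... execute the node and collect its outputs ...
        scope.end_ok(json!({"messages": []})).await?;
    }

    // Close the root run once the graph is done
    trace.end_root(json!({"finish_reason": "stop"})).await?;
    Ok(())
}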
Tracer: the main structure for creating and managing runs.
// Create a new tracer
let tracer = Tracer::new("Node Name", RunType::Llm, json!({"input": "..."}));
// Configure tracer
let tracer = tracer
.with_thread_id("thread-123".to_string())
.with_client(client);
// Create child run
let child = tracer.create_child("Child Node", RunType::Tool, json!({}));
// Post and patch
tracer.post().await?;
tracer.end(json!({"output": "..."}));
tracer.patch().await?;
TracerFactory: a factory for creating tracers with different configurations:
// Create root tracer
let root = TracerFactory::create_root("Root", RunType::Chain, json!({}));

// Create with thread context
let tracer = TracerFactory::create_with_thread(
    "Node",
    RunType::Llm,
    json!({}),
    "thread-123".to_string()
);

// Create for graph node
let tracer = TracerFactory::create_for_node(
    "Node",
    RunType::Llm,
    json!({}),
    Some(&parent_context)
);
Helper functions:
- trace_node(name, run_type, inputs, f) - Wrap an async function with tracing
- trace_node_sync(name, run_type, inputs, f) - Wrap a sync function with tracing

Run types:
- RunType::Chain - Chain execution (orchestrator)
- RunType::Llm - LLM call
- RunType::Tool - Tool execution
- RunType::Retriever - Retrieval operation
- RunType::Embedding - Embedding generation
- RunType::Prompt - Prompt execution
- RunType::Runnable - Generic runnable
- RunType::Custom(String) - Custom run type (see the sketch below)
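As a quick illustration of the Custom variant, here is a hedged sketch that follows the trace_node call shape shown earlier; the rerank_node function, its name, and its inputs are placeholders rather than part of the crate's API:

use langsmith_rust::{trace_node, RunType};
use serde_json::{json, Value};

// Hypothetical node: re-ranks retrieved documents
async fn rerank_node(inputs: Value) -> langsmith_rust::Result<Value> {
    // ... your reranking logic over `inputs` ...
    Ok(json!({"docs": ["b", "a"]}))
}

async fn traced_rerank() -> langsmith_rust::Result<Value> {
    // Same call shape as the trace_node examples above,
    // but labeled with a custom run type instead of a built-in one
    trace_node(
        "rerank_documents",
        RunType::Custom("reranker".to_string()),
        json!({"query": "What is Rust?", "docs": ["a", "b"]}),
        rerank_node,
    ).await
}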
See the examples/ directory for complete examples:
- test_llm_tracing.rs - Basic LLM tracing example
- decorator_example.rs - Using trace_node with multiple nodes
- observable_graph.rs - Observable graph nodes with the Observer pattern

Run examples with:
cargo run --example test_llm_tracing
Run all tests:
cargo test
Run specific test suites:
cargo test --test config_test
cargo test --test models_test
cargo test --test tracer_test
All tracing errors are logged to stderr but never break your application. If tracing fails, your code continues to execute normally. This ensures tracing is truly non-intrusive.
// Even if LangSmith is down, your code continues
let result = trace_node("node", RunType::Llm, input, my_function).await?;
// Your application continues normally
Built on serde_json for fast serialization.

Contributions are welcome! Please read our contributing guidelines before submitting PRs.
MIT