| Crates.io | oxify |
| lib.rs | oxify |
| version | 0.1.0 |
| created_at | 2025-12-06 04:31:19.899351+00 |
| updated_at | 2026-01-19 06:36:08.502092+00 |
| description | OxiFY - LLM Workflow Orchestration Platform with DAG-based pipelines |
| homepage | |
| repository | https://github.com/cool-japan/oxify |
| max_upload_size | |
| id | 1969613 |
| size | 182,302 |
OxiFY is a graph-based LLM workflow orchestration platform built in Rust, designed to compose complex AI applications using directed acyclic graphs (DAGs). This meta-crate provides unified access to all OxiFY components.
Add OxiFY to your Cargo.toml:
```toml
[dependencies]
oxify = "0.1"
tokio = { version = "1", features = ["full"] }
```
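The tokio dependency is required because the connector APIs are async; the LLM call example further down is an async fn.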
The prelude provides convenient access to commonly used types:
```rust
use oxify::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build a workflow
    let workflow = WorkflowBuilder::new("simple-chat")
        .description("A simple chat workflow")
        .build()?;

    // Create an LLM node configuration
    let llm_config = LlmConfig {
        provider: "openai".to_string(),
        model: "gpt-4".to_string(),
        system_prompt: Some("You are a helpful assistant.".to_string()),
        prompt_template: "{{user_input}}".to_string(),
        temperature: Some(0.7),
        max_tokens: Some(1000),
        extra_params: serde_json::Value::Null,
    };

    Ok(())
}
```
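The LlmConfig built here is the payload of an LLM node: the workflow example below attaches it to a node via NodeKind::Llm.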
You can also access individual modules directly:
```rust
use oxify::model::{Workflow, Node, NodeKind, Edge};
use oxify::engine::{Engine, ExecutionConfig};
use oxify::vector::{HnswIndex, HnswConfig, DistanceMetric};
use oxify::connect_llm::{LlmRequest, LlmResponse};
```
This meta-crate re-exports all OxiFY library crates:
| Module | Crate | Description |
|---|---|---|
| model | oxify-model | Domain models for workflows, nodes, edges, and execution |
| vector | oxify-vector | High-performance vector search with HNSW indexing |
| authn | oxify-authn | Authentication (OAuth2, API keys, JWT tokens) |
| authz | oxify-authz | ReBAC authorization (Zanzibar-style) |
| server | oxify-server | Axum-based HTTP server infrastructure |
| mcp | oxify-mcp | Model Context Protocol implementation |
| connect_llm | oxify-connect-llm | LLM provider integrations (OpenAI, Anthropic, Ollama) |
| connect_vector | oxify-connect-vector | Vector database integrations (Qdrant) |
| connect_vision | oxify-connect-vision | Vision/OCR integrations |
| storage | oxify-storage | Persistent storage layer |
| engine | oxify-engine | Workflow execution engine |
Architecturally, the crates stack into three layers:

```text
+-------------------------------------------------------------+
|                       OxiFY Platform                         |
+-------------------------------------------------------------+
|  API Layer (oxify-server)                                    |
|   +-> Authentication (oxify-authn)                           |
|   +-> Authorization (oxify-authz)                            |
|   +-> Middleware (CORS, logging, compression)                |
+-------------------------------------------------------------+
|  Workflow Engine (oxify-engine)                              |
|   +-> DAG Execution                                          |
|   +-> Node Processors (LLM, Vision, Retriever, Code)         |
|   +-> Plugin System                                          |
+-------------------------------------------------------------+
|  Connector Layer                                             |
|   +-> LLM Clients (oxify-connect-llm)                        |
|   +-> Vision/OCR (oxify-connect-vision)                      |
|   +-> Vector DB (oxify-connect-vector)                       |
|   +-> Vector Search (oxify-vector)                           |
+-------------------------------------------------------------+
```
Compose a workflow from typed nodes and edges, then validate the DAG:

```rust
use oxify::model::{Workflow, Node, NodeKind, Edge, LlmConfig};

fn create_chat_workflow() -> Workflow {
    let mut workflow = Workflow::new("chat-bot".to_string());

    // Create nodes
    let start = Node::new("Start".to_string(), NodeKind::Start);
    let llm = Node::new("LLM".to_string(), NodeKind::Llm(LlmConfig {
        provider: "openai".to_string(),
        model: "gpt-4".to_string(),
        system_prompt: Some("You are helpful.".to_string()),
        prompt_template: "{{input}}".to_string(),
        temperature: Some(0.7),
        max_tokens: Some(1000),
        extra_params: serde_json::Value::Null,
    }));
    let end = Node::new("End".to_string(), NodeKind::End);

    // Capture node IDs before moving the nodes into the workflow
    let start_id = start.id;
    let llm_id = llm.id;
    let end_id = end.id;

    workflow.add_node(start);
    workflow.add_node(llm);
    workflow.add_node(end);

    // Wire the DAG: Start -> LLM -> End
    workflow.add_edge(Edge::new(start_id, llm_id));
    workflow.add_edge(Edge::new(llm_id, end_id));

    workflow.validate().expect("Invalid workflow");
    workflow
}
```
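To actually run a built workflow, you hand it to the execution engine. The sketch below is an assumption, not the confirmed API: Engine and ExecutionConfig are real re-exports (see the imports earlier), but the Engine::new constructor and execute signature shown here are guesses; consult the oxify-engine docs for the real interface.

```rust
use oxify::engine::{Engine, ExecutionConfig};

// Sketch only: `Engine::new` and `execute` as written here are assumed
// signatures, not confirmed oxify-engine API.
async fn run_chat() -> Result<(), Box<dyn std::error::Error>> {
    let workflow = create_chat_workflow();
    let engine = Engine::new(ExecutionConfig::default()); // assumed constructor
    let output = engine
        .execute(&workflow, serde_json::json!({ "input": "Hello!" })) // assumed signature
        .await?;
    println!("{output:?}");
    Ok(())
}
```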
Build an in-memory HNSW index and run an approximate nearest-neighbor search:

```rust
use oxify::vector::{HnswIndex, HnswConfig, DistanceMetric, SearchResult};

fn vector_search_example() {
    // Create HNSW index (384-dimensional, cosine distance)
    let config = HnswConfig {
        m: 16,                // max neighbors per node in the graph
        ef_construction: 200, // candidate-list size while building (higher = better recall, slower build)
        ef_search: 50,        // candidate-list size at query time
        distance_metric: DistanceMetric::Cosine,
        ..Default::default()
    };
    let mut index = HnswIndex::new(384, config);

    // Add vectors
    let vectors = vec![
        vec![0.1, 0.2, 0.3], // ... 384 dimensions
        vec![0.4, 0.5, 0.6],
    ];
    for (i, vec) in vectors.iter().enumerate() {
        index.insert(i as u64, vec.clone());
    }

    // Search for the 10 nearest neighbors of the query vector
    let query = vec![0.15, 0.25, 0.35]; // ... 384 dimensions
    let results = index.search(&query, 10);
}
```
Call an LLM provider directly through the connector layer:

```rust
use oxify::connect_llm::{LlmRequest, LlmResponse, OpenAIProvider, LlmProvider};

async fn llm_example() -> Result<(), Box<dyn std::error::Error>> {
    let provider = OpenAIProvider::new("your-api-key".to_string());

    let request = LlmRequest {
        model: "gpt-4".to_string(),
        messages: vec![
            // ... messages
        ],
        temperature: Some(0.7),
        max_tokens: Some(1000),
        ..Default::default()
    };

    let response = provider.complete(&request).await?;
    println!("Response: {}", response.content);
    Ok(())
}
```
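The connect_llm module also ships Anthropic and Ollama integrations (see the module table above); their provider types presumably follow the same LlmRequest/LlmResponse shape, but check the crate docs for the exact names.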
OxiFY supports 16+ workflow node types:
| Category | Node Types |
|---|---|
| Core | Start, End |
| LLM | GPT-3.5/4, Claude 3/3.5, Ollama |
| Vector | Qdrant, In-memory with hybrid search |
| Vision | Tesseract, Surya, PaddleOCR |
| Control | IfElse, Switch, Conditional |
| Loops | ForEach, While, Repeat |
| Error Handling | Try-Catch-Finally |
| Advanced | Sub-workflow, Code execution, HTTP Tool |
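The Core and LLM rows map to the NodeKind::Start, NodeKind::End, and NodeKind::Llm variants used in the workflow example above; the remaining categories presumably have their own NodeKind variants; consult the oxify-model docs for the full enum.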
If you only need specific functionality, you can depend on individual crates:
```toml
[dependencies]
# Just the workflow model
oxify-model = "0.1"

# Just vector search
oxify-vector = "0.1"

# Just LLM connections
oxify-connect-llm = "0.1"
```
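When depending on a sub-crate directly, its types live at the crate root rather than behind the oxify:: facade. A minimal sketch, assuming oxify-model exports the same names that the meta-crate re-exports under oxify::model:

```rust
// Assumes the standalone crate exposes the same types the `oxify` facade
// re-exports under `oxify::model`.
use oxify_model::{Node, NodeKind, Workflow};

fn main() {
    // Same constructors as in the meta-crate examples above
    let mut workflow = Workflow::new("standalone".to_string());
    workflow.add_node(Node::new("Start".to_string(), NodeKind::Start));
}
```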
Binary applications are distributed separately from this library meta-crate.
Version 0.1.0 - Production-ready!
OxiFY is part of the COOLJAPAN ecosystem.
Apache-2.0 - See LICENSE file for details.
COOLJAPAN OU (Team Kitasan)