vex-llm

LLM provider integrations for the VEX Protocol.

Supported Providers

  • OpenAI - GPT-4, GPT-3.5, etc.
  • Ollama - Local LLM inference
  • DeepSeek - DeepSeek models
  • Mistral - Mistral AI models
  • Mock - Testing provider (see the sketch after this list)
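
Every provider implements the shared LlmProvider trait, so application code can stay generic over the backend and swap in the Mock provider under test. A minimal sketch, assuming a MockProvider type with a no-argument constructor (the type name and constructor are assumptions; only the complete call is taken from the Quick Start below):

use vex_llm::{LlmProvider, MockProvider};

// Generic over any backend: Mock in tests, Ollama or OpenAI in production.
async fn greet(provider: &impl LlmProvider) -> Result<String, Box<dyn std::error::Error>> {
    // `complete` sends one prompt and returns the model's text response.
    Ok(provider.complete("Say hello").await?)
}

#[tokio::test]
async fn greet_works_without_a_network() -> Result<(), Box<dyn std::error::Error>> {
    // `MockProvider::new` is an assumed constructor; the mock needs no network access.
    let provider = MockProvider::new();
    let _reply = greet(&provider).await?;
    Ok(())
}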

Installation

[dependencies]
vex-llm = "0.1"

# Or, to enable OpenAI support:
vex-llm = { version = "0.1", features = ["openai"] }
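
Enabling the openai feature compiles in the OpenAI backend. A minimal sketch of using it, assuming the crate exposes an OpenAiProvider type whose constructor takes an API key; both the type name and the constructor signature are assumptions, so check the crate docs for the real API:

use vex_llm::{LlmProvider, OpenAiProvider};

async fn openai_hello() -> Result<String, Box<dyn std::error::Error>> {
    // Hypothetical constructor taking an API key; the real signature may differ.
    let provider = OpenAiProvider::new(std::env::var("OPENAI_API_KEY")?);
    // `complete` is the same trait method used in the Quick Start below.
    Ok(provider.complete("Hello, world!").await?)
}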

Quick Start

use vex_llm::{LlmProvider, OllamaProvider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to a locally running Ollama instance on its default port.
    let provider = OllamaProvider::new("http://localhost:11434");
    // Send a single prompt and await the model's text response.
    let response = provider.complete("Hello, world!").await?;
    println!("{}", response);
    Ok(())
}
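
The ? in the Quick Start bubbles any provider error up to main. A minimal sketch of handling failure explicitly instead, using only the calls shown above (the error type implements Display, as implied by the ? conversion to Box<dyn Error>):

use vex_llm::{LlmProvider, OllamaProvider};

#[tokio::main]
async fn main() {
    // Same endpoint as the Quick Start: Ollama's default local port.
    let provider = OllamaProvider::new("http://localhost:11434");
    // Match on the result so a missing Ollama daemon produces a readable
    // message instead of an early exit via `?`.
    match provider.complete("Hello, world!").await {
        Ok(response) => println!("{}", response),
        Err(e) => eprintln!("completion failed: {}", e),
    }
}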

License

MIT License - see LICENSE for details.
