| Crates.io | product-os-models |
| lib.rs | product-os-models |
| version | 0.0.4 |
| created_at | 2025-12-20 06:29:54.077769+00 |
| updated_at | 2025-12-30 02:00:19.803229+00 |
| description | Product OS : Models provides direct implementation using Candle or indirect implementation using Ollama of different models, focused on LLMs |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1996068 |
| size | 249,930 |
Product OS : Models provides direct implementation of different models using Candle, or indirect implementation using Ollama, vLLM, SGLang, and MAX (Modular MAX), focused at the moment on LLMs
Product OS : Models provides capabilities and tooling for running different models, including natural language models
Use the Rust package manager cargo to install Product OS : Models.
cargo add product-os-models
or add Product OS : Models to your Cargo.toml [dependencies] section.
product-os-models = { version = "0.0.4", features = [], default-features = true, optional = false }
Product OS : Models supports a number of features leveraging existing Rust libraries, including:
// Feature samples TODO
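Running a model through a local Ollama server (in the examples below, model_definition is a ModelDefinition value; see the ModelDefinition example further down):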
use product_os_models::{parameters::Parameters, model_runner::ModelRunner};
use product_os_models::ollama::Ollama;
let mut params = Parameters::new(model_definition);
params.root_url = Some("http://localhost:11434".to_string());
params.stream_response = true;
let ollama = Ollama::new(params);
let mut runner = ModelRunner::Ollama(ollama);
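Running a model through a vLLM server via its OpenAI-compatible completions endpoint: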
use product_os_models::{parameters::Parameters, model_runner::ModelRunner};
use product_os_models::vllm::VLlm;
let mut params = Parameters::new(model_definition);
params.root_url = Some("http://localhost:8000".to_string());
params.api_endpoint = Some("/v1/completions".to_string());
params.stream_response = true;
let vllm = VLlm::new(params);
let mut runner = ModelRunner::VLlm(vllm);
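Running a model through an SGLang server via its OpenAI-compatible completions endpoint: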
use product_os_models::{parameters::Parameters, model_runner::ModelRunner};
use product_os_models::sglang::SGLang;
let mut params = Parameters::new(model_definition);
params.root_url = Some("http://localhost:30000".to_string());
params.api_endpoint = Some("/v1/completions".to_string());
params.stream_response = true;
let sglang = SGLang::new(params);
let mut runner = ModelRunner::SGLang(sglang);
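Running a model through a MAX (Modular) server via its OpenAI-compatible completions endpoint: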
use product_os_models::{parameters::Parameters, model_runner::ModelRunner};
use product_os_models::max::Max;
let mut params = Parameters::new(model_definition);
params.root_url = Some("http://localhost:8000".to_string());
params.api_endpoint = Some("/v1/completions".to_string());
params.stream_response = true;
let max = Max::new(params);
let mut runner = ModelRunner::Max(max);
Default root URLs for each backend:
| Ollama | http://localhost:11434 |
| vLLM | http://localhost:8000 |
| SGLang | http://localhost:30000 |
| MAX | http://localhost:8000 |
All API-based implementations (Ollama, vLLM, SGLang, MAX) use the external_model_name field in ModelDefinition to specify the model identifier. This unified approach works across all backends:
use product_os_models::model::ModelDefinition;
let model_def = ModelDefinition {
name: "my-model".to_string(),
external_model_name: "llama3.1:8b".to_string(), // Works with Ollama, vLLM, SGLang, MAX
// ... other fields
};
You can override the model name at runtime using Parameters::api_model_name:
params.api_model_name = Some("different-model-name".to_string());
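Putting the pieces together, a minimal end-to-end sketch for an Ollama-backed runner might look like the following. This is illustrative rather than confirmed API: it assumes ModelDefinition implements Default so the remaining fields can be filled with struct update syntax, and the model names shown are placeholders.
use product_os_models::model::ModelDefinition;
use product_os_models::parameters::Parameters;
use product_os_models::model_runner::ModelRunner;
use product_os_models::ollama::Ollama;
// Describe the model; external_model_name is the identifier the backend (here Ollama) knows it by.
let model_definition = ModelDefinition {
    name: "my-model".to_string(),
    external_model_name: "llama3.1:8b".to_string(),
    ..Default::default() // assumption: ModelDefinition implements Default
};
// Point the parameters at a local Ollama server and optionally override the model name at runtime.
let mut params = Parameters::new(model_definition);
params.root_url = Some("http://localhost:11434".to_string());
params.api_model_name = Some("llama3.1:8b-instruct-q4_K_M".to_string()); // illustrative override
params.stream_response = false;
// Wrap the backend in a ModelRunner for a uniform interface across backends.
let runner = ModelRunner::Ollama(Ollama::new(params));
The same pattern applies to the vLLM, SGLang, and MAX variants; only the root_url, api_endpoint, and the ModelRunner variant change.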
Contributions are not currently open, but the project will be available in a public repository soon.