| Crates.io | ort |
| lib.rs | ort |
| version | 2.0.0-rc.11 |
| created_at | 2022-11-26 21:59:51.388503+00 |
| updated_at | 2026-01-07 03:47:23.523151+00 |
| description | A safe Rust wrapper for ONNX Runtime 1.23 - Optimize and accelerate machine learning inference & training |
| homepage | https://ort.pyke.io/ |
| repository | https://github.com/pykeio/ort |
| max_upload_size | |
| id | 723486 |
| size | 639,928 |
ort is a Rust interface for performing hardware-accelerated inference & training on machine learning models in the Open Neural Network Exchange (ONNX) format.
Based on the now-inactive onnxruntime-rs crate, ort is primarily a wrapper for Microsoft's ONNX Runtime library, but offers support for other pure-Rust runtimes.
ort with ONNX Runtime is super quick - and it supports almost any hardware accelerator you can think of. Even so, it's light enough to run on your users' devices.
When you need to deploy a PyTorch/TensorFlow/Keras/scikit-learn/PaddlePaddle model either on-device or in the datacenter, ort has you covered.
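A minimal sketch of running inference with ort, assuming `ort = "2.0.0-rc.11"` in Cargo.toml (the version shown in the metadata above). The model path and the `"input"`/`"output"` tensor names are placeholders that depend on your model, and the API is shown as of the 2.0 release candidates:

```rust
use ort::session::{builder::GraphOptimizationLevel, Session};
use ort::value::Tensor;

fn main() -> ort::Result<()> {
    // Load an ONNX model from disk; "model.onnx" is a placeholder path.
    let session = Session::builder()?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .with_intra_threads(4)?
        .commit_from_file("model.onnx")?;

    // Build a dummy f32 input tensor; the [1, 4] shape and the tensor
    // names below are assumptions for illustration, not real model I/O.
    let input = Tensor::from_array(([1usize, 4], vec![1.0_f32, 2.0, 3.0, 4.0]))?;

    // Run inference and extract the named output as an f32 slice.
    let outputs = session.run(ort::inputs!["input" => input])?;
    let (shape, data) = outputs["output"].try_extract_tensor::<f32>()?;
    println!("output shape {shape:?}: {data:?}");
    Ok(())
}
```

Hardware acceleration is opted into the same way: execution providers (CUDA, TensorRT, CoreML, DirectML, etc.) are enabled on the session builder before committing the model.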
Projects using ort (open a PR to add your project here):
- Uses ort to detect, OCR, and inpaint manga pages.
- Uses ort for local AI deployment in biodiversity conservation efforts.
- Uses ort for content type detection.
- Uses ort to deliver high-performance ONNX Runtime inference for text embedding models.
- sbv2-api is a fast implementation of Style-BERT-VITS2 text-to-speech using ort.
- Uses ort to detect animals, humans, and vehicles in trail camera imagery.
- Uses ort for efficient inference.
- Uses ort for reliable, fast ONNX inference of PaddleOCR models on desktop and WASM platforms.
- Uses ort to power their AI proxy for semantic search applications.
- Uses ort to provide embedding model inference inside LMDB.
- Uses ort for accelerated transformer model inference at the edge.
- FastEmbed-rs uses ort for generating vector embeddings and reranking locally.
- Uses ort for safe ONNX Runtime bindings in Elixir.
- Uses ort for fast, on-device real-time dictation with NVIDIA Parakeet and Silero VAD.