| Crates.io | ortn |
| lib.rs | ortn |
| version | 1.19.2 |
| created_at | 2024-09-27 00:09:11.556793+00 |
| updated_at | 2024-09-27 00:09:11.556793+00 |
| description | Rust bindings for ONNXRuntime |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1388135 |
| size | 79,173 |
# ortn

Yet another minimal Rust binding for the onnxruntime c_api, inspired by onnxruntime-rs.

- the c_api is wrapped, just enough to run an onnx model
- `'a` lifetime generics ... simpler Rust compared to using the onnxruntime c_api directly
- latest onnxruntime version on different platforms, with NO feature flags introduced by supporting multiple onnxruntime versions
- dynamic libraries (onnxruntime.dll, libonnxruntime.[so|dylib]) supported
- ndarray is used to handle input/output tensors
| OS | onnxruntime version | Arch | CPU | CUDA | TensorRT | CANN |
|---|---|---|---|---|---|---|
| mac | 1.19.2 | aarch64 | ✅ | | | |
| mac | 1.19.2 | intel64 | ✅ | | | |
| linux | 1.19.2 | intel64 | ✅ | ✅ | ✅ | TODO |
| windows | TODO | intel64 | TODO | TODO | TODO | |
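To try it out, add ortn together with the helper crates imported by the example below. A minimal sketch, assuming the crates.io package names match the import paths (`ndarray-rand` for `ndarray_rand`, `tracing-subscriber` for `tracing_subscriber`):

```sh
# hedged: pick versions as needed; ortn 1.19.2 tracks onnxruntime 1.19.2
cargo add ortn ndarray ndarray-rand tracing tracing-subscriber
```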
Please download onnxruntime first and unzip it, or build it from source.

Before anything else, set environment variables so that ortn can find the headers and libraries it needs (a shell sketch follows the list):

- `ORT_LIB_DIR`: where `libonnxruntime.[so|dylib]` or `onnxruntime.dll` is located
- `ORT_INC_DIR`: where `onnxruntime/onnxruntime_c_api.h` is located
- `DYLD_LIBRARY_PATH`: where `libonnxruntime.dylib` is located (macOS)
- `LD_LIBRARY_PATH`: where `libonnxruntime.so` is located (Linux)
- `PATH`: where `onnxruntime.dll` is located (Windows)
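For example, on Linux with a prebuilt onnxruntime unpacked to a hypothetical `/opt/onnxruntime` (the path is an assumption, adjust it to your layout):

```sh
# hypothetical install location of the unpacked onnxruntime release
export ORT_LIB_DIR=/opt/onnxruntime/lib
export ORT_INC_DIR=/opt/onnxruntime/include
# let the dynamic loader find libonnxruntime.so at run time
# (DYLD_LIBRARY_PATH on macOS, PATH on Windows)
export LD_LIBRARY_PATH="$ORT_LIB_DIR:$LD_LIBRARY_PATH"
```

With the environment in place, the snippet below builds a session from the bundled MNIST model, feeds it a random input tensor, and copies the first output out as an owned ndarray: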
```rust
use ndarray::Array4;
use ndarray_rand::{rand_distr::Uniform, RandomExt};
use ortn::prelude::*;

std::env::set_var("RUST_LOG", "trace");
let _ = tracing_subscriber::fmt::try_init();

let output = Session::builder()
    // create env and use it as session's env
    .with_env(
        Environment::builder()
            .with_name("mnist")
            .with_level(OrtLoggingLevel::ORT_LOGGING_LEVEL_VERBOSE)
            .build()?,
    )
    // disable all optimization
    .with_graph_optimization_level(GraphOptimizationLevel::ORT_DISABLE_ALL)
    // set session intra threads to 4
    .with_intra_threads(4)
    // build model
    .build(include_bytes!("../models/mnist.onnx"))?
    // run model
    .run([
        // convert input tensor to ValueBorrowed
        ValueBorrowed::try_from(
            // create random input tensor
            Array4::random([1, 1, 28, 28], Uniform::new(0., 1.)).view(),
        )?,
    ])?
    // output is a vector, we need the first result
    .into_iter()
    .next()
    .unwrap()
    // view output as an f32 array
    .view::<f32>()?
    // the output is owned by the session, copy it out as an owned tensor/ndarray
    .to_owned();

tracing::info!(?output);

Result::Ok(())
```
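Assuming the snippet forms the body of `main` in a binary crate (with a `Result` return type so `?` works) and `models/mnist.onnx` exists relative to the source file, it runs like any other cargo project; `RUST_LOG=trace` is already set in the code, so the verbose onnxruntime logging shows up (for an MNIST classifier the logged output is typically a `[1, 10]` array of class scores):

```sh
# LD_LIBRARY_PATH / DYLD_LIBRARY_PATH / PATH from above must still point at the onnxruntime shared library
cargo run
```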
In case the bindings need to be updated, just clone the repository and rebuild with the bindgen feature enabled:

```sh
git clone https://github.com/yexiangyu/ortn
export ORT_LIB_DIR=/path/to/onnxruntime/lib
export ORT_INC_DIR=/path/to/onnxruntime/include
cargo build --features bindgen
```
TODO:

- more data types: f16, i64 ...
- rocm and cann
- training api