| Crates.io | libinfer |
| lib.rs | libinfer |
| version | 0.0.3 |
| created_at | 2025-06-09 18:49:18.026909+00 |
| updated_at | 2025-08-20 17:17:19.34852+00 |
| description | Rust interface to TensorRT for high-performance GPU inference |
| homepage | |
| repository | https://github.com/saronic-technologies/libinfer |
| max_upload_size | |
| id | 1706289 |
| size | 95,758 |
# libinfer

This library provides a simple Rust interface to a TensorRT engine using the cxx crate.
libinfer allows for seamless integration of TensorRT models into Rust applications with minimal overhead. The library handles the complex C++ interaction with TensorRT while exposing a simple, idiomatic Rust API.
To use this library, you'll need:

- `TENSORRT_LIBRARIES`: path to the TensorRT libraries
- `CUDA_LIBRARIES`: path to the CUDA libraries
- `CUDA_INCLUDE_DIRS`: path to the CUDA include directories

Add to your `Cargo.toml`:
```toml
[dependencies]
libinfer = "0.0.3"
```
The goal of the API is to keep as much processing in Rust land as possible. Here is a sample usage:
```rust
let options = Options {
    path: "yolov8n.engine".into(),
    device_index: 0,
};
let mut engine = Engine::new(&options).unwrap();

// Get the input dimensions of the engine as [Channels, Height, Width].
let dims = engine.get_input_dims();

// Construct a dummy input (uint8 or float32 depending on the model).
let input_size = dims.iter().fold(1, |acc, &e| acc * e as usize);
let input = InputTensor {
    name: "input".to_string(),
    data: vec![0u8; input_size],
};

// Run inference.
let output = engine.pin_mut().infer(&input).unwrap();

// Postprocess the output according to your model's output format.
// ...
```
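What that postprocessing looks like depends entirely on the model. As a rough sketch (not part of libinfer's API), if the output buffer holds raw little-endian float32 scores, a classification result could be decoded with a helper like the hypothetical `argmax_f32` below:

```rust
// Hypothetical helper: interpret a raw output buffer as little-endian f32
// scores and return the index of the highest one. How you obtain the raw
// bytes depends on libinfer's output type.
fn argmax_f32(bytes: &[u8]) -> Option<usize> {
    let scores: Vec<f32> = bytes
        .chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect();
    scores
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap_or(std::cmp::Ordering::Equal))
        .map(|(i, _)| i)
}
```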
This library is intended to be used with pre-built TensorRT engines created for the target device with the TensorRT Python API or the `trtexec` CLI tool (for example, `trtexec --onnx=model.onnx --saveEngine=model.engine`).
Errors are returned through the standard `Result` type, and logging can be controlled with the `RUST_LOG` environment variable.

Check the `examples/` directory for working examples:
- `basic.rs`: simple inference example
- `benchmark.rs`: performance benchmarking with various batch sizes
- `dynamic.rs`: working with dynamic batch sizes
- `functional_test.rs`: testing the correctness of model outputs

Run an example with:
```sh
cargo run --example basic -- --path /path/to/model.engine
```
See the documentation in each example file for specific requirements.
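For a sense of what the benchmark measures, a minimal timing loop over the API shown above might look like the following sketch (the iteration count is arbitrary, `engine` and `input` are the values from the usage sample, and `benchmark.rs` remains the authoritative version):

```rust
use std::time::Instant;

// Minimal latency measurement: run inference repeatedly and average.
let iterations = 100;
let start = Instant::now();
for _ in 0..iterations {
    let _output = engine.pin_mut().infer(&input).unwrap();
}
let avg_ms = start.elapsed().as_secs_f64() * 1000.0 / iterations as f64;
println!("average latency: {avg_ms:.3} ms");
```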
The engine is `Send` but not `Sync`, so it can be moved to another thread but not shared between threads.

Much of the C++ code is based on the tensorrt-cpp-api repo.
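Because the engine can be moved across threads but not shared, one workable pattern is to give it to a dedicated worker thread and talk to it over channels. A minimal sketch, assuming the `Engine`, `Options`, and `InputTensor` types from the sample above and that the output type is `Send`:

```rust
use std::sync::mpsc;
use std::thread;

// The engine is Send, so it can be moved into a worker thread; it is not
// Sync, so we communicate through channels instead of sharing it.
let (input_tx, input_rx) = mpsc::channel::<InputTensor>();
let (output_tx, output_rx) = mpsc::channel();

let mut engine = Engine::new(&options).unwrap();
let worker = thread::spawn(move || {
    // Exits once every sender has been dropped.
    while let Ok(input) = input_rx.recv() {
        let output = engine.pin_mut().infer(&input).unwrap();
        output_tx.send(output).unwrap();
    }
});

input_tx.send(input).unwrap();
let output = output_rx.recv().unwrap();
drop(input_tx); // close the channel so the worker loop ends
worker.join().unwrap();
```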