Crates.io | tvm
lib.rs | tvm
version | 0.1.1-alpha
source | src
created_at | 2018-05-18 18:38:57.321728
updated_at | 2021-02-23 18:49:06.318744
description | Rust frontend support for TVM
homepage | https://github.com/apache/tvm
repository | https://github.com/apache/tvm
id | 66030
size | 149,775
This crate provides an idiomatic Rust API for TVM.
The code works on stable Rust and is tested against rustc 1.47.
You can find the API Documentation here.
The goal of this crate is to provide bindings to both the TVM compiler and runtime APIs. First, train your deep learning model using any major framework such as PyTorch, Apache MXNet, or TensorFlow. Then use TVM to build and deploy optimized model artifacts on supported devices such as CPUs, GPUs, OpenCL, and specialized accelerators (a minimal deployment sketch appears at the end of this section).
The Rust bindings are composed of a few crates.
These crates were recently refactored and reflect a very different philosophy from the previous bindings, as well as greatly expanded coverage of the TVM API, including exposure of the compiler internals.
They are still very much in development and should not be considered stable, but contributions and usage are welcome and encouraged. If you want to discuss design issues, use our Discourse forum; for bug reports, use our GitHub repository.
Please follow the TVM install instructions, export TVM_HOME=/path/to/tvm, and add libtvm_runtime to your LD_LIBRARY_PATH.
Note: to run the end-to-end examples and tests, tvm and topi need to be added to your PYTHONPATH; this is handled automatically when TVM is installed into an Anaconda environment.
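With the environment configured, a compiled artifact can be loaded and executed through the graph runtime exposed by this crate. The sketch below is illustrative only: it assumes a module built ahead of time with TVM's Python API, the file names (deploy_graph.json, deploy_lib.so, deploy_param.params), the input name "data", and the shapes are hypothetical and model-specific, and the GraphRt calls follow the crate's resnet example, so exact names may differ between versions.

```rust
use std::{fs, path::Path};

use tvm::runtime::graph_rt::GraphRt;
use tvm::*;

// NOTE: illustrative sketch; API names follow the crate's resnet example
// and may differ between versions.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Artifacts assumed to have been produced by a prior Python build step
    // (hypothetical file names).
    let graph = fs::read_to_string("deploy_graph.json")?;
    let lib = Module::load(&Path::new("deploy_lib.so"))?;
    let params = fs::read("deploy_param.params")?;

    // Assemble the graph runtime on the CPU from the graph JSON, the
    // compiled library, and the serialized parameters.
    let mut rt = GraphRt::create_from_parts(&graph, lib, Context::cpu(0))?;
    rt.load_params(&params)?;

    // A placeholder 1x3x224x224 float32 input tensor named "data"
    // (both the name and the shape are model-specific; contents are
    // left uninitialized here).
    let input = NDArray::empty(&[1, 3, 224, 224], Context::cpu(0), DataType::float(32, 1));
    rt.set_input("data", input)?;

    rt.run()?;

    // Copy the first output into a pre-allocated NDArray.
    let output = NDArray::empty(&[1, 1000], Context::cpu(0), DataType::float(32, 1));
    rt.get_output_into(0, output.clone())?;

    println!("inference finished");
    Ok(())
}
```

The artifacts themselves are expected to be produced ahead of time with TVM's Python build flow (for example via relay.build), which is outside the scope of the Rust crate.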