Crates.io | orkhon |
lib.rs | orkhon |
version | 0.2.3 |
source | src |
created_at | 2019-05-27 12:36:51.338218 |
updated_at | 2021-02-01 15:23:31.597984 |
description | Machine Learning Inference Framework and Server Runtime |
homepage | https://github.com/vertexclique/orkhon |
repository | https://github.com/vertexclique/orkhon |
max_upload_size | |
id | 137332 |
size | 109,200 |
Orkhon is a Rust framework for Machine Learning that lets you run inference/prediction code written in Python, execute frozen models, and process unseen data. It is mainly focused on serving models and processing unseen data in a performant manner. Instead of using Python directly and running into scalability problems on servers, this framework tries to solve them with its built-in async API.
You can include Orkhon in your project with:
[dependencies]
orkhon = "0.2"
You will need:
- For the pymodel feature, Python dev dependencies should be installed and a proper Python runtime must be available to use Orkhon with your project.
- The PYTHONHOME environment variable should point to your Python installation.
- For the Python API contract, take a look at the Project Documentation.
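Feature flags choose which backends get compiled in. A minimal sketch of a Cargo.toml entry, assuming you want the onnxmodel feature used below (check the crate documentation for the full feature list):
[dependencies]
orkhon = { version = "0.2", features = ["onnxmodel"] }
The following example issues an asynchronous request against a frozen Tensorflow model: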
use orkhon::prelude::*;
use orkhon::tcore::prelude::*;
use orkhon::ttensor::prelude::*;
use rand::*;
use std::path::PathBuf;

// Build a shareable Orkhon instance with a Tensorflow model and a fixed input shape.
let o = Orkhon::new()
    .config(
        OrkhonConfig::new()
            .with_input_fact_shape(InferenceFact::dt_shape(f32::datum_type(), tvec![10, 100])),
    )
    .tensorflow(
        "model_which_will_be_tested",
        PathBuf::from("tests/protobuf/manual_input_infer/my_model.pb"),
    )
    .shareable();

// Generate 1,000 random f32 values and reshape them to the declared 10x100 input shape.
let mut rng = thread_rng();
let vals: Vec<_> = (0..1000).map(|_| rng.gen::<f32>()).collect();
let input = tract_ndarray::arr1(&vals).into_shape((10, 100)).unwrap();

// Acquire a handle and issue an asynchronous request against the named model.
let o = o.get();
let handle = async move {
    let processor = o.tensorflow_request_async(
        "model_which_will_be_tested",
        ORequest::with_body(TFRequest::new().body(input.into())),
    );

    processor.await
};
// Drive the future to completion; `block_on` is assumed to come from an
// executor such as `futures::executor::block_on` if not re-exported by the prelude.
let resp = block_on(handle).unwrap();
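Since the request is an ordinary future, it can be awaited from any async runtime; shareable() builds an instance whose get() handle can be moved into async tasks, which is what the async move block above does. block_on here stands in for whichever executor your application already runs.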
This example needs the onnxmodel feature enabled.
use orkhon::prelude::*;
use orkhon::tcore::prelude::*;
use orkhon::ttensor::prelude::*;
use rand::*;
use std::path::PathBuf;

// Build an Orkhon instance with an ONNX model and the same fixed input shape.
let o = Orkhon::new()
    .config(
        OrkhonConfig::new()
            .with_input_fact_shape(InferenceFact::dt_shape(f32::datum_type(), tvec![10, 100])),
    )
    .onnx(
        "model_which_will_be_tested",
        PathBuf::from("tests/protobuf/onnx_model/example.onnx"),
    )
    .build();

// Generate 1,000 random f32 values and reshape them to the declared 10x100 input shape.
let mut rng = thread_rng();
let vals: Vec<_> = (0..1000).map(|_| rng.gen::<f32>()).collect();
let input = tract_ndarray::arr1(&vals).into_shape((10, 100)).unwrap();

// Issue a synchronous request against the named ONNX model.
let resp = o
    .onnx_request(
        "model_which_will_be_tested",
        ORequest::with_body(ONNXRequest::new().body(input.into())),
    )
    .unwrap();
assert_eq!(resp.body.output.len(), 1);
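Unlike the Tensorflow example above, onnx_request is a blocking call, so no executor is needed: the response is returned directly, and its body exposes the model's output tensors (the assertion checks that this model produces a single output).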
Orkhon is licensed under the MIT License.
Official documentation is hosted on docs.rs.
Please head to our Gitter or use StackOverflow. We use Gitter for development discussions. Also, please don't hesitate to open issues on GitHub to ask for features, report bugs, comment on design, and more! More interaction and more ideas are better!
All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
A detailed overview on how to contribute can be found in the CONTRIBUTING guide on GitHub.