| field | value |
| --- | --- |
| Crates.io | visual-search |
| lib.rs | visual-search |
| version | 0.1.2 |
| created_at | 2021-06-18 11:44:57.166325 |
| updated_at | 2022-02-18 15:25:58.369058 |
| description | Visual search engine for images using Deep Learning models to extract features |
| homepage | https://github.com/recoai/visual-search |
| size | 4,590,644 |
A Rust web application for visual search. It is a component of RecoAI, a fully featured engine for e-commerce recommendation systems.
Visual Search in Rust is a single-responsibility server/library that answers similar-image queries. It works by extracting features with a selected deep learning model and indexing them with an approximate nearest neighbors algorithm.
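To make the query side of this pipeline concrete, here is a minimal sketch in plain Rust. It assumes feature vectors have already been extracted and, for brevity, uses an exhaustive cosine-similarity scan instead of a real approximate nearest neighbors index; the names (`cosine_similarity`, `most_similar`) and the toy embeddings are illustrative, not part of this crate's API.

```rust
/// Cosine similarity between two feature vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

/// Exhaustive nearest-neighbour lookup over an in-memory index.
/// A production system would use an ANN structure instead of a scan.
fn most_similar<'a>(index: &'a [(String, Vec<f32>)], query: &[f32]) -> Option<&'a str> {
    index
        .iter()
        .max_by(|(_, a), (_, b)| {
            cosine_similarity(a, query)
                .partial_cmp(&cosine_similarity(b, query))
                .unwrap()
        })
        .map(|(id, _)| id.as_str())
}

fn main() {
    // Toy "embeddings"; real ones would come from the ONNX model.
    let index = vec![
        ("red-dress.jpg".to_string(), vec![0.9, 0.1, 0.0]),
        ("blue-shoe.jpg".to_string(), vec![0.1, 0.8, 0.3]),
    ];
    let query = vec![0.85, 0.15, 0.05];
    println!("{}", most_similar(&index, &query).unwrap()); // prints "red-dress.jpg"
}
```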
Below are examples of search results using a dataset of e-commerce images. Each collection has about 500-600 images.
See the example of how to use the SDK.
`visual-search` wraps the ONNX format in a model configuration structure, shown below. As far as we know, this structure should be able to define any model from the ONNX repository. From the model we extract image features and index them in a predefined collection of images.
```rust
let model_config = ModelConfig {
    model_name: "SqueezeNet".into(),
    model_url: "https://github.com/onnx/models/raw/master/vision/classification/squeezenet/model/squeezenet1.1-7.onnx".into(),
    image_transformation: TransformationPipeline {
        steps: vec![
            ResizeRGBImageAspectRatio { image_size: ImageSize { width: 224, height: 224 }, scale: 87.5, filter: FilterType::Nearest }.into(),
            CenterCrop { crop_size: ImageSize { width: 224, height: 224 } }.into(),
            ToArray {}.into(),
            Normalization { sub: [0.485, 0.456, 0.406], div: [0.229, 0.224, 0.225], zeroone: true }.into(),
            ToTensor {}.into(),
        ],
    },
    image_size: ImageSize { width: 224, height: 224 },
    layer_name: Some("squeezenet0_pool3_fwd".to_string()),
    channels: Channels::CWH,
};
```
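The `Normalization` step in the pipeline above uses the standard ImageNet statistics. As a sketch of the arithmetic, assuming `zeroone: true` means pixel values are first rescaled from 0-255 into [0, 1] before the per-channel mean is subtracted and the result divided by the per-channel standard deviation (the `normalize_pixel` helper below is hypothetical, not part of the crate):

```rust
/// Normalize one RGB pixel the way the `Normalization` step is assumed to:
/// rescale 0-255 into [0, 1], subtract the mean, divide by the std-dev.
fn normalize_pixel(rgb: [f32; 3]) -> [f32; 3] {
    const SUB: [f32; 3] = [0.485, 0.456, 0.406]; // ImageNet channel means
    const DIV: [f32; 3] = [0.229, 0.224, 0.225]; // ImageNet channel std-devs
    let mut out = [0.0; 3];
    for c in 0..3 {
        out[c] = (rgb[c] / 255.0 - SUB[c]) / DIV[c];
    }
    out
}

fn main() {
    // A mid-grey pixel maps to small values on either side of zero.
    println!("{:?}", normalize_pixel([128.0, 128.0, 128.0]));
}
```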
From source:

```shell
cargo build --release
target/release/image-embedding-rust --config config/config.toml
```
For production, remember to change the bearer token in `config.toml`.
Indexing 1,000 images with the MobileNetV2 backbone model and 4 workers takes about 100 seconds.
Searching for a single image takes about 150 milliseconds.
If you are interested in support, please write us an e-mail at pawel(at)logicai.io.
We chose AGPL v3; if you want to use this crate for commercial purposes, you must comply with the license.