Crates.io | bge |
lib.rs | bge |
version | 0.2.0 |
source | src |
created_at | 2024-03-22 08:56:38.925098 |
updated_at | 2024-04-08 08:54:15.600919 |
description | Rust interface for BGE Small English Embedding Library |
homepage | |
repository | https://github.com/pekc83/bge |
max_upload_size | |
id | 1182391 |
size | 18,535 |
This Rust library provides an interface for generating embeddings with the BGE Small English v1.5 model from Hugging Face, designed for dense retrieval applications. The model is part of the FlagEmbedding project, which focuses on retrieval-augmented LLMs, and offers state-of-the-art performance for embedding generation.
Rust docs: https://docs.rs/bge/latest/bge/struct.Bge.html
Crates.io: https://crates.io/crates/bge
The BGE Small English v1.5 model is available on Hugging Face: https://huggingface.co/BAAI/bge-small-en-v1.5. This model is part of the FlagEmbedding project, which includes various tools and models for retrieval-augmented LLMs. For more details, visit the FlagEmbedding GitHub.
To use this library, you first need to download the model (model.onnx) and tokenizer (tokenizer.json) files for BGE Small English v1.5 from Hugging Face and save them in a known directory on your local machine.
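If you prefer to fetch the files programmatically, the hf-hub crate (a separate dependency, not part of this library) can download them into the local Hugging Face cache. The file names below are assumptions about the BAAI/bge-small-en-v1.5 repository layout, so check the model page for the actual paths:

use hf_hub::api::sync::Api;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Downloads each file into the Hugging Face cache and returns its local path.
    let api = Api::new()?;
    let repo = api.model("BAAI/bge-small-en-v1.5".to_string());
    let tokenizer_path = repo.get("tokenizer.json")?; // assumed file name
    let model_path = repo.get("onnx/model.onnx")?;    // assumed path within the repo
    println!("tokenizer: {:?}\nmodel: {:?}", tokenizer_path, model_path);
    Ok(())
}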
Ensure Rust is installed on your system. Then, add this library to your project's Cargo.toml file.
To use bge in your project, add the following to your Cargo.toml file:
[dependencies]
bge = "0.1.0"
# If your project requires `ort` binaries to be automatically downloaded, include `ort` with the `download-binaries` feature enabled:
ort = { version = "2.0.0-rc.1", default-features = false, features = ["download-binaries"] }
First, initialize the Bge struct with the paths to the tokenizer and model files:
use bge::Bge;
let bge = Bge::from_files("path/to/tokenizer.json", "path/to/model.onnx").unwrap();
To generate embeddings for a given input text:
let input_text = "Your input text here.";
let embeddings = bge.create_embeddings(input_text).unwrap();
println!("Embeddings: {:?}", embeddings);
This will print the embeddings generated by the model for the input text.
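Because BGE Small English v1.5 is intended for dense retrieval, a typical next step is to compare a query embedding against document embeddings. The sketch below is illustrative only: the cosine_similarity helper is not part of this library, and it assumes the value returned by create_embeddings can be borrowed as a slice of f32.

use bge::Bge;

// Hypothetical helper (not part of the bge crate): cosine similarity
// between two embedding vectors of equal length.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    let bge = Bge::from_files("path/to/tokenizer.json", "path/to/model.onnx").unwrap();
    let query = bge.create_embeddings("What is dense retrieval?").unwrap();
    let doc = bge.create_embeddings("Dense retrieval encodes text into vectors.").unwrap();
    // Assumes the embeddings can be viewed as &[f32]; higher scores mean closer matches.
    let score = cosine_similarity(&query, &doc);
    println!("similarity: {score}");
}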
The library can return errors in several scenarios, for example when the input exceeds the model's token limit or when the model fails to load. Handle these errors appropriately in your application.
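For example, here is a minimal sketch that propagates errors with ? instead of unwrapping, assuming the crate's error types implement std::error::Error so they can be boxed:

use bge::Bge;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Both calls return a Result; `?` propagates failures such as a missing
    // model file or an input that exceeds the model's token limit.
    let bge = Bge::from_files("path/to/tokenizer.json", "path/to/model.onnx")?;
    let embeddings = bge.create_embeddings("Your input text here.")?;
    println!("Embeddings: {:?}", embeddings);
    Ok(())
}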
Contributions to this library are welcome. If you encounter any issues or have suggestions for improvements, please open an issue or submit a pull request.
This library is licensed under the MIT License. The BGE models, available on Hugging Face, can be used for commercial purposes free of charge.