image_captioner

Crates.io: image_captioner
lib.rs: image_captioner
version: 0.2.2
source: src
created_at: 2023-12-06 22:08:00.102826
updated_at: 2023-12-19 14:57:53.909405
description: Create captions for images automatically using the BLIP deep learning model.
repository: https://github.com/abusch419/image_captioner.git
id: 1060460
size: 397,997
owner: abusch419 (abusch419)

README

Image Captioner

Image Captioner makes it trivial to generate image captions from Rust code using the BLIP model from Salesforce. All processing happens on your device. After the initial model download, processing an image takes ~5 seconds on an M1 MacBook Pro, no GPU required.

The captions are pretty good. For this image, the automatically generated caption is "a laptop on fire".

[image: a laptop on fire]

Example Usage

Assuming image_captioner is declared as a dependency in your Cargo.toml and you have an image in your crate root called image.jpg:

use image_captioner::get_caption;
use std::path::Path;

fn main() {
    // This path is relative to the directory you run your Rust application from,
    // usually the crate root.
    let image_path = Path::new("./image.jpg");

    // The first run will be slow because the model (990 MB) has to be downloaded.
    match get_caption(image_path) {
        Ok(caption) => println!("Caption: {}", caption),
        Err(err) => eprintln!("Error: {:?}", err),
    }
}
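
The same function works just as well over many files. The sketch below is not part of the crate's documented examples: it walks a hypothetical ./photos directory and captions every .jpg it finds, reusing get_caption exactly as above:

use image_captioner::get_caption;
use std::fs;

fn main() -> std::io::Result<()> {
    // Caption every .jpg file in a directory ("./photos" is a hypothetical path).
    for entry in fs::read_dir("./photos")? {
        let path = entry?.path();
        if path.extension().map_or(false, |ext| ext == "jpg") {
            // Each call runs the BLIP model locally, so expect a few seconds per image.
            match get_caption(&path) {
                Ok(caption) => println!("{}: {}", path.display(), caption),
                Err(err) => eprintln!("{}: {:?}", path.display(), err),
            }
        }
    }
    Ok(())
}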

About the BLIP Deep Learning Model

BLIP (Bootstrapping Language-Image Pre-training) is a model released by Salesforce in 2022 that performs well on a range of vision-language tasks, including image captioning. It is permissively licensed (BSD 3-Clause), so it can be used in both personal and commercial projects.

For more info, see the BLIP model card on Hugging Face.

Model Download

The first time you run image_captioner, it automatically downloads the BLIP model. This process requires an internet connection. The model is 990 MB, so the download may take some time.

The model is downloaded and cached by the transformers Python library. The default cache location is typically:

  • Linux/macOS: ~/.cache/huggingface/hub
  • Windows: C:\Users\<username>\.cache\huggingface\hub

The exact location may vary with your system configuration and environment settings. The transformers library manages this cache automatically, so the model is downloaded once and reused on subsequent runs.
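
If you would rather keep the model somewhere else, the Hugging Face tooling generally honors the HF_HOME environment variable as its cache root. A minimal sketch, assuming the variable is read when get_caption first downloads the model ("./models" is a hypothetical location; you could equally set the variable in your shell before launching the program):

use image_captioner::get_caption;
use std::env;
use std::path::Path;

fn main() {
    // Point the Hugging Face cache at a custom directory. This must be set
    // before the first call to get_caption triggers the model download.
    env::set_var("HF_HOME", "./models");

    match get_caption(Path::new("./image.jpg")) {
        Ok(caption) => println!("Caption: {}", caption),
        Err(err) => eprintln!("Error: {:?}", err),
    }
}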
