xla

Crates.io: xla
lib.rs: xla
version: 0.1.6
source: src
created_at: 2023-04-08 19:24:52.445511
updated_at: 2023-10-02 17:50:57.86915
description: Bindings for the XLA C++ library.
repository: https://github.com/LaurentMazare/xla-rs
size: 254,289
owner: Laurent Mazare (LaurentMazare)

README

xla-rs

Experimentation using the XLA compiler from Rust.
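
As a rough sketch of what the bindings can look like once the setup described below is in place, the following builds and runs a tiny computation on CPU. The method names (PjRtClient::cpu, XlaBuilder::new, constant_r0, build, compile, execute, to_literal_sync) are taken from the upstream examples and should be treated as assumptions to check against this version's documentation:

use xla::{Literal, PjRtClient, XlaBuilder};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Sketch only: API names are assumed from the upstream examples.
    // CPU-backed PJRT client used to compile and run computations.
    let client = PjRtClient::cpu()?;
    // Build an XLA computation that adds two scalar constants.
    let builder = XlaBuilder::new("add-example");
    let x = builder.constant_r0(40f32)?;
    let y = builder.constant_r0(2f32)?;
    let sum = (x + y)?;
    let computation = sum.build()?;
    // Compile for the client's device and execute with no inputs.
    let executable = client.compile(&computation)?;
    let buffers = executable.execute::<Literal>(&[])?;
    // Copy the first output buffer back to the host and print it.
    let result = buffers[0][0].to_literal_sync()?;
    println!("{result:?}");
    Ok(())
}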

Pre-compiled binaries for the XLA library can be downloaded from the elixir-nx/xla repo. These should be extracted at the root of this repository, resulting in an xla_extension subdirectory being created. The currently supported version is 0.5.1.

On Linux, this can be done via:

wget https://github.com/elixir-nx/xla/releases/download/v0.5.1/xla_extension-x86_64-linux-gnu-cpu.tar.gz
tar -xzvf xla_extension-x86_64-linux-gnu-cpu.tar.gz

If the xla_extension directory is not at the root of the project, its path can be specified via the XLA_EXTENSION_DIR environment variable.
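
For example, assuming the archive was unpacked to a hypothetical /opt/xla_extension:

# Point the build at the extracted xla_extension directory (example path).
export XLA_EXTENSION_DIR=/opt/xla_extension
cargo build --release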

Generating some Text Samples with LLaMA

The LLaMA large language model can be used to generate text. The model weights are only available after completing this form; once downloaded, they can be converted to a format this crate can use. Running the example requires a GPU with 16GB of memory, or 32GB of memory when running on CPU (using the -cpu flag, shown after the commands below).

# Download the tokenizer config.
wget https://huggingface.co/hf-internal-testing/llama-tokenizer/raw/main/tokenizer.json -O llama-tokenizer.json

# Extract the pre-trained weights; this requires the transformers Python library to be installed.
# This creates an npz file storing all the weights.
python examples/llama/convert_checkpoint.py ..../LLaMA/7B/consolidated.00.pth

# Run the example.
cargo run --example llama --release
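
To run on CPU instead, the -cpu flag mentioned above can be forwarded through cargo's argument separator (assuming the example accepts it as a plain flag):

# Run the example on CPU; this needs around 32GB of memory.
cargo run --example llama --release -- -cpu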

Generating some Text Samples with GPT2

One of the featured examples is GPT2. To run it, first download the tokenizer configuration file as well as the weights by running the following commands:

# Download the vocab file.
wget https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe

# Extract the pre-trained weights; this requires the transformers Python library to be installed.
# This creates an npz file storing all the weights.
python examples/nanogpt/get_weights.py

# Run the example.
cargo run --example nanogpt --release