SmolToken

SmolToken is a fast Rust library for tokenizing text using the Byte Pair Encoding (BPE) algorithm. Inspired by OpenAI's tiktoken, SmolToken is designed to fill a critical gap by enabling BPE training from scratch while maintaining high performance for encoding and decoding tasks.

Unlike tiktoken, SmolToken supports training tokenizers on custom data. Training is up to ~4x faster than a Rust port of tiktoken's unoptimized educational implementation (`_educational.py`).
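To make the algorithm concrete, here is a minimal sketch of the core BPE training loop: repeatedly count adjacent token pairs and merge the most frequent pair into a new token id. This is plain Rust for exposition only, not SmolToken's internals:

```rust
use std::collections::HashMap;

/// One BPE training step: count adjacent pairs in `tokens`, then merge
/// every occurrence of the most frequent pair into the new id `next_id`.
fn merge_step(tokens: &[u32], next_id: u32) -> Option<(Vec<u32>, (u32, u32))> {
    let mut counts: HashMap<(u32, u32), usize> = HashMap::new();
    for pair in tokens.windows(2) {
        *counts.entry((pair[0], pair[1])).or_insert(0) += 1;
    }
    // Most frequent pair (ties broken arbitrarily in this sketch).
    let best = counts.iter().max_by_key(|entry| *entry.1).map(|(pair, _)| *pair)?;

    // Rewrite the sequence, replacing each occurrence of `best` with `next_id`.
    let mut merged = Vec::with_capacity(tokens.len());
    let mut i = 0;
    while i < tokens.len() {
        if i + 1 < tokens.len() && (tokens[i], tokens[i + 1]) == best {
            merged.push(next_id);
            i += 2;
        } else {
            merged.push(tokens[i]);
            i += 1;
        }
    }
    Some((merged, best))
}

fn main() {
    // Ids 0..=255 are the raw byte alphabet; merges mint new ids from 256 up.
    let mut tokens: Vec<u32> = "hello hello world".bytes().map(u32::from).collect();
    for next_id in 256..260 {
        match merge_step(&tokens, next_id) {
            Some((merged, pair)) => {
                println!("merge {:?} -> {}", pair, next_id);
                tokens = merged;
            }
            None => break,
        }
    }
    println!("Final tokens: {:?}", tokens);
}
```

SmolToken implements the same idea, with the optimizations reflected in the benchmarks below.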

Benchmark Results

SmolToken is already faster than the baseline educational implementation of BPE training:

| Implementation             | Runtime (sec) |
|----------------------------|---------------|
| Unoptimized implementation | 36.94385      |
| SmolToken (optimized)      | 17.63223      |
| SmolToken (with rayon)     | 7.489850      |
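(Relative to the unoptimized baseline, that is roughly a 2.1x speedup single-threaded and about 4.9x with rayon.)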

Tested on:

  • Vocabulary size: 500
  • Dataset: Tiny Stories (~18 MB)
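To reproduce this kind of measurement, a simple timing harness looks like the following. It assumes the `train` signature shown in the usage example below; the dataset path is a placeholder for your local copy of Tiny Stories:

```rust
use std::collections::HashSet;
use std::time::Instant;

use smoltoken::BytePairTokenizer;

fn main() {
    // Placeholder path; point this at your local copy of the dataset.
    let data = std::fs::read_to_string("tiny_stories.txt").expect("dataset not found");
    let special_tokens: HashSet<&str> = HashSet::new();

    let start = Instant::now();
    let _tokenizer = BytePairTokenizer::train(&data, r"\w+|\S", 500, special_tokens);
    println!("Training took {:.3} s", start.elapsed().as_secs_f64());
}
```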

Installation

You can add SmolToken to your Rust project via crates.io:

```sh
cargo add smoltoken
```
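Or declare the dependency in `Cargo.toml` manually (the version below is a placeholder; check crates.io for the latest release):

```toml
[dependencies]
smoltoken = "0.1" # placeholder version; use the latest from crates.io
```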

Example Usage

Here’s a quick example of how to use SmolToken in your Rust project:

```rust
use std::collections::HashSet;

use smoltoken::BytePairTokenizer;

fn main() {
    // A simple split pattern and some training data.
    let pattern = r"\w+|\S";
    let data = "hello hello world";

    // Special tokens to be handled explicitly.
    let special_tokens: HashSet<&str> = HashSet::from(["<unk>", "<pad>"]);

    // Train a BPE tokenizer with a vocabulary size of 300.
    let tokenizer = BytePairTokenizer::train(data, pattern, 300, special_tokens.clone());

    // Encode text into token ranks, handling special tokens.
    let encoded = tokenizer.encode("hello <unk> world", special_tokens.clone());
    println!("Encoded: {:?}", encoded);

    // Decode token ranks back into text.
    let decoded = tokenizer.decode_ordinary(&encoded).unwrap();
    println!("Decoded: {}", decoded);
}
```

Roadmap

  • Concurrency: Add multi-threading support using rayon for faster training, encoding, and decoding (see the sketch after this list).
  • Python Bindings: Integrate with Python using PyO3 to make the library accessible to Python developers.
  • Further Optimizations: Push for performance on par with HuggingFace's tokenizers.
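The pair-counting step is a natural fit for data parallelism. Here is an illustrative sketch of how rayon can parallelize it; this is not SmolToken's code, and it assumes `rayon` as a dependency:

```rust
use std::collections::HashMap;

use rayon::prelude::*;

/// Count adjacent token pairs across many chunks in parallel, then
/// reduce the per-chunk maps into one global count.
fn parallel_pair_counts(chunks: &[Vec<u32>]) -> HashMap<(u32, u32), usize> {
    chunks
        .par_iter()
        .map(|chunk| {
            // Each worker builds a local map over its own chunk.
            let mut local: HashMap<(u32, u32), usize> = HashMap::new();
            for pair in chunk.windows(2) {
                *local.entry((pair[0], pair[1])).or_insert(0) += 1;
            }
            local
        })
        .reduce(HashMap::new, |mut acc, map| {
            // Merge per-chunk counts into one map.
            for (pair, count) in map {
                *acc.entry(pair).or_insert(0) += count;
            }
            acc
        })
}

fn main() {
    let chunks: Vec<Vec<u32>> = vec![
        "hello hello".bytes().map(u32::from).collect(),
        "world world".bytes().map(u32::from).collect(),
    ];
    let counts = parallel_pair_counts(&chunks);
    println!("{} distinct pairs", counts.len());
}
```

Because each worker counts within its own chunk and the maps are combined in a reduction, no locks are needed on the hot path.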

Contributing

We very much welcome contributions to make SmolToken fast, robust, and efficient. Make a fork, create a feature branch if needed, and submit your pull request. Since the library is still in its early release stage, community feedback is especially valuable; just raise an issue here and we will address it promptly.

License

SmolToken is open source and licensed under the MIT License.

Acknowledgements

Special thanks to OpenAI's tiktoken for inspiration and foundational ideas.
