Crates.io | smoltoken |
lib.rs | smoltoken |
source | src |
created_at | 2024-12-03 14:21:28.04868 |
updated_at | 2024-12-07 08:41:28.237495 |
description | A fast library for Byte Pair Encoding (BPE) tokenization. |
homepage | https://github.com/svarunid/smoltoken |
repository | https://github.com/svarunid/smoltoken/rust |
id | 1470036 |
size | 0 |
SmolToken is a fast Rust library for tokenizing text using the Byte Pair Encoding (BPE) algorithm. Inspired by OpenAI's tiktoken, SmolToken is designed to fill a critical gap by enabling BPE training from scratch while maintaining high performance for encoding and decoding tasks.
Unlike tiktoken, SmolToken supports training tokenizers on custom data, and it is up to ~4x faster than a Rust port of tiktoken's unoptimized educational implementation (_educational.py).
SmolToken is already faster than a baseline educational implementation of BPE training:
| Implementation | Runtime (sec) |
|---|---|
| Unoptimized Implementation | 36.94385 |
| SmolToken Optimized | 17.63223 |
| SmolToken (with rayon) | 7.489850 |
Tested on:
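As a rough guide, a timing like the one above could be reproduced with a harness along these lines. This is only a sketch, not the actual benchmark setup: the corpus path, pattern, and vocabulary size are placeholders, and it assumes the `train` signature shown in the quickstart below.

```rust
use std::collections::HashSet;
use std::time::Instant;

use smoltoken::BytePairTokenizer;

fn main() {
    // Hypothetical corpus path; the benchmark's actual training data is not specified here.
    let data = std::fs::read_to_string("corpus.txt").expect("failed to read training data");
    let special_tokens: HashSet<&str> = HashSet::from(["<unk>", "<pad>"]);

    // Time BPE training, mirroring the quickstart's train call.
    let start = Instant::now();
    let _tokenizer = BytePairTokenizer::train(&data, r"\w+|\S", 300, special_tokens);
    println!("Training took {:.5} sec", start.elapsed().as_secs_f64());
}
```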
You can add SmolToken to your Rust project via crates.io:
```sh
cargo add smoltoken
```
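Or declare the dependency in your Cargo.toml manually; the version below is a placeholder, so check crates.io for the latest release:

```toml
[dependencies]
# Placeholder version; pin to the latest smoltoken release on crates.io.
smoltoken = "0.1"
```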
Here’s a quick example of how to use SmolToken in your Rust project:
```rust
use std::collections::HashSet;

use smoltoken::BytePairTokenizer;

fn main() {
    // Define a simple pattern and some training data.
    let pattern = r"\w+|\S";
    let data = "hello hello world";

    // Special tokens to be handled explicitly.
    let special_tokens: HashSet<&str> = HashSet::from(["<unk>", "<pad>"]);

    // Train a BPE tokenizer with a vocabulary size of 300.
    let tokenizer = BytePairTokenizer::train(data, pattern, 300, special_tokens.clone());

    // Encode text into token ranks.
    let encoded = tokenizer.encode("hello <unk> world", special_tokens.clone());
    println!("Encoded: {:?}", encoded);

    // Decode token ranks back into text.
    let decoded = tokenizer.decode_ordinary(&encoded).unwrap();
    println!("Decoded: {}", decoded);
}
```
On the roadmap:

- rayon for faster training, encoding, and decoding.
- PyO3 bindings to make SmolToken accessible to Python developers.

We very much welcome contributions to make SmolToken fast, robust, and efficient. Make a fork, create a feature branch if needed, and submit your pull request. Since the library is in its early release stage, we also expect community feedback to improve on. Just raise an issue here and we will fix it promptly.
SmolToken is open source and licensed under the MIT License.
Special thanks to OpenAI's tiktoken for inspiration and foundational ideas.