unsloth-rs-core

Version: 0.1.0
Created: 2026-01-17 16:00:43 UTC
Updated: 2026-01-17 16:00:43 UTC
Description: Fast, efficient implementation of Unsloth optimization kernels and training loop in Rust.
Repository: https://github.com/Moaisus-admin/unsloth-rs
Size: 189,229
Owner: Moaisus (Moaisus-admin)

README

Unsloth Rust 🦥🦀

Unsloth, but faster. Written in Rust.

This library implements high-performance optimization kernels (Flash Attention, RoPE, RMSNorm) and training loops for LLMs using Rust and Candle. It provides a drop-in replacement API for the original Unsloth library.
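As a point of reference for what the RMSNorm kernel computes, here is a minimal plain-Rust sketch of RMS normalization over one row of hidden states. The function name, slice layout, and epsilon value are illustrative assumptions, not this crate's kernel API.

// Illustrative RMSNorm over one row of hidden states:
//   y_i = x_i / sqrt(mean(x^2) + eps) * w_i
// A plain-Rust reference sketch, not the fused kernel the crate ships.
fn rms_norm(x: &[f32], weight: &[f32], eps: f32) -> Vec<f32> {
    assert_eq!(x.len(), weight.len());
    // Mean of squares over the hidden dimension.
    let mean_sq = x.iter().map(|v| v * v).sum::<f32>() / x.len() as f32;
    let inv_rms = 1.0 / (mean_sq + eps).sqrt();
    // Scale each element by the inverse RMS and the learned weight.
    x.iter()
        .zip(weight)
        .map(|(v, w)| v * inv_rms * w)
        .collect()
}

fn main() {
    let x = [1.0_f32, -2.0, 3.0, -4.0];
    let w = [1.0_f32; 4];
    println!("{:?}", rms_norm(&x, &w, 1e-6));
}

A fused GPU kernel would compute the same quantity while avoiding intermediate buffers and operating on whole batches at once.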

Features

  • Zero-Dependency Python Install: Pre-built wheels for Linux, macOS, and Windows.
  • High Performance: Custom CUDA kernels and Rust-based orchestration.
  • Native Compatibility: Includes an unsloth_native API that mimics unsloth.
  • LoRA Support: Efficient LoRA injection and training (see the sketch after this list).
  • GGUF Export: Merge and save directly to Safetensors for GGUF conversion.
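As background for the LoRA item above, the sketch below shows the low-rank update being injected: the adapted weight is W' = W + (alpha / r) * B * A, where A (r x in) and B (out x r) are the small trainable matrices. The function name and flat row-major layout are illustrative assumptions, not this crate's API.

// Illustrative LoRA merge on flat row-major matrices:
//   W' = W + (alpha / r) * B * A
// A reference sketch only; the crate's own LoRA types will differ.
fn merge_lora(
    w: &mut [f32],  // out_dim * in_dim, row-major base weight
    a: &[f32],      // rank * in_dim, row-major
    b: &[f32],      // out_dim * rank, row-major
    out_dim: usize,
    in_dim: usize,
    rank: usize,
    alpha: f32,
) {
    assert_eq!(w.len(), out_dim * in_dim);
    assert_eq!(a.len(), rank * in_dim);
    assert_eq!(b.len(), out_dim * rank);
    let scale = alpha / rank as f32;
    for o in 0..out_dim {
        for i in 0..in_dim {
            // (B * A)[o][i] = sum_k B[o][k] * A[k][i]
            let mut acc = 0.0_f32;
            for k in 0..rank {
                acc += b[o * rank + k] * a[k * in_dim + i];
            }
            w[o * in_dim + i] += scale * acc;
        }
    }
}

Because only A and B are trained, optimizer state scales with the rank rather than with the full weight matrix.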

Installation

Python (Recommended)

pip install unsloth_rs

Rust

Add to your Cargo.toml:

[dependencies]
unsloth_rs = "0.1.0"

Usage

from unsloth_native import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    load_in_4bit=True
)
# ... training code ...

License

Apache-2.0
