cuda-rt

  • Version: 0.7.2
  • Published: 2025-11-06
  • Description: Manga translation tools
  • Homepage: https://koharu.rs
  • Repository: https://github.com/mayocream/koharu
  • Size: 69,706 bytes
  • Author: Mayo Takanashi (mayocream)

README

Koharu

An automated manga translation tool powered by LLMs, written in Rust.

Koharu introduces a new workflow for manga translation, utilizing the power of LLMs to automate the process. It combines the capabilities of object detection, OCR, inpainting, and LLMs to create a seamless translation experience.
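
At a glance, these stages compose into a single pipeline. The sketch below is purely illustrative of the data flow; the types and function names are hypothetical stubs, not Koharu's actual API:

```rust
// Hypothetical sketch of the translation pipeline's data flow.
// None of these names come from the actual crate.
struct Image;
struct Region;

fn detect_text(_img: &Image) -> Vec<Region> { vec![Region] }         // object detection
fn ocr(_img: &Image, _region: &Region) -> String { String::new() }   // OCR
fn inpaint(img: Image, _regions: &[Region]) -> Image { img }         // erase source text
fn llm_translate(lines: &[String]) -> Vec<String> { lines.to_vec() } // GGUF LLM
fn typeset(img: Image, _regions: &[Region], _lines: &[String]) -> Image { img }

fn translate_page(img: Image) -> Image {
    let regions = detect_text(&img);                                         // 1. find text boxes
    let lines: Vec<String> = regions.iter().map(|r| ocr(&img, r)).collect(); // 2. read them
    let clean = inpaint(img, &regions);                                      // 3. remove the originals
    let translated = llm_translate(&lines);                                  // 4. translate
    typeset(clean, &regions, &translated)                                    // 5. render the result
}

fn main() {
    let _page = translate_page(Image);
}
```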

Under the hood, Koharu uses ort and candle for high-performance inference, and uses Tauri for the GUI. All components are written in Rust, ensuring safety and speed.

[!NOTE] For help and support, please join our Discord server.

Features

  • Automated Workflow: From image input to translated output, Koharu automates the entire manga translation process.
  • GPU Acceleration: Leverages NVIDIA GPUs via CUDA for faster processing.
  • High-Quality Models: Utilizes state-of-the-art ONNX models for text detection, OCR, and inpainting.
  • LLM Integration: Supports various quantized LLM models in GGUF format for translation tasks.
  • User-Friendly GUI: Built with Tauri, providing an intuitive interface for users.

GPU Acceleration

Currently, Koharu only supports NVIDIA GPUs via CUDA.

CUDA

Koharu is built with CUDA support, allowing it to leverage the power of NVIDIA GPUs for faster processing.

Koharu bundles CUDA Toolkit 12 and cuDNN 9, so you don't need to install them separately. Just make sure you have an appropriate NVIDIA driver installed on your system; you can verify it by running nvidia-smi.
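
As a quick programmatic check (a minimal sketch, assuming candle-core is compiled with its cuda feature), you can ask candle for a CUDA device and fall back to the CPU:

```rust
use candle_core::Device;

fn main() {
    // Without the `cuda` feature (or a working driver), Device::new_cuda
    // returns an error, so we fall back to the CPU device.
    let device = Device::new_cuda(0).unwrap_or(Device::Cpu);
    println!("running on {device:?}");
}
```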

Models

Koharu relies on a mix of ONNX models and LLM models to perform various tasks.

ONNX Models

Koharu uses several pre-trained models for different tasks, including text detection, OCR, and inpainting.

The models will be automatically downloaded when you run Koharu for the first time.

We convert the original models to ONNX format for better performance and compatibility with Rust. The converted models are hosted on Hugging Face.
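
For a sense of what loading one of these models looks like, here is a minimal sketch using the ort 2.x API; the model filename is a placeholder, since Koharu fetches its models automatically:

```rust
use ort::execution_providers::CUDAExecutionProvider;
use ort::session::Session;

fn main() -> ort::Result<()> {
    // `detector.onnx` is a placeholder path; Koharu downloads its converted
    // models from Hugging Face on first run.
    let session = Session::builder()?
        // Prefer the CUDA execution provider; ort falls back to the CPU
        // provider if CUDA is unavailable.
        .with_execution_providers([CUDAExecutionProvider::default().build()])?
        .commit_from_file("detector.onnx")?;
    println!("loaded model with {} input(s)", session.inputs.len());
    Ok(())
}
```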

LLM Models

Koharu supports various quantized LLM models in GGUF format via candle.

[!NOTE] Please open an issue if you want support for other models.
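
To illustrate what a GGUF file carries, the sketch below (the filename is a placeholder) reads a model's header with candle and prints its contents:

```rust
use candle_core::quantized::gguf_file;

fn main() -> candle_core::Result<()> {
    // `model.gguf` is a placeholder; any quantized GGUF model file works.
    let mut file = std::fs::File::open("model.gguf")?;
    let content = gguf_file::Content::read(&mut file)?;
    println!(
        "{} tensors, {} metadata entries",
        content.tensor_infos.len(),
        content.metadata.len()
    );
    Ok(())
}
```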

Installation

You can download the latest release of Koharu from the releases page.

We provide pre-built binaries for Windows. For other platforms, you may need to build from source; see the Development section below.

Development

Prerequisites

  • Rust (1.85 or later)
  • Bun (1.0 or later)
  • Python (3.12 or later) (optional)

Install dependencies

bun install

Compile candle with CUDA feature

The LLM feature relies heavily on candle. To compile candle-kernels with CUDA support, you need to:

  1. Download and install CUDA Toolkit 12.9, then follow the steps below to set up the environment variables:

    1. Add the CUDA bin directory to your PATH environment variable (e.g., C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.9\bin).
    2. Set the CUDA_PATH environment variable to point to your CUDA installation directory (e.g., C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.9).
    3. Make sure nvcc is accessible from the command line by running nvcc --version.
  2. Download and install Visual Studio 2022; during installation, make sure to select the "Desktop development with C++" workload. Then follow the steps below to set up the environment variables:

    1. Open "x64 Native Tools Command Prompt for VS 2022" from the Start menu, and find the path of cl.exe by running where cl.
    2. Add the directory containing cl.exe to your PATH environment variable.

Build

bun tauri build

# enable CUDA acceleration
bun tauri build --features cuda

Usage

After building, you can run the Koharu binary located in target/release/.

Related Projects

  • LabelPlus - A manga annotation tool with Photoshop integration.
  • LunaTranslator - Translation tool for visual novels and games.

License

Koharu is licensed under the GNU General Public License v3.0.
