| Crates.io | cuda-rt |
| lib.rs | cuda-rt |
| version | 0.7.2 |
| created_at | 2025-11-06 08:03:28.333026+00 |
| updated_at | 2025-11-06 08:03:28.333026+00 |
| description | Manga translation tools |
| homepage | https://koharu.rs |
| repository | https://github.com/mayocream/koharu |
| max_upload_size | |
| id | 1919277 |
| size | 69,706 |
Automated manga translation tool with LLM, written in Rust.
Koharu introduces a new workflow for manga translation, utilizing the power of LLMs to automate the process. It combines the capabilities of object detection, OCR, inpainting, and LLMs to create a seamless translation experience.
Under the hood, Koharu uses ort and candle for high-performance inference, and Tauri for the GUI. All components are written in Rust, ensuring safety and speed.
[!NOTE] For help and support, please join our Discord server.
Currently, Koharu only supports NVIDIA GPUs via CUDA.
Koharu is built with CUDA support, allowing it to leverage the power of NVIDIA GPUs for faster processing.
Koharu bundles CUDA toolkit 12 and cuDNN 9, so you don't need to install them separately. Just make sure you have the appropriate NVIDIA drivers installed on your system.
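As a quick way to check that the driver and runtime are visible from Rust, here is a minimal sketch using candle (one of Koharu's inference backends). It assumes candle-core is built with its cuda feature enabled and is not part of Koharu itself:

```rust
use candle_core::Device;

fn main() {
    // Creating a CUDA device only succeeds if the NVIDIA driver and the
    // CUDA runtime can be loaded; otherwise candle returns an error.
    match Device::new_cuda(0) {
        Ok(_) => println!("CUDA device 0 is available"),
        Err(e) => eprintln!("CUDA is not available: {e}"),
    }
}
```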
Koharu relies on a mix of ONNX models and LLM models to perform various tasks.
Koharu uses several pre-trained models for different tasks (object detection, OCR, and inpainting). The models will be automatically downloaded when you run Koharu for the first time.
We convert the original models to ONNX format for better performance and compatibility with Rust. The converted models are hosted on Hugging Face.
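For illustration, a minimal sketch of loading a converted ONNX model with ort might look like the following. The file name is a placeholder and the exact module paths depend on the ort 2.x version in use, so treat this as a sketch rather than Koharu's actual loading code:

```rust
use ort::execution_providers::CUDAExecutionProvider;
use ort::session::Session;

fn main() -> ort::Result<()> {
    // Build an inference session from a converted ONNX model file
    // (placeholder path; Koharu downloads its models automatically).
    let session = Session::builder()?
        // Ask for the CUDA execution provider; ort falls back to CPU
        // if the provider cannot be registered.
        .with_execution_providers([CUDAExecutionProvider::default().build()])?
        .commit_from_file("detector.onnx")?;

    // Input and output shapes are model-specific; see the Hugging Face model cards.
    println!("loaded model with {} input(s)", session.inputs.len());
    Ok(())
}
```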
Koharu supports a selection of quantized LLM models in GGUF format via candle.
[!NOTE] Please open an issue if you want support for other models.
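For the LLM side, a minimal sketch of how a quantized GGUF model can be loaded through candle's quantized_llama module might look like this (the path is a placeholder; this is an illustration, not Koharu's actual code):

```rust
use candle_core::quantized::gguf_file;
use candle_core::Device;
use candle_transformers::models::quantized_llama::ModelWeights;

fn load_gguf(path: &str) -> anyhow::Result<ModelWeights> {
    // Use the first CUDA device when available, otherwise fall back to the CPU.
    let device = Device::cuda_if_available(0)?;

    // A GGUF file carries both the model metadata and the quantized tensors.
    let mut file = std::fs::File::open(path)?;
    let content = gguf_file::Content::read(&mut file)?;

    // Materialize the quantized weights on the chosen device.
    let model = ModelWeights::from_gguf(content, &mut file, &device)?;
    Ok(model)
}
```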
You can download the latest release of Koharu from the releases page.
We provide pre-built binaries for Windows; for other platforms, you may need to build from source (see the Development section below).
To build Koharu from source, first install the JavaScript dependencies:

bun install
candle with CUDA feature

The LLM feature heavily relies on candle. To compile candle-kernels with CUDA support, you need the CUDA Toolkit and the MSVC toolchain.

Download and install CUDA Toolkit 12.9, then follow the steps below to set up the environment variables:

1. Add the CUDA bin directory to your PATH environment variable (e.g., C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.9\bin).
2. Set the CUDA_PATH environment variable to point to your CUDA installation directory (e.g., C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.9).
3. Verify that nvcc is accessible from the command line by running nvcc --version.

Download and install Visual Studio 2022; during installation, make sure to select the "Desktop development with C++" workload. Then follow the steps below to set up the environment variables:

1. Locate cl.exe by running where cl.
2. Add the directory containing cl.exe to your PATH environment variable.

To build Koharu, run:

bun tauri build
# enable CUDA acceleration
bun tauri build --features cuda
After building, you can run the Koharu binary located in target/release/.
Koharu is licensed under the GNU General Public License v3.0.