| Crates.io | spel-right |
| lib.rs | spel-right |
| version | 0.5.1 |
| created_at | 2025-10-27 22:09:02.251525+00 |
| updated_at | 2026-01-13 16:58:47.726909+00 |
| description | A fast and lightweight spell checker and suggester. |
| homepage | |
| repository | https://github.com/Zefirchiky/SpelRight |
| max_upload_size | |
| id | 1903799 |
| size | 7,458,359 bytes |
Yes, the name's spelling is intentional.
A simple spell checker written in Rust. Includes a CLI and a library.
Also available on crates.io!
Supports any UTF-8 input (kinda, WIP), as long as the input file is in the right format (see Dataset Fixer or `load_words_dict`).
Primarily written for the MangaHub project's Novel ecosystem. And to learn Rust :D
> [!NOTE]
> For now, only byte-level processing is supported (WIP).
Benchmarks, measured on my i5-12450H laptop with VS Code open, using an English dictionary:
- Load and parse a 4 MB file with 370,105 words in under ~2 ms.
- Spell checking: ~50,000,000 words/s when all words are correct (worst-case scenario, `batch_par_check`).
- Sorted suggestions for 1,000 incorrect words in ~63 ms (~15,800 words/s, worst-case scenario, `batch_par_suggest`).
Memory usage is minimal: a few big strings holding all words without delimiters, plus a small vec of metadata. That totals the dictionary size + ~200 bytes (depending on the longest word's length) + the transient cost of some operations.
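For orientation, here is a minimal library usage sketch. The function names `load_words_dict`, `batch_par_check`, and `batch_par_suggest` come from the text above, but the `SpellChecker` type name and all signatures are assumptions, not the crate's verified API:

```rust
// Hypothetical usage sketch -- function names are taken from the
// benchmarks above; the type name and signatures are assumptions.
use spel_right::SpellChecker;

fn main() -> std::io::Result<()> {
    // Load a correctly formatted dictionary file
    // (see Dataset Fixer / `load_words_dict` above).
    let checker = SpellChecker::load_words_dict("words.txt")?;

    let words = ["funny", "wrd", "sjdkfhsdjfh"];

    // Check many words in parallel; assumed to return one bool per word.
    let results: Vec<bool> = checker.batch_par_check(&words);

    // Sorted suggestions for each word; assumed signature.
    let suggestions: Vec<Vec<String>> = checker.batch_par_suggest(&words);

    for ((word, is_ok), sugg) in words.iter().zip(results).zip(suggestions) {
        if is_ok {
            println!("✅ {word}");
        } else if sugg.is_empty() {
            println!("❌ Wrong word '{word}', no suggestions");
        } else {
            println!("❓ {word} => {}", sugg.join(" "));
        }
    }
    Ok(())
}
```

The CLI shown below exposes the same check-then-suggest flow.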
Example, with `spell.exe` in `%PATH%` and `words.txt` in the same folder:

```
> spell funny wrd sjdkfhsdjfh
✅ funny
❓ wrd => wro wry word wad rd wird ord urd ward wd
❌ Wrong word 'sjdkfhsdjfh', no suggestions
```
Words of each length are stored in (optionally) immutable blobs, sorted bytewise.
Info about those blobs is stored alongside: word length and/or count.
Pros:
- O(log n) lookup (binary search within each sorted blob)

Cons:

The Pros totally outweigh the Cons!
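A minimal sketch of that layout, assuming one sorted blob per word length; the `LenGroup` name comes from the section below, but the fields and method here are illustrative rather than the crate's actual code:

```rust
/// One blob per word length: all words of length `len` concatenated
/// back to back (no delimiters) and sorted bytewise.
struct LenGroup {
    len: usize,      // word length in bytes
    count: usize,    // number of words in the blob
    blob: Box<[u8]>, // count * len bytes, sorted bytewise
}

impl LenGroup {
    /// O(log n) membership test: binary search over fixed-width chunks.
    fn contains(&self, word: &[u8]) -> bool {
        debug_assert_eq!(word.len(), self.len);
        let (mut lo, mut hi) = (0, self.count);
        while lo < hi {
            let mid = (lo + hi) / 2;
            let chunk = &self.blob[mid * self.len..(mid + 1) * self.len];
            match chunk.cmp(word) {
                std::cmp::Ordering::Less => lo = mid + 1,
                std::cmp::Ordering::Greater => hi = mid,
                std::cmp::Ordering::Equal => return true,
            }
        }
        false
    }
}
```

A dictionary is then just a collection of `LenGroup`s indexed by length: checking a word means picking the group for `word.len()` and binary-searching its blob, hence the O(log n) above.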
When iterating over each `LenGroup`, we can use the maximum allowed difference to compute the maximum number of deletions, insertions, and substitutions.
As an example:
checking nothng (group 6) against group 7, the difference between them is 1 insertion and 1 (optional) substitution.
With one insertion, nothng becomes a group-7 word, and with the optional substitution it can match other words.
max_delete + max_insert + max_substitution will always sum to exactly max_dif.
This is multiple times faster than other distance-finding algorithms.
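A sketch of that budget calculation, assuming `max_dif` is the total allowed edit distance (the names mirror the text above; the function itself is illustrative, not the crate's exact code):

```rust
/// Edit budget when checking a word of length `word_len` against a
/// group of words of length `group_len`, allowing at most `max_dif` edits.
/// Returns (max_delete, max_insert, max_substitution), or None if the
/// group is unreachable within `max_dif` edits.
fn edit_budget(word_len: usize, group_len: usize, max_dif: usize) -> Option<(usize, usize, usize)> {
    let gap = word_len.abs_diff(group_len);
    if gap > max_dif {
        return None; // length gap alone already exceeds the budget
    }
    // The length gap must be closed entirely by insertions (group longer)
    // or deletions (group shorter); the remaining budget goes to
    // substitutions, so the three maxima always sum to exactly max_dif.
    let (max_delete, max_insert) = if group_len > word_len { (0, gap) } else { (gap, 0) };
    Some((max_delete, max_insert, max_dif - gap))
}

// e.g. "nothng" (length 6) against group 7 with max_dif = 2:
// edit_budget(6, 7, 2) == Some((0, 1, 1))
// -- 1 insertion plus 1 optional substitution, as in the example above.
```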
- Checking word correctness
- Suggesting similar words
- Adding new words
- Support different languages
- Full languages support
- Make good CLI
- Make it fast
  - Suggestions (12,500 words/s)
  - Loading (2.2 ms)
Total memory usage is pretty much minimal.
> [!NOTE]
> `read_to_string` of 370,000 words (~4 MB) takes about 2 ms on my machine.
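A quick way to reproduce that measurement with just the standard library (a generic timing sketch, not the crate's benchmark harness):

```rust
use std::{fs, time::Instant};

fn main() -> std::io::Result<()> {
    let start = Instant::now();
    // ~4 MB file with ~370,000 newline-separated words.
    let text = fs::read_to_string("words.txt")?;
    println!("read {} bytes in {:?}", text.len(), start.elapsed());
    Ok(())
}
```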
Better dataset

> [!NOTE]
> This made it harder to work with the dataset manually.