| Crates.io | jieba-macros |
| lib.rs | jieba-macros |
| version | 0.8.1 |
| created_at | 2024-12-25 14:50:10.058441+00 |
| updated_at | 2025-09-07 11:50:48.8142+00 |
| description | jieba-rs proc-macro |
| homepage | |
| repository | https://github.com/messense/jieba-rs |
| max_upload_size | |
| id | 1495059 |
| size (bytes) | 527,182 |
🚀 Help me become a full-time open-source developer by sponsoring me on GitHub
The Jieba Chinese Word Segmentation Implemented in Rust
Add it to your Cargo.toml:
```toml
[dependencies]
jieba-rs = "0.8"
```
Then you are good to go. If you are using Rust 2015, you also need to add `extern crate jieba_rs;` to your crate root.
```rust
use jieba_rs::Jieba;

fn main() {
    let jieba = Jieba::new();
    let words = jieba.cut("我们中出了一个叛徒", false);
    assert_eq!(words, vec!["我们", "中", "出", "了", "一个", "叛徒"]);
}
```
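The second argument to `cut` toggles HMM-based recognition of words missing from the dictionary. Beyond plain `cut`, `Jieba` also provides search-engine style segmentation and runtime dictionary entries; a minimal sketch, assuming the `cut_for_search` and `add_word` APIs from recent jieba-rs releases:

```rust
use jieba_rs::Jieba;

fn main() {
    // add_word requires a mutable instance.
    let mut jieba = Jieba::new();

    // cut_for_search further splits long words, which suits index building;
    // the boolean enables HMM-based discovery of out-of-dictionary words.
    let tokens = jieba.cut_for_search("小明硕士毕业于中国科学院计算所", true);
    println!("{:?}", tokens);

    // add_word inserts a custom entry at runtime;
    // frequency and POS tag are optional.
    jieba.add_word("叛徒", Some(10000), None);
    let words = jieba.cut("我们中出了一个叛徒", false);
    println!("{:?}", words);
}
```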
The following optional features are available (see the keyword-extraction sketch after this list):

- `default-dict` feature enables the embedded dictionary; this feature is enabled by default
- `tfidf` feature enables the TF-IDF keywords extractor
- `textrank` feature enables the TextRank keywords extractor

```toml
[dependencies]
jieba-rs = { version = "0.8", features = ["tfidf", "textrank"] }
```
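With the `tfidf` and `textrank` features enabled, keyword extraction becomes available. A minimal sketch, assuming the `TfIdf`/`TextRank` extractors and the `KeywordExtract` trait exposed by recent jieba-rs releases (type and method names may differ across versions):

```rust
use jieba_rs::{Jieba, KeywordExtract, TextRank, TfIdf};

fn main() {
    let jieba = Jieba::new();
    let sentence = "今天纽约的天气真好啊，京华大酒店的张尧经理吃了一只北京烤鸭";

    // TF-IDF: rank words by term frequency weighted against a reference corpus.
    // The empty Vec means no part-of-speech filter is applied.
    let tfidf = TfIdf::default();
    let top_k = tfidf.extract_keywords(&jieba, sentence, 3, vec![]);
    println!("{:?}", top_k);

    // TextRank: graph-based ranking over word co-occurrence,
    // here restricted to place names ("ns") and nouns ("n").
    let textrank = TextRank::default();
    let top_k = textrank.extract_keywords(
        &jieba,
        sentence,
        3,
        vec![String::from("ns"), String::from("n")],
    );
    println!("{:?}", top_k);
}
```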
Run the benchmarks with:

```bash
cargo bench --all-features
```
jieba-rs bindings:

- `@node-rs/jieba` NodeJS binding
- `jieba-php` PHP binding
- `rjieba-py` Python binding
- `cang-jie` Chinese tokenizer for tantivy
- `tantivy-jieba` An adapter that bridges between tantivy and jieba-rs
- `jieba-wasm` the WebAssembly binding

This work is released under the MIT license. A copy of the license is provided in the LICENSE file.