| Crates.io | chinese_segmenter |
| lib.rs | chinese_segmenter |
| version | 1.0.1 |
| created_at | 2020-05-05 17:58:48.291745+00 |
| updated_at | 2022-08-02 06:08:19.395019+00 |
| description | Tokenize Chinese sentences using a dictionary-driven largest first matching approach. |
| homepage | |
| repository | https://github.com/sotch-pr35mac/chinese_segmenter |
| max_upload_size | |
| id | 237818 |
| size | 8,133 |
Segment Chinese sentences into component words using a dictionary-driven, largest-first matching approach.
```rust
extern crate chinese_segmenter;

use chinese_segmenter::{initialize, tokenize};

fn main() {
    let sentence = "今天晚上想吃羊肉吗?";

    initialize(); // Optional initialization to preload the dictionary data

    let result: Vec<&str> = tokenize(sentence);
    println!("{:?}", result); // --> ["今天", "晚上", "想", "吃", "羊肉", "吗"]
}
```
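The largest-first strategy (greedy longest-match) scans the sentence left to right and, at each position, takes the longest dictionary entry that matches before falling back to a single character. The following is a minimal sketch of that idea against a hard-coded toy dictionary; the `segment` function and the word list are hypothetical illustrations, not part of this crate's API (the crate loads its real dictionary internally via `initialize`).

```rust
use std::collections::HashSet;

// Greedy longest-match segmentation over a toy dictionary (illustrative only).
// `max_len` is the length, in characters, of the longest dictionary entry.
fn segment<'a>(text: &'a str, dict: &HashSet<&str>, max_len: usize) -> Vec<&'a str> {
    let chars: Vec<(usize, char)> = text.char_indices().collect();
    let mut tokens = Vec::new();
    let mut i = 0;
    while i < chars.len() {
        // Try the longest candidate first, shrinking until a dictionary hit;
        // unknown characters fall back to single-character tokens.
        let mut matched = 1;
        for len in (1..=max_len.min(chars.len() - i)).rev() {
            let start = chars[i].0;
            let end = chars.get(i + len).map_or(text.len(), |&(b, _)| b);
            if dict.contains(&text[start..end]) {
                matched = len;
                break;
            }
        }
        let start = chars[i].0;
        let end = chars.get(i + matched).map_or(text.len(), |&(b, _)| b);
        tokens.push(&text[start..end]);
        i += matched;
    }
    tokens
}

fn main() {
    let dict: HashSet<&str> = ["今天", "晚上", "想", "吃", "羊肉", "吗"].into();
    let tokens = segment("今天晚上想吃羊肉吗", &dict, 4);
    println!("{:?}", tokens); // --> ["今天", "晚上", "想", "吃", "羊肉", "吗"]
}
```

Trying candidates from longest to shortest is what makes the match "largest first": given both 羊 and 羊肉 in the dictionary, the two-character word wins at that position.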