word-segmenters

Crates.io: word-segmenters
lib.rs: word-segmenters
version: 0.3.2
source: src
created_at: 2020-11-23 12:21:24.140705
updated_at: 2020-12-16 10:37:40.255724
description: Fast English word segmentation
homepage: https://github.com/InstantDomainSearch/word-segmenters
repository: https://github.com/InstantDomainSearch/word-segmenters
max_upload_size:
id: 315369
size: 12,298,663
owner: Dirkjan Ochtman (djc)
documentation: https://docs.rs/word-segmenters

README

word-segmenters: fast English word segmentation in Rust


This crate has been renamed. Refer to instant-segment for the latest updates.

word-segmenters is a fast Apache-2.0 library for English word segmentation. It is based on the Python wordsegment project written by Grant Jenks, which is in turn based on code from Peter Norvig's chapter Natural Language Corpus Data in the book Beautiful Data (Segaran and Hammerbacher, 2009).
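To make the lineage concrete, here is a minimal, self-contained sketch of Norvig's unigram approach in Rust. This is not this crate's API: the function name, the 24-character candidate-word cap, the fallback score for unseen words, and the toy counts are all taken or adapted from the Python wordsegment code for illustration.

```rust
use std::collections::HashMap;

/// Norvig-style segmentation: pick the split of `text` that maximizes the
/// sum of log unigram probabilities, via dynamic programming over end indices.
/// `unigrams` maps a word to its corpus count; `total` is the corpus size.
/// Assumes ASCII input, so byte indices and character indices coincide.
fn segment(text: &str, unigrams: &HashMap<&str, f64>, total: f64) -> Vec<String> {
    let n = text.len();
    // best[i] holds (best score for text[..i], start index of the last word).
    let mut best: Vec<(f64, usize)> = vec![(f64::NEG_INFINITY, 0); n + 1];
    best[0] = (0.0, 0);
    for end in 1..=n {
        // Cap candidate words at 24 characters, as the Python code does.
        for start in end.saturating_sub(24)..end {
            let word = &text[start..end];
            // Unseen words get a length-penalized fallback (Norvig's heuristic).
            let log_p = match unigrams.get(word) {
                Some(count) => (count / total).ln(),
                None => (10.0 / (total * 10f64.powi(word.len() as i32))).ln(),
            };
            let score = best[start].0 + log_p;
            if score > best[end].0 {
                best[end] = (score, start);
            }
        }
    }
    // Follow the back-pointers from the end to recover the chosen words.
    let mut words = Vec::new();
    let mut end = n;
    while end > 0 {
        let start = best[end].1;
        words.push(text[start..end].to_string());
        end = start;
    }
    words.reverse();
    words
}

fn main() {
    let unigrams: HashMap<&str, f64> =
        [("choose", 50_000.0), ("spain", 30_000.0)].into_iter().collect();
    // Prints ["choose", "spain"]: the two known words beat the
    // length-penalized score of the unsplit string.
    println!("{:?}", segment("choosespain", &unigrams, 1.0e9));
}
```

The real crate additionally scores with bigram counts, which is why its shared state holds both a unigram and a bigram map.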

The data files in this repository are derived from the Google Web Trillion Word Corpus, as described by Thorsten Brants and Alex Franz, and distributed by the Linguistic Data Consortium. Note that this data "may only be used for linguistic education and research", so for any other usage you should acquire a different data set.

For the microbenchmark included in this repository, word-segmenters is ~17x faster than the Python implementation. Further optimizations are planned -- see the issues. The API has been carefully constructed so that multiple segmentations can share the underlying state (mainly the unigram and bigram maps), allowing parallel usage; the pattern is sketched below.
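The shape of that sharing can be pictured with standard library primitives. This sketch reuses the hypothetical segment function from above and shares its maps across threads with Arc; the crate's actual types and method names are not shown here.

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::thread;

fn main() {
    // Build the expensive state once; Arc lets every worker thread
    // read the maps concurrently without copying them.
    let unigrams: Arc<HashMap<&'static str, f64>> = Arc::new(
        [("choose", 50_000.0), ("spain", 30_000.0)].into_iter().collect(),
    );

    let inputs = ["choosespain", "spainchoose"];
    let handles: Vec<_> = inputs
        .iter()
        .map(|&text| {
            let unigrams = Arc::clone(&unigrams);
            // `segment` is the sketch above; each thread runs its own
            // search while borrowing the same shared maps.
            thread::spawn(move || segment(text, &unigrams, 1.0e9))
        })
        .collect();
    for handle in handles {
        println!("{:?}", handle.join().unwrap());
    }
}
```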

Commit count: 150
