🚀 Help me to become a full-time open-source developer by sponsoring me on GitHub
The Jieba Chinese word segmentation library, implemented in Rust.
Add it to your `Cargo.toml`:

```toml
[dependencies]
jieba-rs = "0.8"
```

Then you are good to go. If you are using Rust 2015, you also need to add `extern crate jieba_rs;` to your crate root.
```rust
use jieba_rs::Jieba;
fn main() {
    let jieba = Jieba::new();
    // The second argument toggles HMM-based recognition of words not in the dictionary.
    let words = jieba.cut("我们中出了一个叛徒", false);
    assert_eq!(words, vec!["我们", "中", "出", "了", "一个", "叛徒"]);
}
```
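Beyond `cut`, the `Jieba` struct also exposes a search-engine segmentation mode and part-of-speech tagging. A minimal sketch, assuming the `cut_for_search` and `tag` methods and the `Tag { word, tag }` shape found in recent jieba-rs releases; exact signatures may differ in your version:

```rust
use jieba_rs::Jieba;

fn main() {
    let jieba = Jieba::new();

    // Search-engine mode: additionally emits shorter sub-words, which is
    // useful when building a search index. The boolean again toggles HMM.
    let words = jieba.cut_for_search("小明硕士毕业于中国科学院计算所", true);
    println!("{:?}", words);

    // Part-of-speech tagging: pairs each segmented word with a POS tag.
    for t in jieba.tag("我们中出了一个叛徒", false) {
        println!("{} / {}", t.word, t.tag);
    }
}
```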
jieba-rs ships several optional Cargo features:

- `default-dict` feature enables the embedded dictionary; this feature is enabled by default
- `tfidf` feature enables the TF-IDF keyword extractor
- `textrank` feature enables the TextRank keyword extractor
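With the `tfidf` and/or `textrank` features enabled (see the `Cargo.toml` snippet below), keywords can be extracted through the `KeywordExtract` trait. A minimal sketch, assuming the extractor API of recent releases (`TfIdf::default()`, `TextRank::default()`, and `extract_keywords`); older versions used different type names such as `TFIDF`, so check the docs for the version you pin:

```rust
use jieba_rs::{Jieba, KeywordExtract, TextRank, TfIdf};

fn main() {
    let jieba = Jieba::new();
    let sentence = "今天纽约的天气真好啊，京华大酒店的张尧经理吃了一只北京烤鸭。";

    // TF-IDF weights each candidate word's frequency against an embedded IDF table.
    let tfidf = TfIdf::default();
    println!("{:?}", tfidf.extract_keywords(&jieba, sentence, 3, Vec::new()));

    // TextRank runs a PageRank-style iteration over a word co-occurrence graph.
    // The last argument optionally restricts candidates to the given POS tags.
    let textrank = TextRank::default();
    println!(
        "{:?}",
        textrank.extract_keywords(&jieba, sentence, 3, vec![String::from("ns"), String::from("n")])
    );
}
```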
To enable the optional extractors:

```toml
[dependencies]
jieba-rs = { version = "0.8", features = ["tfidf", "textrank"] }
```

Run the benchmarks with:

```bash
cargo bench --all-features
```
The following bindings and integrations are built on jieba-rs:

- `@node-rs/jieba`: NodeJS binding
- `jieba-php`: PHP binding
- `rjieba-py`: Python binding
- `cang-jie`: Chinese tokenizer for tantivy
- `tantivy-jieba`: an adapter that bridges between tantivy and jieba-rs
- `jieba-wasm`: WebAssembly binding
This work is released under the MIT license. A copy of the license is provided in the LICENSE file.