pyannote-rs

Pyannote audio diarization in Rust

Features

  • Process 1 hour of audio in under a minute on CPU.
  • Faster performance with DirectML on Windows and CoreML on macOS.
  • Accurate timestamps with Pyannote segmentation.
  • Identify speakers with wespeaker embeddings.

Install

cargo add pyannote-rs

Usage

See Building

Examples

See examples

How it works

pyannote-rs uses two models for speaker diarization:

  1. Segmentation: segmentation-3.0 identifies when speech occurs.
  2. Speaker Identification: wespeaker-voxceleb-resnet34-LM identifies who is speaking.
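
The segmentation stage's job ("when is someone speaking") can be illustrated with a toy energy-based detector. This is purely a stand-in for segmentation-3.0, not the crate's API; the 0.1 amplitude threshold and the single-segment simplification are illustrative assumptions:

```rust
/// A speech region in seconds.
struct Segment {
    start: f32,
    end: f32,
}

/// Toy stand-in for a segmentation model: treat everything between the
/// first and last sample whose amplitude exceeds a threshold as one
/// speech segment (a real model emits many segments per speaker).
fn find_speech(samples: &[f32], sample_rate: usize) -> Vec<Segment> {
    let first = samples.iter().position(|s| s.abs() > 0.1);
    let last = samples.iter().rposition(|s| s.abs() > 0.1);
    match (first, last) {
        (Some(f), Some(l)) => vec![Segment {
            start: f as f32 / sample_rate as f32,
            end: (l + 1) as f32 / sample_rate as f32,
        }],
        _ => vec![],
    }
}

fn main() {
    let sr = 16_000;
    // 1 s silence, 1 s "speech", 1 s silence.
    let mut samples = vec![0.0_f32; sr];
    samples.extend(std::iter::repeat(0.5).take(sr));
    samples.extend(std::iter::repeat(0.0).take(sr));

    let segs = find_speech(&samples, sr);
    println!("speech from {:.2}s to {:.2}s", segs[0].start, segs[0].end);
}
```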

Inference is powered by onnxruntime.

  • The segmentation model processes up to 10 s of audio at a time, using a sliding-window approach (iterating in chunks).
  • The embedding model processes filter banks (audio features) extracted with knf-rs.
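
The sliding-window iteration above can be sketched as follows. The 10 s window matches the model's stated input limit; the non-overlapping stride is a simplifying assumption (real diarization pipelines often overlap windows):

```rust
/// Split `samples` into consecutive windows of `window_secs` seconds.
/// The final window may be shorter than the rest.
fn sliding_windows(samples: &[i16], sample_rate: usize, window_secs: usize) -> Vec<&[i16]> {
    let window_len = sample_rate * window_secs;
    samples.chunks(window_len).collect()
}

fn main() {
    // 25 s of audio at 16 kHz splits into three windows: 10 s, 10 s, 5 s.
    let sample_rate = 16_000;
    let samples = vec![0i16; sample_rate * 25];
    let windows = sliding_windows(&samples, sample_rate, 10);
    println!("{} windows, last one {} samples", windows.len(), windows[2].len());
}
```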

Speaker comparison (e.g., determining if Alice spoke again) is done using cosine similarity.
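
The comparison step is plain cosine similarity between embedding vectors: close to 1 means likely the same speaker, close to 0 means different. A minimal sketch (the 3-dimensional toy vectors are illustrative; real wespeaker embeddings are much higher-dimensional):

```rust
/// Cosine similarity between two equal-length embedding vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    let alice = [0.9_f32, 0.1, 0.0];
    let alice_again = [0.8_f32, 0.2, 0.1];
    let bob = [0.0_f32, 0.1, 0.9];

    // Embeddings from the same speaker score higher than cross-speaker pairs.
    let same = cosine_similarity(&alice, &alice_again);
    let diff = cosine_similarity(&alice, &bob);
    println!("same speaker: {same:.3}, different speakers: {diff:.3}");
}
```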

Credits

Big thanks to pyannote-onnx and kaldi-native-fbank.
