Crates.io | ds-transcriber
lib.rs | ds-transcriber
version | 1.0.1
source | src
created_at | 2021-05-29 22:30:09.211857
updated_at | 2022-03-22 12:47:37.977641
description | A crate using DeepSpeech bindings to convert mic audio from speech to text
homepage |
repository | https://github.com/kawaki-san/ds-transcriber
max_upload_size |
id | 403663
size | 47,294
You can think of this crate as a wrapper for RustAudio's deepspeech-rs. It aims to provide transcription for microphone streams, with optional denoising (see the cargo features below).
This example shows the quickest way to get started with ds-transcriber. First, add ds-transcriber to your Cargo.toml:
ds-transcriber = "1"
Download the DeepSpeech native client and then add its directory to your LD_LIBRARY_PATH and LIBRARY_PATH environment variables.
Have a look at StreamSettings to fine-tune the transcription stream with parameters that better suit your environment.
// Load the DeepSpeech model, optionally pairing it with a scorer
let mut model = ds_transcriber::model::instance_model(
    "model_file.pbmm",
    Some("scorer_file.scorer"),
)?;
// Use the default stream settings (see StreamSettings for the tunable parameters)
let config = ds_transcriber::StreamSettings::default();
// Listen on the microphone and transcribe what was said
let i_said = ds_transcriber::transcribe(config, &mut model)?;
println!("I said: {}", i_said);
Rinse and repeat the last two lines.
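In a long-running program, that rinse-and-repeat is just a loop around the same two calls. A minimal sketch, reusing only the calls from the quickstart above; the loop itself and its lack of an exit condition are purely illustrative:

let mut model = ds_transcriber::model::instance_model(
    "model_file.pbmm",
    Some("scorer_file.scorer"),
)?;
loop {
    // Settings are rebuilt each pass, since `transcribe` takes them by value in the quickstart
    let config = ds_transcriber::StreamSettings::default();
    let i_said = ds_transcriber::transcribe(config, &mut model)?;
    println!("I said: {}", i_said);
}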
This crate provides an optional feature for denoising the audio stream, which may result in better transcription. It is disabled by default; to enable it, use either the denoise or full key in the crate's features list:
ds-transcriber = { version = "1", features = ["denoise"] } # or features = ["full"]
This crate contains an example to get you started. Clone the repository and run it. For help with the arguments, run:
cargo run --example transcribe -- -h
To start the example, run:
cargo run --example transcribe -- -m model_path -c deepspeech_native_client_dir
An optional (but recommended) argument for a language model (scorer) path can be provided with -s or --scorer.
This crate also re-exports the deepspeech and nnnoiseless crates (if the denoise feature is enabled). You can use these re-exports instead of also depending on them separately.
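For instance, a minimal sketch of pulling the re-exported crates in through ds-transcriber rather than listing them in your own Cargo.toml. The paths are an assumption that the re-exports sit at the crate root, so check the generated docs for the exact locations:

use ds_transcriber::deepspeech;   // assumed re-export path; verify against the crate docs
use ds_transcriber::nnnoiseless;  // only present when built with the denoise (or full) feature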
Downloading the DeepSpeech model alone will give you results that are passable at best (depending on your accent). If you want to significantly improve them, you might also want to download a language model/scorer. It helps in cases like "I read a book last night" vs "I red a book last night". Simply put the scorer in the same directory as your model; the crate will automatically set it when you create your ds_transcriber::model::DeepSpeechModel.
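If you would rather point at the scorer explicitly, the instance_model call from the quickstart takes it as an optional second argument. A minimal sketch, assuming that argument is a plain Option<&str> as the quickstart suggests; the file names are placeholders:

// Acoustic model only: passable results at best
let mut model = ds_transcriber::model::instance_model("model_file.pbmm", None)?;
// Acoustic model plus scorer: usually a noticeable improvement
let mut model = ds_transcriber::model::instance_model(
    "model_file.pbmm",
    Some("scorer_file.scorer"),
)?;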
If you want to train your own model, for the best results look into Mimic Recording Studio: it gives you prompts to read from and automatically prepares your audio files with their respective transcriptions for training, which you can then use for fine-tuning.
Always welcome! Open an issue or a PR if you have something in mind.