| Crates.io | speakstream |
| lib.rs | speakstream |
| version | 0.4.3 |
| created_at | 2025-06-18 03:45:43.635755+00 |
| updated_at | 2025-06-29 05:20:05.932619+00 |
| description | A streaming text-to-speech library using OpenAI's API. |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1716517 |
| size | 166,879 |
____ _ ____ _
/ ___| _ __ ___ __ _| | __/ ___|| |_ _ __ ___ __ _ _ __ ___
\___ \| '_ \ / _ \/ _` | |/ /\___ \| __| '__/ _ \/ _` | '_ ` _ \
___) | |_) | __/ (_| | < ___) | |_| | | __/ (_| | | | | | |
|____/| .__/ \___|\__,_|_|\_\|____/ \__|_| \___|\__,_|_| |_| |_|
|_|
A streaming text-to-speech library built on OpenAI's API. Feed tokens as they arrive and hear them spoken back in real time.
Audio playback is handled by the default-device-sink crate. Add SpeakStream to your Cargo.toml:
[dependencies]
speakstream = { path = "path/to/speakstream" }
Then build the library:
cargo build --release
Basic usage:
use speakstream::ss::SpeakStream;
use async_openai::types::Voice;
#[tokio::main]
async fn main() {
let mut speak = SpeakStream::new(Voice::Ash, 1.0, true, false);
speak.add_token("Hello, world!");
speak.complete_sentence();
}
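The add_token/complete_sentence pair suggests the library buffers incoming tokens until a sentence boundary before synthesizing speech. As a rough, crate-independent sketch of that buffering idea (the SentenceBuffer type and its flush logic here are illustrative assumptions, not part of speakstream's API):

```rust
/// Hypothetical sketch: accumulate streamed tokens and emit complete
/// sentences, the way a streaming TTS front end might batch text
/// before sending it to a synthesis API.
struct SentenceBuffer {
    pending: String,
    sentences: Vec<String>,
}

impl SentenceBuffer {
    fn new() -> Self {
        Self { pending: String::new(), sentences: Vec::new() }
    }

    /// Append a token; flush whenever a sentence terminator appears.
    fn add_token(&mut self, token: &str) {
        self.pending.push_str(token);
        while let Some(pos) = self.pending.find(|c: char| matches!(c, '.' | '!' | '?')) {
            // Split just past the terminator; the head is a full sentence.
            let rest = self.pending.split_off(pos + 1);
            let sentence = std::mem::replace(&mut self.pending, rest);
            self.sentences.push(sentence.trim().to_string());
        }
    }

    /// Force out whatever is buffered, complete sentence or not.
    fn complete_sentence(&mut self) {
        if !self.pending.trim().is_empty() {
            let leftover = std::mem::take(&mut self.pending);
            self.sentences.push(leftover.trim().to_string());
        }
    }
}

fn main() {
    let mut buf = SentenceBuffer::new();
    // Simulate tokens arriving one at a time from an LLM stream.
    for token in ["Hello,", " world!", " How are", " you"] {
        buf.add_token(token);
    }
    buf.complete_sentence();
    println!("{:?}", buf.sentences); // ["Hello, world!", "How are you"]
}
```

Batching at sentence boundaries keeps latency low (speech starts after the first sentence) without sending half-formed fragments to the synthesis API.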
Audio ducking can be enabled when creating a new stream or toggled at runtime:
let mut speak = SpeakStream::new(Voice::Ash, 1.0, true, true);
assert!(speak.is_audio_ducking_enabled());
speak.set_audio_ducking_enabled(false);
// manually start ducking when the user begins talking
speak.start_audio_ducking();
// ...record microphone input...
speak.stop_audio_ducking();
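Conceptually, audio ducking just attenuates TTS playback while the microphone is live, so recorded speech isn't drowned out. A minimal, crate-independent sketch of gain-based ducking (the Ducker type and the 0.2 duck gain are illustrative assumptions, not speakstream internals):

```rust
/// Hypothetical sketch: scale outgoing playback samples by a reduced
/// gain while ducking is active, and by 1.0 otherwise.
struct Ducker {
    ducking: bool,
    duck_gain: f32,
}

impl Ducker {
    fn new(duck_gain: f32) -> Self {
        Self { ducking: false, duck_gain }
    }

    fn start(&mut self) { self.ducking = true; }
    fn stop(&mut self) { self.ducking = false; }

    /// Apply the current gain to a buffer of playback samples in place.
    fn process(&self, samples: &mut [f32]) {
        let gain = if self.ducking { self.duck_gain } else { 1.0 };
        for s in samples.iter_mut() {
            *s *= gain;
        }
    }
}

fn main() {
    let mut ducker = Ducker::new(0.2);
    let mut frame = [0.5_f32, -0.5, 1.0];

    ducker.start();             // user begins talking
    ducker.process(&mut frame); // playback attenuated to 20 %
    ducker.stop();              // user stopped; full volume again

    println!("{:?}", frame);
}
```

Real implementations usually ramp the gain over a few milliseconds instead of switching it instantly, to avoid audible clicks.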
Licensed under the MIT license.