| Crates.io | async-openai-wasm |
| lib.rs | async-openai-wasm |
| version | 0.31.2 |
| created_at | 2024-04-16 11:49:29.224579+00 |
| updated_at | 2025-12-02 07:40:14.090231+00 |
| description | Rust library for OpenAI on WASM |
| homepage | https://github.com/ifsheldon/async-openai-wasm |
| repository | https://github.com/ifsheldon/async-openai-wasm |
| max_upload_size | |
| id | 1210217 |
| size | 1,179,182 |
Async Rust library for OpenAI on WASM
async-openai-wasm is a FORK of async-openai that supports WASM targets by targeting wasm32-unknown-unknown.
That means >99% of the codebase should be attributed to the original project. Synchronization with the original
project is, and will be, done manually whenever async-openai releases a new version. Versions are kept in sync
with async-openai releases: when async-openai releases x.y.z, async-openai-wasm also releases
an x.y.z version.
async-openai-wasm is an unofficial Rust library for OpenAI, based on the OpenAI OpenAPI spec. It implements all APIs from the spec:
| What | APIs | Crate Feature Flags |
|---|---|---|
| Responses API | Responses, Conversations, Streaming events | responses |
| Webhooks | Webhook Events | webhook |
| Platform APIs | Audio, Audio Streaming, Videos, Images, Image Streaming, Embeddings, Evals, Fine-tuning, Graders, Batch, Files, Uploads, Models, Moderations | audio, video, image, embedding, evals, finetuning, grader, batch, file, upload, model, moderation |
| Vector stores | Vector stores, Vector store files, Vector store file batches | vectorstore |
| ChatKit (Beta) | ChatKit | chatkit |
| Containers | Containers, Container Files | container |
| Realtime | Realtime Calls, Client secrets, Client events, Server events | realtime |
| Chat Completions | Chat Completions, Streaming | chat-completion |
| Assistants (Beta) | Assistants, Threads, Messages, Runs, Run steps, Streaming | assistant |
| Administration | Admin API Keys, Invites, Users, Groups, Roles, Role assignments, Projects, Project users, Project groups, Project service accounts, Project API keys, Project rate limits, Audit logs, Usage, Certificates | administration |
| Legacy | Completions | completions |
The features that make async-openai unique carry over to this fork. More on async-openai-wasm:
* Reasoning model support: see examples/reasoning.

Note on Azure OpenAI Service (AOS): async-openai-wasm primarily implements the OpenAI spec and doesn't try to
maintain parity with the AOS spec, just like async-openai.

Differences from [async-openai](https://github.com/64bit/async-openai):

```diff
+ * WASM support
+ * WASM examples
+ * Realtime API: does not bundle a specific WS implementation. You need to convert a client event into a WS message yourself, which is just `your_ws_impl::Message::Text(some_client_event.into_text())` (see the sketch after this list)
+ * Broader support for OpenAI-compatible endpoints
+ * Reasoning model support
- * Tokio
- * Non-WASM examples: please refer to the original project [async-openai](https://github.com/64bit/async-openai/)
- * Builtin backoff retries: removed due to [this issue](https://github.com/ihrwein/backoff/issues/61); use `backon` with its `gloo-timers-sleep` feature instead
- * File saving: `wasm32-unknown-unknown` on browsers doesn't have access to the filesystem
```
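For the Realtime API conversion mentioned above, a minimal sketch: it assumes `gloo-net` as the WebSocket implementation (any WS library works the same way) and that the client-event type lives at `types::realtime::ClientEvent`; check the realtime examples for the exact API.

```rust
use async_openai_wasm::types::realtime::ClientEvent;
use gloo_net::websocket::Message;

// Wrap a realtime client event in a WebSocket text message.
// `gloo-net` is one choice of WS implementation; swap in your own.
fn to_ws_message(event: ClientEvent) -> Message {
    Message::Text(event.into_text())
}
```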
The library reads the API key from the environment variable OPENAI_API_KEY.
```shell
# On macOS/Linux
export OPENAI_API_KEY='sk-...'

# On Windows PowerShell
$Env:OPENAI_API_KEY='sk-...'
```
Other supported official environment variables are OPENAI_ADMIN_KEY, OPENAI_BASE_URL, OPENAI_ORG_ID, and OPENAI_PROJECT_ID.
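If reading keys from the environment doesn't fit (for example, in a browser there is no environment to read), the key can be set explicitly. A minimal sketch using `OpenAIConfig`'s builder methods, with a placeholder key:

```rust
use async_openai_wasm::{config::OpenAIConfig, Client};

// Provide the API key explicitly instead of via OPENAI_API_KEY.
let config = OpenAIConfig::new().with_api_key("sk-...");
let client = Client::with_config(config);
```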
Visit the examples directory to see how to use async-openai, and the WASM examples in async-openai-wasm.
Visit docs.rs/async-openai for docs.
```rust
use async_openai_wasm::{
    types::images::{CreateImageRequestArgs, ImageResponseFormat, ImageSize},
    Client,
};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Create client; reads the OPENAI_API_KEY environment variable for the API key.
    let client = Client::new();

    let request = CreateImageRequestArgs::default()
        .prompt("cats on sofa and carpet in living room")
        .n(2)
        .response_format(ImageResponseFormat::Url)
        .size(ImageSize::S256x256)
        .user("async-openai-wasm")
        .build()?;

    let response = client.images().generate(request).await?;

    // Download and save images to the ./data directory.
    // Each url is downloaded and saved in a dedicated Tokio task.
    // The directory is created if it doesn't exist.
    let paths = response.save("./data").await?;

    paths
        .iter()
        .for_each(|path| println!("Image file path: {}", path.display()));

    Ok(())
}
```
The webhook feature includes event types, signature verification, and building webhook events from payloads.
Enable methods whose inputs and outputs are generic with the byot feature. It creates a new method with the same name and a _byot suffix.

byot requires trait bounds:
* Input (the fn input parameters) needs to implement the serde::Serialize or std::fmt::Display trait.
* Output (the fn output type) needs to implement the serde::de::DeserializeOwned trait.

For example, to use serde_json::Value as request and response type:
```rust
use serde_json::{json, Value};

let response: Value = client
    .chat()
    .create_byot(json!({
        "messages": [
            {
                "role": "developer",
                "content": "You are a helpful assistant"
            },
            {
                "role": "user",
                "content": "What do you think about life?"
            }
        ],
        "model": "gpt-4o",
        "store": false
    }))
    .await?;
```
This can be useful in many scenarios, for example to extend existing types in this crate with new fields via serde (for example with #[serde(flatten)]). Visit the examples/bring-your-own-type directory to learn more.
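As an illustration of the `#[serde(flatten)]` case, a hedged sketch: `ChatResponseWithExtras` is a hypothetical type (not part of this crate) that keeps a few typed fields while capturing any provider-specific extras.

```rust
use serde::Deserialize;
use serde_json::Value;
use std::collections::HashMap;

// Hypothetical response type: typed fields we care about, plus a
// catch-all map for fields an OpenAI-compatible provider may add.
#[derive(Debug, Deserialize)]
struct ChatResponseWithExtras {
    id: String,
    model: String,
    #[serde(flatten)]
    extra: HashMap<String, Value>,
}
```

Since `ChatResponseWithExtras` implements `serde::de::DeserializeOwned`, it can be used directly as the output type of a `_byot` method.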
With byot, you can pass request types by reference:

```rust
let response: Response = client
    .responses()
    .create_byot(&request)
    .await?;
```

Visit the examples/borrow-instead-of-move directory to learn more.
To use only the Rust types from the crate, disable default features and enable the types feature flag.
There are granular feature flags like response-types, chat-completion-types, etc.
These granular type flags are enabled automatically when the corresponding API feature is enabled; for example, responses will enable response-types.
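A minimal sketch of types-only usage. It assumes the chat types live under `types::chat` (mirroring the `types::images` path used above) and that `serde_json` is a dependency:

```rust
use async_openai_wasm::types::chat::{
    ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs,
};

// Build and serialize a request without using the HTTP client at all.
let request = CreateChatCompletionRequestArgs::default()
    .model("gpt-4o")
    .messages([ChatCompletionRequestUserMessageArgs::default()
        .content("Hello!")
        .build()?
        .into()])
    .build()?;
let body = serde_json::to_string(&request)?;
```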
Certain individual APIs need additional query or header parameters; these can be provided by chaining .query(), .header(), or .headers() on the API group.
For example:
```rust
client
    .chat()
    // query can be a struct or a map too.
    .query(&[("limit", "10")])?
    // header for demo
    .header("key", "value")?
    .list()
    .await?
```
Use Config, OpenAIConfig, etc. for configuring the URL, headers, or query parameters globally for all requests.
Even though the scope of the crate is the official OpenAI API, it is configurable enough to work with compatible providers.
In addition to .query(), .header(), and .headers(), the path for an individual request can be changed with the .path() method on the API group.
For example:
```rust
client
    .chat()
    .path("/v1/messages")?
    .create(request)
    .await?
```
This allows you to use the same code (say, a fn) to call APIs on different OpenAI-compatible providers.
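The base URL can also be changed globally, the same way as the API key earlier; a sketch with a placeholder endpoint:

```rust
use async_openai_wasm::{config::OpenAIConfig, Client};

// Send every request to an OpenAI-compatible provider instead.
// The URL below is a placeholder, not a real endpoint.
let config = OpenAIConfig::new().with_api_base("https://example-provider.invalid/v1");
let client = Client::with_config(config);
```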
For any struct that implements the Config trait, wrap it in a smart pointer and cast the pointer to a dyn Config
trait object, then create a client with the Box- or Arc-wrapped configuration.
For example:
```rust
use async_openai_wasm::{Client, config::{Config, OpenAIConfig}};

// Use `Box` or `std::sync::Arc` to wrap the config.
let config = Box::new(OpenAIConfig::default()) as Box<dyn Config>;

// Create the client.
let client: Client<Box<dyn Config>> = Client::with_config(config);

// A function can now accept a `&Client<Box<dyn Config>>` parameter,
// which can invoke any OpenAI-compatible API.
fn chat_completion(client: &Client<Box<dyn Config>>) {
    todo!()
}
```
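A possible usage, sketched: two clients with different configurations share the type `Client<Box<dyn Config>>`, so both can be passed to the same function (the base URL below is a placeholder).

```rust
let openai = Client::with_config(Box::new(OpenAIConfig::default()) as Box<dyn Config>);
let compatible = Client::with_config(
    Box::new(OpenAIConfig::new().with_api_base("https://example-provider.invalid/v1"))
        as Box<dyn Config>,
);

// Same function, different providers.
chat_completion(&openai);
chat_completion(&compatible);
```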
This repo will only accept issues and PRs related to WASM support. For other issues and PRs, please visit the original project async-openai.
Why async-openai-wasm? Because I wanted to develop and release a crate that depends on the wasm feature in the experiments branch
of async-openai, but the pace of stabilizing the wasm feature was different
from what I expected.
The additional modifications are licensed under the MIT license. The original project is also licensed under the MIT license.