| Crates.io | openai-agents-rust |
| lib.rs | openai-agents-rust |
| version | 0.1.0 |
| created_at | 2025-08-21 09:29:35.703258+00 |
| updated_at | 2025-08-21 09:29:35.703258+00 |
| description | Harmony-aligned, OpenAI-compatible agent orchestration in Rust with tools, realtime, and voice. |
| homepage | https://github.com/MaxParisotto/openai-agents-rust |
| repository | https://github.com/MaxParisotto/openai-agents-rust |
| max_upload_size | |
| id | 1804495 |
| size | 188,826 |
Harmony-aligned, OpenAI-compatible agent orchestration in Rust — no mocks, no automatic fallbacks.
This crate provides a library and CLI to build agents that call OpenAI-compatible models (cloud or OSS) with robust tool orchestration, optional realtime/voice support, and an environment-first configuration model. It aims for practical parity with the Python SDK while staying idiomatic in Rust.
Quick start: set the environment, then build and run.

```sh
# Local OSS (e.g., vLLM / OpenAI-compatible)
OPENAI_BASE_URL=http://localhost:8000/v1
OPENAI_MODEL=openai/gpt-oss-120b
# Optional if your server requires auth
# OPENAI_API_KEY=sk-...
# Logging
RUST_LOG=info
```

```sh
cargo test -q
cargo run
```
By default, the MCP server binds to http://127.0.0.1:8080, and the runtime registers its agents and tools at startup.
The config loader supports both file-based and environment-based configuration.
The config file path is taken from OPENAI_AGENTS_CONFIG (default: ./config.yaml). The loader also reads variables with the prefix OPENAI_AGENTS__ to override file keys. Provider-style variables:

- OPENAI_BASE_URL (required)
- OPENAI_MODEL (required)
- OPENAI_API_KEY (optional)
- RUST_LOG (optional)

Schema (src/config/schema.rs):
- api_key: String (optional)
- model: String (required)
- base_url: String (required)
- log_level: String
- plugins_path: PathBuf (defaults to ~/.config/openai_agents/plugins)
- max_concurrent_requests: Option<usize>

Important policy: base_url is required, and there are no provider defaults baked into the loader. If a value is missing, it stays empty and you'll see a clear error where it is used.
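Read as Rust, the field list above corresponds roughly to the struct below. This is a sketch reconstructed from the documented schema, not the literal contents of src/config/schema.rs; the serde derive and the exact optionality are assumptions.

```rust
use std::path::PathBuf;
use serde::Deserialize;

/// Sketch of the documented config schema; the real struct in
/// src/config/schema.rs may differ in derives and defaults.
#[derive(Debug, Deserialize)]
pub struct Config {
    pub api_key: Option<String>,               // optional
    pub model: String,                         // required
    pub base_url: String,                      // required, no provider default
    pub log_level: String,
    pub plugins_path: PathBuf,                 // defaults to ~/.config/openai_agents/plugins
    pub max_concurrent_requests: Option<usize>,
}
```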
Environment overrides:

- OPENAI_AGENTS__BASE_URL, OPENAI_AGENTS__MODEL, etc. map onto file keys when using a config file.
- OPENAI_BASE_URL, OPENAI_MODEL, OPENAI_API_KEY, RUST_LOG always overlay the active config at runtime.

Model backends:

- Chat Completions (/chat/completions): src/model/openai_chat.rs, with support for tool_calls and the legacy function_call.
- Responses (/responses): src/model/gpt_oss_responses.rs
- LiteLLM: src/model/litellm.rs (for aggregating providers behind a single base_url)

Common env toggles for OpenAI-compatible servers (especially vLLM); a payload sketch follows the list:

- VLLM_MIN_PAYLOAD (bool): minimal payload (model + messages only).
- VLLM_FORCE_FUNCTIONS (bool): send legacy functions/function_call instead of tools.
- VLLM_DISABLE_PARALLEL_TOOL_CALLS (bool): don't send parallel_tool_calls: true.
- VLLM_TOOL_CHOICE (string): one of auto, none, object:auto, object:none.
- VLLM_DISABLE_TOOLS_IN_LLM (bool): don't pass tool specs to the LLM from the runner.
- VLLM_DEBUG_PAYLOAD (bool): pretty-print request JSON at debug level.
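To make the toggles concrete, here is a rough sketch of how they plausibly shape the request body. It mirrors the standard OpenAI-compatible wire format; build_payload is an illustrative helper, not the crate's actual builder in src/model/openai_chat.rs.

```rust
use serde_json::{json, Value};

/// Illustrative only: how toggles like VLLM_MIN_PAYLOAD and
/// VLLM_DISABLE_PARALLEL_TOOL_CALLS might shape the request JSON.
fn build_payload(model: &str, messages: Value, tools: Option<Value>, min_payload: bool) -> Value {
    let mut body = json!({ "model": model, "messages": messages });
    if min_payload {
        return body; // VLLM_MIN_PAYLOAD: send model + messages only
    }
    if let Some(tools) = tools {
        body["tools"] = tools;
        body["tool_choice"] = json!("auto");       // cf. VLLM_TOOL_CHOICE
        body["parallel_tool_calls"] = json!(true); // omitted when VLLM_DISABLE_PARALLEL_TOOL_CALLS is set
    }
    body
}
```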
The Runner orchestrates tool execution and model turns:

- Turn decisions: RunLlmAgain, StopOnFirstTool, StopAtTools([..]), or a custom decider.
- Emits tool messages with tool_call_id when available.

Tools live under src/tools/ with a simple Tool trait and a registry for discovery. Tools can optionally expose an OpenAI tool spec so the model can call them via tool_calls, as in the sketch below.
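For illustration only, a tool exposing an OpenAI-style spec might look like the following. The trait shape here is hypothetical; the real Tool trait in src/tools/ may be async and structured differently.

```rust
use serde_json::{json, Value};

/// Hypothetical shape of the Tool trait; the real definition in
/// src/tools/ may use async methods and richer error types.
trait Tool {
    fn name(&self) -> &str;
    /// Optional OpenAI tool spec so the model can call this via tool_calls.
    fn openai_spec(&self) -> Option<Value> { None }
    fn call(&self, args: Value) -> Result<Value, String>;
}

struct Echo;

impl Tool for Echo {
    fn name(&self) -> &str { "echo" }
    fn openai_spec(&self) -> Option<Value> {
        Some(json!({
            "type": "function",
            "function": {
                "name": "echo",
                "description": "Echo the input text back",
                "parameters": {
                    "type": "object",
                    "properties": { "text": { "type": "string" } },
                    "required": ["text"]
                }
            }
        }))
    }
    fn call(&self, args: Value) -> Result<Value, String> {
        Ok(json!({ "text": args["text"] }))
    }
}
```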
src/model/openai_realtime.rs and src/realtime.rs provide building blocks for SSE streaming flows. src/voice/ includes STT (audio/transcriptions) and TTS (audio/speech) clients, with a pipeline for simple voice interactions.

Plugins are dynamically loadable and initialized at runtime:
- Loader: src/plugin/loader.rs and src/plugin/mod.rs.
- Default directory: ~/.config/openai_agents/plugins (configurable via plugins_path).
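As a heavily hedged sketch of what dynamic loading can look like in Rust: the use of the libloading crate and the plugin_init entry-point name are assumptions for illustration, not the crate's documented plugin ABI.

```rust
use libloading::{Library, Symbol};
use std::path::Path;

/// Hypothetical loader sketch: open a shared library from the plugins
/// directory and call an assumed `plugin_init` entry point.
fn load_plugin(path: &Path) -> Result<(), Box<dyn std::error::Error>> {
    unsafe {
        let lib = Library::new(path)?;
        let init: Symbol<unsafe extern "C" fn()> = lib.get(b"plugin_init")?;
        init();
        // Keep the library alive for the plugin's lifetime;
        // dropping `lib` would unload it.
        std::mem::forget(lib);
    }
    Ok(())
}
```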
What's implemented:

- tool_calls and legacy function_call; optional tool-choice; parallel tool-calls control.

Partial/roadmap:
Build, test, and run locally:

```sh
cargo test -q
cargo run
```
Optional config file example (config.yaml):

```yaml
base_url: "http://localhost:8000/v1"
model: "openai/gpt-oss-120b"
log_level: "info"
plugins_path: "~/.config/openai_agents/plugins"
```
You can override any file key with OPENAI_AGENTS__<KEY> (the double underscore maps to nested keys) or use the global provider-style variables documented above.
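For instance, an override-style invocation might look like this; the values are placeholders, and the key names follow the mapping described above:

```sh
OPENAI_AGENTS__BASE_URL=http://localhost:8000/v1 \
OPENAI_AGENTS__MODEL=openai/gpt-oss-120b \
cargo run
```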
Project layout:

```text
src/
  lib.rs               # crate exports
  main.rs              # CLI: loads config, starts MCP server, runs agents
  agent/               # Agent traits, runtime, runner
  model/               # Models: OpenAI Chat, OSS Responses, LiteLLM, Realtime
  tools/               # Tool trait + registry + function tools
  plugin/              # Plugin loader/registry
  config/              # Config schema + loader (env-first)
  realtime/, voice/    # Streaming + STT/TTS
```
MIT — see LICENSE for details. Contributions welcome.
See also: CONTRIBUTING.md, SECURITY.md, and CHANGELOG.md.