| Crates.io | objectiveai-api |
| lib.rs | objectiveai-api |
| version | 0.1.3 |
| created_at | 2026-01-23 19:43:36.280409+00 |
| updated_at | 2026-01-23 19:43:36.280409+00 |
| description | ObjectiveAI API Server |
| homepage | https://objective-ai.io |
| repository | https://github.com/ObjectiveAI/objectiveai |
| max_upload_size | |
| id | 2065358 |
| size | 479,535 |
Score everything. Rank everything. Simulate anyone.
A self-hostable API server for ObjectiveAI - run the full ObjectiveAI platform locally or use the library to build your own custom server.
Website | API | GitHub | Discord
This crate provides two ways to use the ObjectiveAI API: run it as a standalone server, or use it as a library to build your own custom server.
```bash
# Clone the repository
git clone https://github.com/ObjectiveAI/objectiveai
cd objectiveai/objectiveai-api

# Create a .env file
cat > .env << EOF
OPENROUTER_API_KEY=sk-or-...
OBJECTIVEAI_API_KEY=oai-... # Optional
EOF

# Run the server
cargo run --release
```
The server starts on http://localhost:5000 by default.
| Variable | Default | Description |
|---|---|---|
| `OPENROUTER_API_KEY` | (required) | Your OpenRouter API key |
| `OBJECTIVEAI_API_KEY` | (optional) | ObjectiveAI API key for caching and remote Functions |
| `OBJECTIVEAI_API_BASE` | `https://api.objective-ai.io` | ObjectiveAI API base URL |
| `OPENROUTER_API_BASE` | `https://openrouter.ai/api/v1` | OpenRouter API base URL |
| `ADDRESS` | `0.0.0.0` | Server bind address |
| `PORT` | `5000` | Server port |
| `USER_AGENT` | (optional) | User agent for upstream requests |
| `HTTP_REFERER` | (optional) | HTTP referer for upstream requests |
| `X_TITLE` | (optional) | `X-Title` header for upstream requests |
| Variable | Default | Description |
|---|---|---|
| `CHAT_COMPLETIONS_BACKOFF_INITIAL_INTERVAL` | `100` | Initial retry interval (ms) |
| `CHAT_COMPLETIONS_BACKOFF_MAX_INTERVAL` | `1000` | Maximum retry interval (ms) |
| `CHAT_COMPLETIONS_BACKOFF_MAX_ELAPSED_TIME` | `40000` | Maximum total retry time (ms) |
| `CHAT_COMPLETIONS_BACKOFF_MULTIPLIER` | `1.5` | Backoff multiplier |
| `CHAT_COMPLETIONS_BACKOFF_RANDOMIZATION_FACTOR` | `0.5` | Randomization factor |
Add to your `Cargo.toml`:

```toml
[dependencies]
objectiveai-api = "0.1.3"
```
```rust
use objectiveai_api::{chat, ctx, vector, functions, ensemble, ensemble_llm};
use std::sync::Arc;

// Create your HTTP client
let http_client = reqwest::Client::new();

// Create the ObjectiveAI HTTP client
let objectiveai_client = Arc::new(objectiveai::HttpClient::new(
    http_client.clone(),
    Some("https://api.objective-ai.io".to_string()),
    Some("apk...".to_string()),
    None, None, None,
));

// Build the component stack
let ensemble_llm_fetcher = Arc::new(
    ensemble_llm::fetcher::CachingFetcher::new(Arc::new(
        ensemble_llm::fetcher::ObjectiveAiFetcher::new(objectiveai_client.clone()),
    )),
);

let chat_client = Arc::new(chat::completions::Client::new(
    ensemble_llm_fetcher.clone(),
    Arc::new(chat::completions::usage_handler::LogUsageHandler),
    // ... upstream client configuration
));

// Use in your own Axum/Actix/Warp routes
```
| Module | Description |
|---|---|
| `auth` | Authentication and API key management |
| `chat` | Chat completions with Ensemble LLMs |
| `vector` | Vector completions for scoring and ranking |
| `functions` | Function execution and Profile management |
| `ensemble` | Ensemble management and caching |
| `ensemble_llm` | Ensemble LLM management and caching |
| `ctx` | Request context for dependency injection |
| `error` | Error response handling |
| `util` | Utilities for streaming and indexing |
```
Request
  │
  ▼
┌─────────────────────────────────────────────────┐
│ Functions Client                                │
│ - Executes Function pipelines                   │
│ - Handles Profile weights                       │
└─────────────────────────────────────────────────┘
  │
  ▼
┌─────────────────────────────────────────────────┐
│ Vector Completions Client                       │
│ - Runs ensemble voting                          │
│ - Combines votes into scores                    │
└─────────────────────────────────────────────────┘
  │
  ▼
┌─────────────────────────────────────────────────┐
│ Chat Completions Client                         │
│ - Sends prompts to individual LLMs              │
│ - Handles retries and backoff                   │
└─────────────────────────────────────────────────┘
  │
  ▼
┌─────────────────────────────────────────────────┐
│ Upstream Client (OpenRouter)                    │
│ - Actual LLM API calls                          │
└─────────────────────────────────────────────────┘
```
Each layer uses traits for dependency injection, so any component in the stack can be replaced with a custom implementation.
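As a simplified sketch of that trait-based layering (the crate's actual traits are async and have different names and signatures; `ChatCompletions`, `VectorCompletions`, and `MockChat` below are hypothetical), each layer depends on a trait object rather than a concrete client, so callers can inject caches, mocks, or alternative upstreams:

```rust
use std::sync::Arc;

// Hypothetical trait standing in for the chat-completions layer.
trait ChatCompletions {
    fn complete(&self, prompt: &str) -> String;
}

// A "vector completions" layer that depends on the trait, not on a
// concrete upstream client.
struct VectorCompletions {
    chat: Arc<dyn ChatCompletions>,
}

impl VectorCompletions {
    // Ask each ensemble member and collect its vote.
    fn vote(&self, prompt: &str, members: usize) -> Vec<String> {
        (0..members).map(|_| self.chat.complete(prompt)).collect()
    }
}

// A mock implementation injected in place of a real upstream client,
// e.g. for tests.
struct MockChat;

impl ChatCompletions for MockChat {
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

fn main() {
    let vector = VectorCompletions { chat: Arc::new(MockChat) };
    let votes = vector.vote("score this", 3);
    println!("{votes:?}"); // three identical mock votes
}
```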
- `POST /chat/completions` - Create chat completion
- `POST /vector/completions` - Create vector completion
- `POST /vector/completions/{id}` - Get completion votes
- `POST /vector/completions/cache` - Get cached vote
- `GET /functions` - List functions
- `GET /functions/{owner}/{repo}` - Get function
- `POST /functions/{owner}/{repo}` - Execute remote function with inline profile
- `GET /functions/profiles` - List profiles
- `GET /functions/profiles/{owner}/{repo}` - Get profile
- `POST /functions/{owner}/{repo}/profiles/{owner}/{repo}` - Execute remote function with remote profile
- `POST /functions/profiles/compute` - Train a profile
- `GET /ensembles` - List ensembles
- `GET /ensembles/{id}` - Get ensemble

License: MIT