| Crates.io | magicapi-ai-gateway |
| lib.rs | magicapi-ai-gateway |
| version | 1.0.0 |
| created_at | 2024-11-07 11:44:53.97391+00 |
| updated_at | 2025-03-06 08:37:52.651964+00 |
| description | [DEPRECATED] This package has been renamed to 'noveum-ai-gateway'. Please use the new package for all future development. A high-performance AI Gateway proxy for routing requests to various AI providers, offering seamless integration and management of multiple AI providers. |
| homepage | https://noveum.ai |
| repository | https://github.com/noveum/ai-gateway |
| max_upload_size | |
| id | 1439692 |
| size | 386,884 |
IMPORTANT: This package (magicapi-ai-gateway) has been renamed and moved to noveum-ai-gateway. Please use the new package for all future development. This package is no longer maintained and will not receive updates. All new features, bug fixes, and improvements will be made to the new package.
🚀 The world's fastest AI Gateway proxy, written in Rust and optimized for maximum performance. This high-performance API gateway routes requests to various AI providers (OpenAI, Anthropic, GROQ, Fireworks, Together, AWS Bedrock) with streaming support, making it perfect for developers who need reliable and blazing-fast AI API access.
This package has been renamed from magicapi-ai-gateway to noveum-ai-gateway. Please update your dependencies to use the new package name:
# Old package (deprecated)
# cargo install magicapi-ai-gateway
# New package (use this instead)
cargo install noveum-ai-gateway
For the latest documentation and updates, please visit:
Quick Start • Documentation • Monitoring • Docker • Contributing
You can install Noveum Gateway using one of these methods:
curl https://sh.rustup.rs -sSf | sh && cargo install noveum-ai-gateway && noveum-ai-gateway
cargo install noveum-ai-gateway
After installation, you can start the gateway by running:
noveum-ai-gateway
git clone https://github.com/noveum/ai-gateway
cd ai-gateway
cargo build --release
cargo run --release
The server will start on http://127.0.0.1:3000 by default.
You can configure the gateway using environment variables:
# Basic configuration
export RUST_LOG=info
# Start the gateway
noveum-ai-gateway
# Or with custom port
PORT=8080 noveum-ai-gateway
To make requests through the gateway, use the /v1/* endpoint and specify the provider using the x-provider header.
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: bedrock" \
-H "x-aws-access-key-id: YOUR_ACCESS_KEY" \
-H "x-aws-secret-access-key: YOUR_SECRET_KEY" \
-H "x-aws-region: us-east-1" \
-d '{
"model": "anthropic.claude-3-sonnet-20240229-v1:0",
"messages": [{"role": "user", "content": "Hello!"}]
}'
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: openai" \
-H "Authorization: Bearer your-openai-api-key" \
-d '{
"model": "gpt-4",
"messages": [{"role": "user", "content": "Hello!"}]
}'
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: groq" \
-H "Authorization: Bearer your-groq-api-key" \
-d '{
"model": "llama2-70b-4096",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": true,
"max_tokens": 300
}'
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: anthropic" \
-H "Authorization: Bearer your-anthropic-api-key" \
-d '{
"model": "claude-3-5-sonnet-20241022",
"messages": [{"role": "user", "content": "Write a poem"}],
"stream": true,
"max_tokens": 1024
}'
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: fireworks" \
-H "Authorization: Bearer your-fireworks-api-key" \
-d '{
"model": "accounts/fireworks/models/llama-v3p1-8b-instruct",
"messages": [{"role": "user", "content": "Write a poem"}],
"stream": true,
"max_tokens": 300,
"temperature": 0.6,
"top_p": 1,
"top_k": 40
}'
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: together" \
-H "Authorization: Bearer your-together-api-key" \
-d '{
"model": "meta-llama/Llama-2-7b-chat-hf",
"messages": [{"role": "user", "content": "Write a poem"}],
"stream": true,
"max_tokens": 512,
"temperature": 0.7,
"top_p": 0.7,
"top_k": 50,
"repetition_penalty": 1
}'
The Noveum AI Gateway is designed to work seamlessly with popular AI SDKs. You can use the official OpenAI SDK to interact with any supported provider by simply configuring the baseURL and adding the appropriate provider header.
import OpenAI from 'openai';
// Configure the SDK to use Noveum Gateway
const openai = new OpenAI({
apiKey: process.env.PROVIDER_API_KEY, // Use any provider's API key
baseURL: "http://localhost:3000/v1/", // Point to the gateway
defaultHeaders: {
"x-provider": "groq", // Specify the provider you want to use
},
});
// Make requests as usual
const chatCompletion = await openai.chat.completions.create({
messages: [
{ role: "system", content: "Write a poem" },
{ role: "user", content: "" }
],
model: "llama-3.1-8b-instant",
temperature: 1,
max_tokens: 100,
top_p: 1,
stream: false,
});
You can easily switch between providers by changing the x-provider header and API key:
// For OpenAI
const openaiClient = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
baseURL: "http://localhost:3000/v1/",
defaultHeaders: { "x-provider": "openai" },
});
// For AWS Bedrock
const bedrockClient = new OpenAI({
apiKey: process.env.AWS_ACCESS_KEY_ID, // Use AWS access key
baseURL: "http://localhost:3000/v1/",
defaultHeaders: {
"x-provider": "bedrock",
"x-aws-access-key-id": process.env.AWS_ACCESS_KEY_ID,
"x-aws-secret-access-key": process.env.AWS_SECRET_ACCESS_KEY,
"x-aws-region": process.env.AWS_REGION || "us-east-1"
},
});
// For Anthropic
const anthropicClient = new OpenAI({
apiKey: process.env.ANTHROPIC_API_KEY,
baseURL: "http://localhost:3000/v1/",
defaultHeaders: { "x-provider": "anthropic" },
});
// For GROQ
const groqClient = new OpenAI({
apiKey: process.env.GROQ_API_KEY,
baseURL: "http://localhost:3000/v1/",
defaultHeaders: { "x-provider": "groq" },
});
// For Fireworks
const fireworksClient = new OpenAI({
apiKey: process.env.FIREWORKS_API_KEY,
baseURL: "http://localhost:3000/v1/",
defaultHeaders: { "x-provider": "fireworks" },
});
// For Together AI
const togetherClient = new OpenAI({
apiKey: process.env.TOGETHER_API_KEY,
baseURL: "http://localhost:3000/v1/",
defaultHeaders: { "x-provider": "together" },
});
The gateway automatically handles the necessary transformations to ensure compatibility with each provider's API format while maintaining the familiar OpenAI SDK interface.
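Since each client above differs only in its API key and its x-provider header (plus the AWS headers for Bedrock), the per-provider setup can be factored into a small helper. This is an illustrative sketch, not part of the gateway or the SDK; `gatewayConfig` is a hypothetical name, and the URL and header names follow the examples above:

```javascript
// Hypothetical helper: build an OpenAI SDK constructor config for a
// given provider routed through the gateway. Extra headers (e.g. the
// AWS credential headers for Bedrock) can be merged in as needed.
function gatewayConfig(provider, apiKey, extraHeaders = {}) {
  return {
    apiKey,
    baseURL: "http://localhost:3000/v1/",
    defaultHeaders: { "x-provider": provider, ...extraHeaders },
  };
}

// Usage with the OpenAI SDK:
// const groq = new OpenAI(gatewayConfig("groq", process.env.GROQ_API_KEY));
// const bedrock = new OpenAI(gatewayConfig("bedrock", process.env.AWS_ACCESS_KEY_ID, {
//   "x-aws-access-key-id": process.env.AWS_ACCESS_KEY_ID,
//   "x-aws-secret-access-key": process.env.AWS_SECRET_ACCESS_KEY,
//   "x-aws-region": process.env.AWS_REGION || "us-east-1",
// }));
```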
https://gate.noveum.ai
curl --location 'https://gate.noveum.ai/v1/chat/completions' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--header 'Content-Type: application/json' \
--header 'x-provider: groq' \
--data '{
"model": "llama-3.1-8b-instant",
"messages": [
{
"role": "user",
"content": "Write a poem"
}
],
"stream": true,
"max_tokens": 300
}'
Note: This deployment is provided for testing and evaluation purposes only. For production workloads, please deploy your own instance of the gateway or contact us for information about production-ready managed solutions.
The gateway can be configured using environment variables:
RUST_LOG=debug # Logging level (debug, info, warn, error)
The gateway leverages the best-in-class Rust ecosystem.
Noveum Developer AI Gateway is designed for maximum performance.
We welcome contributions! Please see our CONTRIBUTING.md for guidelines.
# Install development dependencies
cargo install cargo-watch
# Run tests
cargo test
# Run with hot reload
cargo watch -x run
Connection Refused
Streaming Not Working
Check that the Accept: text/event-stream header is set.
Provider Errors
Special thanks to all contributors and the Rust community.
This project is dual-licensed under both the MIT License and the Apache License (Version 2.0). You may choose either license at your option. See the LICENSE-MIT and LICENSE-APACHE files for details.
docker buildx build --platform linux/amd64 -t noveum/noveum-ai-gateway:latest . --load
docker push noveum/noveum-ai-gateway:latest
docker run -p 3000:3000 \
-e RUST_LOG=info \
noveum/noveum-ai-gateway:latest
docker pull noveum/noveum-ai-gateway:latest
docker run -p 3000:3000 \
-e RUST_LOG=info \
noveum/noveum-ai-gateway:latest
For detailed deployment instructions, please refer to the Deployment Guide.
Create a docker-compose.yml file:
version: '3.8'
services:
gateway:
build: .
platform: linux/amd64
ports:
- "3000:3000"
environment:
- RUST_LOG=info
restart: unless-stopped
Create a docker-compose.yml file:
version: '3.8'
services:
gateway:
image: noveum/noveum-ai-gateway:latest
platform: linux/amd64
ports:
- "3000:3000"
environment:
- RUST_LOG=info
restart: unless-stopped
Then run either option with:
docker-compose up -d
Before creating a release:
- Update the version in Cargo.toml
- Run cargo test
- Run cargo build --release
- Run cargo clippy to check for any linting issues
- Run cargo fmt to ensure consistent formatting

# Create and switch to a release branch
git checkout -b release/v0.1.7
# Stage and commit changes
git add Cargo.toml CHANGELOG.md
git commit -m "chore: release v0.1.7"
# Create a git tag
git tag -a v0.1.7 -m "Release v0.1.7"
# Push changes and tag
git push origin release/v0.1.7
git push origin v0.1.7
# Verify the package contents
cargo package
# Publish to crates.io (requires authentication)
cargo publish
Create a GitHub release (if using GitHub)
Merge the release branch back to main
git checkout main
git merge release/v0.1.7
git push origin main
After publishing, verify that the new version is available on crates.io.
Noveum provides a testing deployment of the AI Gateway, hosted in our London data centre. This deployment is intended for testing and evaluation purposes only, and should not be used for production workloads.
Noveum AI Gateway now supports OpenTelemetry compatible logs for enhanced observability. The Gateway can export detailed request logs with a rich structured format that includes complete request/response details and performance metrics.
You can add custom tracking information to your requests that will be included in the logs:
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: openai" \
-H "Authorization: Bearer your-openai-api-key" \
-H "x-project-id: your-project-id" \
-H "x-organisation-id: your-org-id" \
-H "x-user-id: your-user-id" \
-d '{
"model": "gpt-4",
"messages": [{"role": "user", "content": "Hello!"}]
}'
These headers will be included in the telemetry logs, allowing you to attribute and filter requests by project, organisation, and user.
For more details, see the Elasticsearch Integration Guide and Telemetry Plugins Guide.
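The same tracking headers from the curl example above can be supplied through the OpenAI SDK's defaultHeaders. The helper below is a sketch, not part of the SDK; `withTracking` is a hypothetical name, and only the header names (x-project-id, x-organisation-id, x-user-id) come from the gateway's documentation:

```javascript
// Hypothetical helper: merge the gateway's optional tracking headers
// into an existing defaultHeaders object, skipping any that are unset.
function withTracking(headers, { projectId, organisationId, userId } = {}) {
  return {
    ...headers,
    ...(projectId && { "x-project-id": projectId }),
    ...(organisationId && { "x-organisation-id": organisationId }),
    ...(userId && { "x-user-id": userId }),
  };
}

// Usage with the OpenAI SDK:
// const openai = new OpenAI({
//   apiKey: process.env.OPENAI_API_KEY,
//   baseURL: "http://localhost:3000/v1/",
//   defaultHeaders: withTracking(
//     { "x-provider": "openai" },
//     { projectId: "your-project-id", userId: "your-user-id" }
//   ),
// });
```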
Noveum Gateway includes comprehensive integration tests for all supported providers (OpenAI, Anthropic, GROQ, Fireworks, Together AI, and AWS Bedrock). These tests validate both non-streaming and streaming functionality.
Set up your test environment:
# Copy the sample test environment file
cp tests/.env.test.example .env.test
# Edit the file to add your API keys for the providers you want to test
nano .env.test
Start the gateway with ElasticSearch enabled:
ENABLE_ELASTICSEARCH=true cargo run
Run the integration tests:
# Run all tests
cargo test --test run_integration_tests -- --nocapture
# Run tests for specific providers
cargo test --test run_integration_tests openai -- --nocapture
cargo test --test run_integration_tests anthropic -- --nocapture
cargo test --test run_integration_tests groq -- --nocapture
cargo test --test run_integration_tests fireworks -- --nocapture
cargo test --test run_integration_tests together -- --nocapture
Your .env.test file should include the following variables:
# Gateway URL (default: http://localhost:3000)
GATEWAY_URL=http://localhost:3000
# ElasticSearch Configuration (required for tests)
ELASTICSEARCH_URL=http://localhost:9200
ELASTICSEARCH_USERNAME=elastic
ELASTICSEARCH_PASSWORD=your_elasticsearch_password
ELASTICSEARCH_INDEX=ai-gateway-metrics
# Provider API Keys - Add keys for the providers you want to test
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GROQ_API_KEY=your_groq_api_key
FIREWORKS_API_KEY=your_fireworks_api_key
TOGETHER_API_KEY=your_together_api_key
# AWS Bedrock Credentials
AWS_ACCESS_KEY_ID=your_aws_access_key_id
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
AWS_REGION=us-east-1
For detailed test documentation, please refer to the Integration Tests README.
This package (magicapi-ai-gateway) is deprecated and has been moved to noveum-ai-gateway. Please update your dependencies to use the new package for all future development.
# Install the new package
cargo install noveum-ai-gateway
All future updates, bug fixes, and new features will only be available in the new package.