Crates.io | magicapi-ai-gateway |
lib.rs | magicapi-ai-gateway |
version | 0.2.0 |
source | src |
created_at | 2024-11-07 11:44:53.97391 |
updated_at | 2024-11-20 05:31:50.822826 |
description | A high-performance AI Gateway proxy for routing requests to various AI providers, offering seamless integration and management of multiple AI services |
homepage | https://magicapi.com |
repository | https://github.com/magicapi/ai-gateway |
size | 187,967 |
🚀 The world's fastest AI Gateway proxy, written in Rust and optimized for maximum performance. This high-performance API gateway routes requests to various AI providers (OpenAI, Anthropic, GROQ, Fireworks, Together, AWS Bedrock) with streaming support, making it perfect for developers who need reliable and blazing-fast AI API access.
You can install MagicAPI Gateway using one of these methods:
# One-liner: install Rust, install the gateway, and start it
curl https://sh.rustup.rs -sSf | sh && cargo install magicapi-ai-gateway && magicapi-ai-gateway
# Or, if you already have Rust installed:
cargo install magicapi-ai-gateway
After installation, you can start the gateway by running:
magicapi-ai-gateway
Alternatively, build from source:
git clone https://github.com/magicapi/ai-gateway
cd ai-gateway
cargo build --release
cargo run --release
The server will start on http://127.0.0.1:3000 by default.
You can configure the gateway using environment variables:
# Basic configuration
export RUST_LOG=info
# Start the gateway
magicapi-ai-gateway
# Or with custom port
PORT=8080 magicapi-ai-gateway
To make requests through the gateway, use the /v1/* endpoint and specify the provider with the x-provider header. For example, to call AWS Bedrock:
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: bedrock" \
-H "x-aws-access-key-id: YOUR_ACCESS_KEY" \
-H "x-aws-secret-access-key: YOUR_SECRET_KEY" \
-H "x-aws-region: us-east-1" \
-d '{
"model": "anthropic.claude-3-sonnet-20240229-v1:0",
"messages": [{"role": "user", "content": "Hello!"}]
}'
For OpenAI:
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: openai" \
-H "Authorization: Bearer your-openai-api-key" \
-d '{
"model": "gpt-4",
"messages": [{"role": "user", "content": "Hello!"}]
}'
For GROQ (with streaming):
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: groq" \
-H "Authorization: Bearer your-groq-api-key" \
-d '{
"model": "llama2-70b-4096",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": true,
"max_tokens": 300
}'
For Anthropic (with streaming):
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: anthropic" \
-H "Authorization: Bearer your-anthropic-api-key" \
-d '{
"model": "claude-3-5-sonnet-20241022",
"messages": [{"role": "user", "content": "Write a poem"}],
"stream": true,
"max_tokens": 1024
}'
For Fireworks:
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: fireworks" \
-H "Authorization: Bearer your-fireworks-api-key" \
-d '{
"model": "accounts/fireworks/models/llama-v3p1-8b-instruct",
"messages": [{"role": "user", "content": "Write a poem"}],
"stream": true,
"max_tokens": 300,
"temperature": 0.6,
"top_p": 1,
"top_k": 40
}'
For Together AI:
curl -X POST http://localhost:3000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-provider: together" \
-H "Authorization: Bearer your-together-api-key" \
-d '{
"model": "meta-llama/Llama-2-7b-chat-hf",
"messages": [{"role": "user", "content": "Write a poem"}],
"stream": true,
"max_tokens": 512,
"temperature": 0.7,
"top_p": 0.7,
"top_k": 50,
"repetition_penalty": 1
}'
The MagicAPI AI Gateway is designed to work seamlessly with popular AI SDKs. You can use the official OpenAI SDK to interact with any supported provider by simply configuring the baseURL and adding the appropriate provider header.
import OpenAI from 'openai';
// Configure the SDK to use MagicAPI Gateway
const openai = new OpenAI({
apiKey: process.env.PROVIDER_API_KEY, // Use any provider's API key
baseURL: "http://localhost:3000/v1/", // Point to the gateway
defaultHeaders: {
"x-provider": "groq", // Specify the provider you want to use
},
});
// Make requests as usual
const chatCompletion = await openai.chat.completions.create({
messages: [
{ role: "system", content: "Write a poem" },
{ role: "user", content: "" }
],
model: "llama-3.1-8b-instant",
temperature: 1,
max_tokens: 100,
top_p: 1,
stream: false,
});
You can easily switch between providers by changing the x-provider header and API key:
// For OpenAI
const openaiClient = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
baseURL: "http://localhost:3000/v1/",
defaultHeaders: { "x-provider": "openai" },
});
// For AWS Bedrock
const bedrockClient = new OpenAI({
apiKey: process.env.AWS_ACCESS_KEY_ID, // Use AWS access key
baseURL: "http://localhost:3000/v1/",
defaultHeaders: {
"x-provider": "bedrock",
"x-aws-access-key-id": process.env.AWS_ACCESS_KEY_ID,
"x-aws-secret-access-key": process.env.AWS_SECRET_ACCESS_KEY,
"x-aws-region": process.env.AWS_REGION || "us-east-1"
},
});
// For Anthropic
const anthropicClient = new OpenAI({
apiKey: process.env.ANTHROPIC_API_KEY,
baseURL: "http://localhost:3000/v1/",
defaultHeaders: { "x-provider": "anthropic" },
});
// For GROQ
const groqClient = new OpenAI({
apiKey: process.env.GROQ_API_KEY,
baseURL: "http://localhost:3000/v1/",
defaultHeaders: { "x-provider": "groq" },
});
// For Fireworks
const fireworksClient = new OpenAI({
apiKey: process.env.FIREWORKS_API_KEY,
baseURL: "http://localhost:3000/v1/",
defaultHeaders: { "x-provider": "fireworks" },
});
// For Together AI
const togetherClient = new OpenAI({
apiKey: process.env.TOGETHER_API_KEY,
baseURL: "http://localhost:3000/v1/",
defaultHeaders: { "x-provider": "together" },
});
The gateway automatically handles the necessary transformations to ensure compatibility with each provider's API format while maintaining the familiar OpenAI SDK interface.
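Streaming works through the same interface. As an illustration, here is a minimal sketch using the OpenAI Node SDK against a locally running gateway (the GROQ key and model names are assumptions carried over from the examples above):

import OpenAI from 'openai';

// Point the SDK at the gateway and select a provider, as above
const client = new OpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "http://localhost:3000/v1/",
  defaultHeaders: { "x-provider": "groq" },
});

// With stream: true, the SDK returns an async iterable of chunks
const stream = await client.chat.completions.create({
  model: "llama-3.1-8b-instant",
  messages: [{ role: "user", content: "Write a poem" }],
  stream: true,
  max_tokens: 300,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}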
MagicAPI provides a testing deployment of the AI Gateway, hosted in our London data centre, at https://gateway.magicapi.dev:
curl --location 'https://gateway.magicapi.dev/v1/chat/completions' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--header 'Content-Type: application/json' \
--header 'x-provider: groq' \
--data '{
"model": "llama-3.1-8b-instant",
"messages": [
{
"role": "user",
"content": "Write a poem"
}
],
"stream": true,
"max_tokens": 300
}'
Note: This deployment is provided for testing and evaluation purposes only. For production workloads, please deploy your own instance of the gateway or contact us for information about production-ready managed solutions.
The gateway can be configured using environment variables:
RUST_LOG=debug # Logging level (debug, info, warn, error)
PORT=3000 # Port to listen on (default: 3000)
The gateway leverages best-in-class crates from the Rust ecosystem and is designed for maximum performance.
We welcome contributions! Please see our CONTRIBUTING.md for guidelines.
# Install development dependencies
cargo install cargo-watch
# Run tests
cargo test
# Run with hot reload
cargo watch -x run
Troubleshooting common issues:
- Connection refused: make sure the gateway is running and listening on the expected port (3000 by default).
- Streaming not working: check that the Accept: text/event-stream header is set.
- Provider errors: double-check the x-provider header and the provider API key you are sending.
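If streaming still stalls, a quick way to verify that events are flowing is a raw request with the Accept header set. A minimal sketch (assuming a local gateway and a GROQ key, as in the earlier examples; requires Node 18+ for built-in fetch):

// Probe SSE delivery through the gateway
const res = await fetch("http://localhost:3000/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Accept": "text/event-stream",
    "Authorization": `Bearer ${process.env.GROQ_API_KEY}`,
    "x-provider": "groq",
  },
  body: JSON.stringify({
    model: "llama-3.1-8b-instant",
    messages: [{ role: "user", content: "Hello!" }],
    stream: true,
    max_tokens: 50,
  }),
});
// A working streaming response should report text/event-stream here
console.log(res.status, res.headers.get("content-type"));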
Special thanks to all contributors and the Rust community.
This project is dual-licensed under both the MIT License and the Apache License (Version 2.0). You may choose either license at your option. See the LICENSE-MIT and LICENSE-APACHE files for details.
# Build the image locally
docker buildx build --platform linux/amd64 -t magicapi1/magicapi-ai-gateway:latest . --load
# Push it to Docker Hub
docker push magicapi1/magicapi-ai-gateway:latest
# Run the gateway
docker run -p 3000:3000 \
  -e RUST_LOG=info \
  magicapi1/magicapi-ai-gateway:latest
# Or pull the prebuilt image and run it
docker pull magicapi1/magicapi-ai-gateway:latest
docker run -p 3000:3000 \
  -e RUST_LOG=info \
  magicapi1/magicapi-ai-gateway:latest
For detailed deployment instructions, please refer to the Deployment Guide.
Option 1 — build from source. Create a docker-compose.yml file:
version: '3.8'
services:
  gateway:
    build: .
    platform: linux/amd64
    ports:
      - "3000:3000"
    environment:
      - RUST_LOG=info
    restart: unless-stopped
Option 2 — use the prebuilt image. Create a docker-compose.yml file:
version: '3.8'
services:
  gateway:
    image: magicapi1/magicapi-ai-gateway:latest
    platform: linux/amd64
    ports:
      - "3000:3000"
    environment:
      - RUST_LOG=info
    restart: unless-stopped
Then run either option with:
docker-compose up -d
Before publishing a new release:
- Update the version in Cargo.toml
- Run cargo test
- Run cargo build --release
- Run cargo clippy to check for any linting issues
- Run cargo fmt to ensure consistent formatting

# Create and switch to a release branch
git checkout -b release/v0.1.7
# Stage and commit changes
git add Cargo.toml CHANGELOG.md
git commit -m "chore: release v0.1.7"
# Create a git tag
git tag -a v0.1.7 -m "Release v0.1.7"
# Push changes and tag
git push origin release/v0.1.7
git push origin v0.1.7
# Verify the package contents
cargo package
# Publish to crates.io (requires authentication)
cargo publish
Create a GitHub release (if using GitHub)
Merge the release branch back to main
git checkout main
git merge release/v0.1.7
git push origin main
After publishing, verify that the new version appears on crates.io and that it installs cleanly with cargo install magicapi-ai-gateway.