| Crates.io | maple-proxy |
| lib.rs | maple-proxy |
| version | 0.1.2 |
| created_at | 2025-08-29 16:16:25.483124+00 |
| updated_at | 2025-09-12 18:30:56.924062+00 |
| description | Lightweight OpenAI-compatible proxy server for Maple/OpenSecret TEE infrastructure |
| homepage | https://github.com/OpenSecret/maple-proxy |
| repository | https://github.com/OpenSecret/maple-proxy |
| max_upload_size | |
| id | 1816099 |
| size | 150,228 |
A lightweight OpenAI-compatible proxy server for Maple/OpenSecret's TEE infrastructure. Works with any OpenAI client library while providing the security and privacy benefits of Trusted Execution Environment (TEE) processing.
git clone <repository>
cd maple-proxy
cargo build --release
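Once built, the release binary can be started directly; the binary name below is assumed to match the crate name, and the flags mirror the CLI options shown later:
./target/release/maple-proxy --host 0.0.0.0 --port 8080 --backend-url https://enclave.trymaple.ai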
Or, to use it as a library, add to your Cargo.toml:
[dependencies]
maple-proxy = { git = "https://github.com/opensecretcloud/maple-proxy" }
# Or from crates.io:
# maple-proxy = "0.1.2"
Set environment variables or use command-line arguments:
# Environment Variables
export MAPLE_HOST=127.0.0.1 # Server host (default: 127.0.0.1)
export MAPLE_PORT=3000 # Server port (default: 3000)
export MAPLE_BACKEND_URL=http://localhost:3000 # Maple backend URL (prod: https://enclave.trymaple.ai)
export MAPLE_API_KEY=your-maple-api-key # Default API key (optional)
export MAPLE_DEBUG=true # Enable debug logging
export MAPLE_ENABLE_CORS=true # Enable CORS
Or use CLI arguments:
cargo run -- --host 0.0.0.0 --port 8080 --backend-url https://enclave.trymaple.ai
Then start the server:
cargo run
You should see:
Maple Proxy Server started successfully!
Available endpoints:
GET /health - Health check
GET /v1/models - List available models
POST /v1/chat/completions - Create chat completions (streaming)
curl http://localhost:8080/v1/models \
-H "Authorization: Bearer YOUR_MAPLE_API_KEY"
curl -N http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer YOUR_MAPLE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "llama3-3-70b",
"messages": [
{"role": "user", "content": "Write a haiku about technology"}
],
"stream": true
}'
Note: Maple currently only supports streaming responses.
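Because the proxy speaks the OpenAI streaming protocol, chunks arrive as server-sent events. A rough sketch of the wire format (field values are illustrative):
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hi"},"finish_reason":null}]}

data: [DONE]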
You can also embed Maple Proxy in your own Rust application:
use maple_proxy::{Config, create_app};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize tracing
    tracing_subscriber::fmt::init();

    // Create config programmatically
    let config = Config::new(
        "127.0.0.1".to_string(),
        8081, // Custom port
        "https://enclave.trymaple.ai".to_string(),
    )
    .with_api_key("your-api-key-here".to_string())
    .with_debug(true)
    .with_cors(true);

    // Create the app
    let app = create_app(config.clone());

    // Start the server
    let addr = config.socket_addr()?;
    let listener = TcpListener::bind(addr).await?;
    println!("Maple proxy server running on http://{}", addr);
    axum::serve(listener, app).await?;

    Ok(())
}
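For the embedding example to compile, the host crate also needs tokio, axum, and tracing-subscriber alongside maple-proxy; a dependency block along these lines should work (version numbers are illustrative):
[dependencies]
maple-proxy = { git = "https://github.com/opensecretcloud/maple-proxy" }
tokio = { version = "1", features = ["full"] }
axum = "0.7"
tracing-subscriber = "0.3"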
Run the example:
cargo run --example library_usage
import openai

client = openai.OpenAI(
    api_key="YOUR_MAPLE_API_KEY",
    base_url="http://localhost:8080/v1"
)

# Streaming chat completion
stream = client.chat.completions.create(
    model="llama3-3-70b",
    messages=[{"role": "user", "content": "Hello, world!"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_MAPLE_API_KEY',
  baseURL: 'http://localhost:8080/v1',
});

const stream = await openai.chat.completions.create({
  model: 'llama3-3-70b',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
# Health check
curl http://localhost:8080/health
# List models
curl http://localhost:8080/v1/models \
-H "Authorization: Bearer YOUR_MAPLE_API_KEY"
# Streaming chat completion
curl -N http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer YOUR_MAPLE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "llama3-3-70b",
"messages": [{"role": "user", "content": "Tell me a joke"}],
"stream": true
}'
Maple Proxy supports two authentication methods:
Default key: set MAPLE_API_KEY and all requests will use this key by default:
export MAPLE_API_KEY=your-maple-api-key
cargo run
Per-request key: override the default key (or supply one if none is configured) via the Authorization header:
curl -H "Authorization: Bearer different-api-key" ...
Enable CORS for web applications:
export MAPLE_ENABLE_CORS=true
cargo run
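With CORS enabled, a browser-based app can call the proxy directly. A minimal sketch (the URL and key are placeholders, and streaming is omitted for brevity):
// Simple browser-side request against the local proxy
const res = await fetch('http://localhost:8080/v1/models', {
  headers: { 'Authorization': 'Bearer YOUR_MAPLE_API_KEY' },
});
console.log(await res.json());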
Pull and run the official image from GitHub Container Registry:
# Pull the latest image
docker pull ghcr.io/opensecretcloud/maple-proxy:latest
# Run with your API key
docker run -p 8080:8080 \
-e MAPLE_BACKEND_URL=https://enclave.trymaple.ai \
ghcr.io/opensecretcloud/maple-proxy:latest
# Build the image locally
just docker-build
# Run the container
just docker-run
# In your docker-compose.yml, use:
image: ghcr.io/opensecretcloud/maple-proxy:latest
Or build the image yourself:
docker build -t maple-proxy:latest .
# Copy the example environment file
cp .env.example .env
# Edit .env with your configuration
vim .env
# Start the service
docker-compose up -d
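If you are writing the compose file from scratch, a minimal service definition along these lines covers the common case (port mapping and settings are illustrative):
services:
  maple-proxy:
    image: ghcr.io/opensecretcloud/maple-proxy:latest
    ports:
      - "8080:8080"
    environment:
      - MAPLE_BACKEND_URL=https://enclave.trymaple.ai
      - MAPLE_ENABLE_CORS=true
      - RUST_LOG=info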
When deploying Maple Proxy on a public network:
Do not set MAPLE_API_KEY in the container environment; instead, have each client supply its own key:
# Client-side authentication for public proxy
from openai import OpenAI

client = OpenAI(
    base_url="https://your-proxy.example.com/v1",
    api_key="user-specific-maple-api-key"  # Each user provides their own key
)
This ensures each user authenticates with their own Maple API key and the proxy itself never holds a shared credential.
# Build image
just docker-build
# Run interactively
just docker-run
# Run in background
just docker-run-detached
# View logs
just docker-logs
# Stop container
just docker-stop
# Use docker-compose
just compose-up
just compose-logs
just compose-down
Configure the Docker image through environment variables:
# docker-compose.yml environment section
environment:
- MAPLE_BACKEND_URL=https://enclave.trymaple.ai # Production backend
- MAPLE_ENABLE_CORS=true # Enable for web apps
- RUST_LOG=info # Logging level
# - MAPLE_API_KEY=xxx # Only for private deployments!
Automated Builds (GitHub Actions)
Pushes to master automatically build and publish to ghcr.io/opensecretcloud/maple-proxy:latest; tags (e.g. v1.0.0) trigger versioned releases.
Local Development (Justfile)
# For local testing and debugging
just docker-build # Build locally
just docker-run # Test locally
just ghcr-push v1.2.3 # Manual push (requires login)
Use GitHub Actions for production releases, Justfile for local development.
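Cutting a versioned release is then typically just a matter of pushing a tag (the tag name below is illustrative):
git tag v1.0.0
git push origin v1.0.0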
cargo build
export MAPLE_DEBUG=true
cargo run
cargo test
Maple Proxy supports all models available in the Maple/OpenSecret platform, including:
llama3-3-70b - Llama 3.3 70B parameter model
Query the /v1/models endpoint for the current list.
Troubleshooting
"No API key provided"
Set the MAPLE_API_KEY environment variable or provide an Authorization: Bearer <key> header.
"Failed to establish secure connection"
Check that MAPLE_BACKEND_URL is correct.
Connection refused
Make sure the proxy server is running and that your client's base URL points to the right host and port.
Enable debug logging for detailed information:
export MAPLE_DEBUG=true
cargo run
┌──────────────────┐      ┌──────────────────┐      ┌──────────────────┐
│  OpenAI Client   │─────▶│   Maple Proxy    │─────▶│  Maple Backend   │
│   (Python/JS)    │      │   (localhost)    │      │      (TEE)       │
└──────────────────┘      └──────────────────┘      └──────────────────┘
MIT License - see LICENSE file for details.
Contributions welcome! Please feel free to submit a Pull Request.