| Crates.io | llm-optimizer-cli |
| lib.rs | llm-optimizer-cli |
| version | 0.1.1 |
| created_at | 2025-11-11 02:55:12.066655+00 |
| updated_at | 2025-11-11 02:55:12.066655+00 |
| description | Beautiful CLI tool with 40+ commands |
| homepage | https://github.com/globalbusinessadvisors/llm-auto-optimizer |
| repository | https://github.com/globalbusinessadvisors/llm-auto-optimizer |
| size | 229,165 |
A production-grade command-line interface for managing the LLM Auto Optimizer.
# Install from a local checkout of the repository
cargo install --path crates/cli
# Or install from crates.io
cargo install llm-optimizer-cli
llm-optimizer init --api-url http://localhost:8080
This creates a configuration file at ~/.config/llm-optimizer/config.yaml.
llm-optimizer doctor
llm-optimizer optimize create \
--services my-llm-service \
--strategy cost-performance-scoring
llm-optimizer metrics performance
The CLI looks for configuration in the following locations:
- ~/.config/llm-optimizer/config.yaml (Linux/macOS)
- %APPDATA%\llm-optimizer\config.yaml (Windows)
- A path passed via the --config flag

Example configuration:
api_url: http://localhost:8080
grpc_endpoint: http://localhost:50051
api_key: your-api-key-here
timeout: 30
output_format: table
verbose: false
Environment variables:
- LLM_OPTIMIZER_API_URL: API base URL
- LLM_OPTIMIZER_API_KEY: API authentication key
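For example, to point the CLI at a different deployment for the current shell session (the URL below is illustrative):

# Override the configured endpoint via environment variables
export LLM_OPTIMIZER_API_URL=http://staging.example.com:8080
export LLM_OPTIMIZER_API_KEY=your-api-key
llm-optimizer admin health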
Global flags available for all commands:
- --api-url <URL>: Override API URL
- --api-key <KEY>: Override API key
- --output <FORMAT>: Set output format (table, json, yaml, csv)
- --verbose: Enable verbose logging
- --config <FILE>: Specify configuration file
- --timeout <SECONDS>: Request timeout
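These flags can be combined on any invocation, for example (file name and values illustrative):

# Query a non-default instance with JSON output and a longer timeout
llm-optimizer --config ./staging.yaml --output json --timeout 60 optimize list

Manage the LLM Auto Optimizer service.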
llm-optimizer service start
llm-optimizer service stop
llm-optimizer service restart
llm-optimizer service status
# Show last 100 lines
llm-optimizer service logs
# Show last N lines
llm-optimizer service logs -n 50
# Follow logs (stream)
llm-optimizer service logs --follow
Create and manage LLM optimizations.
# Basic creation
llm-optimizer optimize create \
--services service1,service2 \
--strategy cost-performance-scoring
# With dry run
llm-optimizer optimize create \
--services my-service \
--strategy aggressive-cost-reduction \
--dry-run
# Interactive mode
llm-optimizer optimize create --interactive
Available strategies:
- cost-performance-scoring: Balanced cost and performance
- quality-preserving: Minimize cost while maintaining quality
- aggressive-cost-reduction: Maximum cost reduction
- balanced: Default balanced approach

# List all
llm-optimizer optimize list
# Filter by status
llm-optimizer optimize list --status deployed
# Filter by strategy
llm-optimizer optimize list --strategy cost-performance-scoring
# Filter by service
llm-optimizer optimize list --service my-service
# Date range
llm-optimizer optimize list --from 2024-01-01 --to 2024-01-31
# JSON output
llm-optimizer optimize list --output json
llm-optimizer optimize get <optimization-id>
# Standard deployment
llm-optimizer optimize deploy <optimization-id>
# Gradual rollout (10%)
llm-optimizer optimize deploy <optimization-id> --gradual --percentage 10
# Skip confirmation
llm-optimizer optimize deploy <optimization-id> --yes
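For larger rollouts, the gradual deploy can be wrapped in a simple ramp-up script. The sketch below assumes deploy can be re-issued with a higher --percentage and uses admin health as the gate; both the step sizes and the re-issue behavior are assumptions, not documented guarantees:

# Hypothetical staged rollout: raise traffic in steps, abort on a failed health check
for PCT in 10 25 50 100; do
  llm-optimizer optimize deploy <optimization-id> --gradual --percentage $PCT --yes
  sleep 300  # let metrics settle before widening the rollout
  llm-optimizer admin health || { echo "Health check failed at ${PCT}%"; exit 1; }
done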
# With interactive reason prompt
llm-optimizer optimize rollback <optimization-id>
# With reason provided
llm-optimizer optimize rollback <optimization-id> \
--reason "Performance regression detected"
# Skip confirmation
llm-optimizer optimize rollback <optimization-id> --yes
llm-optimizer optimize cancel <optimization-id>
Manage system configuration.
llm-optimizer config get max_optimization_requests
llm-optimizer config set max_optimization_requests '{"value": 100}'
llm-optimizer config list
llm-optimizer config validate
# Print to stdout
llm-optimizer config export
# Save to file
llm-optimizer config export --output config-backup.yaml
llm-optimizer config import config.yaml
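A safe pattern is to snapshot the live configuration before importing a new one, then validate the result; the ordering here is a suggestion, not a required workflow:

# Back up the current configuration, apply the new one, then validate
llm-optimizer config export --output config-backup-$(date +%Y%m%d).yaml
llm-optimizer config import new-config.yaml
llm-optimizer config validate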
Query metrics and view analytics.
llm-optimizer metrics query \
--metrics latency,cost,quality \
--from 2024-01-01 \
--to 2024-01-31 \
--aggregation avg
# All services
llm-optimizer metrics performance
# Specific service
llm-optimizer metrics performance --service my-service
# Date range
llm-optimizer metrics performance --from 2024-01-01 --to 2024-01-31
Output includes:
llm-optimizer metrics cost
# With filters
llm-optimizer metrics cost --service my-service --from 2024-01-01
Output includes:
llm-optimizer metrics quality
# With filters
llm-optimizer metrics quality --service my-service
Output includes:
# Export as CSV
llm-optimizer metrics export --format csv --output metrics.csv
# Export as JSON
llm-optimizer metrics export --format json --output metrics.json
# Date range
llm-optimizer metrics export --format csv --from 2024-01-01 --to 2024-01-31
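For recurring exports, the same command can be scheduled from cron; a hypothetical crontab entry for a nightly export (percent signs must be escaped as \% inside crontab):

# m h dom mon dow  command
0 2 * * * llm-optimizer metrics export --format csv --from $(date -d 'yesterday' +\%Y-\%m-\%d) --to $(date +\%Y-\%m-\%d) --output /var/log/llm-metrics.csv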
Manage external integrations.
llm-optimizer integration add \
--integration-type prometheus \
--name "Production Prometheus" \
--config '{"url": "http://prometheus:9090", "scrape_interval": "15s"}'
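Because --config takes inline JSON, shell quoting is easy to get wrong; one way to build the value safely is jq -n. The keys below simply mirror the Prometheus example above:

# Compose the integration config with jq to avoid quoting mistakes
CONFIG=$(jq -nc --arg url "http://prometheus:9090" --arg interval "15s" \
  '{url: $url, scrape_interval: $interval}')
llm-optimizer integration add \
  --integration-type prometheus \
  --name "Production Prometheus" \
  --config "$CONFIG"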
Supported integration types:
- prometheus: Prometheus monitoring
- datadog: Datadog monitoring
- grafana: Grafana dashboards
- slack: Slack notifications
- pagerduty: PagerDuty alerting
- webhook: Custom webhooks

llm-optimizer integration list
llm-optimizer integration test <integration-id>
# With confirmation
llm-optimizer integration remove <integration-id>
# Skip confirmation
llm-optimizer integration remove <integration-id> --yes
Administrative operations and system management.
llm-optimizer admin stats
Shows:
# With confirmation
llm-optimizer admin cache
# Skip confirmation
llm-optimizer admin cache --yes
llm-optimizer admin health
Shows health status of all system components.
llm-optimizer admin version
Shows:
Utility and helper commands.
# Basic initialization
llm-optimizer init
# With API URL
llm-optimizer init --api-url http://production.example.com:8080
# With API key
llm-optimizer init --api-key your-api-key
# Force overwrite
llm-optimizer init --force
# Bash
llm-optimizer completions bash > ~/.local/share/bash-completion/completions/llm-optimizer
# Zsh
llm-optimizer completions zsh > ~/.zfunc/_llm-optimizer
# Fish
llm-optimizer completions fish > ~/.config/fish/completions/llm-optimizer.fish
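Because the completions subcommand writes the script to stdout (which the redirects above rely on), it can also be loaded into the current bash session without installing a file:

# Enable completions for this shell session only
source <(llm-optimizer completions bash)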
llm-optimizer doctor
Checks:
llm-optimizer interactive
Launches an interactive menu-driven interface.
The CLI supports multiple output formats:
llm-optimizer optimize list
Beautiful ASCII tables with colored output.
llm-optimizer optimize list --output json
Pretty-printed JSON for programmatic processing.
llm-optimizer optimize list --output yaml
YAML format for configuration files.
llm-optimizer metrics export --format csv
CSV format for spreadsheet applications.
# Create optimization
OPT_ID=$(llm-optimizer optimize create \
--services production-gpt4 \
--strategy cost-performance-scoring \
--output json | jq -r '.id')
# Wait for analysis
sleep 5
# Review details
llm-optimizer optimize get $OPT_ID
# Deploy with gradual rollout
llm-optimizer optimize deploy $OPT_ID --gradual --percentage 10
# Monitor performance
llm-optimizer metrics performance --service production-gpt4
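The fixed sleep above is only a placeholder. A more robust variant polls the optimization until analysis completes; the status field and its "analyzing" value are assumptions about the JSON output, not documented fields:

# Poll until the optimization leaves the analyzing state (field name hypothetical)
while [ "$(llm-optimizer optimize get $OPT_ID --output json | jq -r '.status')" = "analyzing" ]; do
  sleep 5
done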
# Export last month's metrics (date -d is GNU date syntax)
llm-optimizer metrics export \
--format csv \
--from $(date -d '1 month ago' +%Y-%m-%d) \
--to $(date +%Y-%m-%d) \
--output monthly-metrics.csv
# Get cost breakdown
llm-optimizer metrics cost --output json > cost-analysis.json
# View summary
llm-optimizer admin stats
# Add Prometheus integration
llm-optimizer integration add \
--integration-type prometheus \
--name "Production Metrics" \
--config '{"url": "http://prometheus:9090"}'
# Test connection
llm-optimizer integration test <integration-id>
# Add Slack notifications
llm-optimizer integration add \
--integration-type slack \
--name "Ops Channel" \
--config '{"webhook_url": "https://hooks.slack.com/..."}'
#!/bin/bash
# monitoring-check.sh
# Check system health
llm-optimizer admin health
# Get performance metrics
llm-optimizer metrics performance | grep "Error Rate"
# List active optimizations
ACTIVE=$(llm-optimizer optimize list --status deployed --output json | jq 'length')
echo "Active optimizations: $ACTIVE"
# Check for issues
if [ "$ACTIVE" -eq 0 ]; then
echo "Warning: No active optimizations"
fi
# Get all optimizations with more than 10% cost reduction
llm-optimizer optimize list --output json | \
jq '.[] | select(.actual_impact.cost_reduction_pct > 10)'
# Find underperforming optimizations
llm-optimizer optimize list --output json | \
jq '.[] | select(.actual_impact.quality_delta_pct < -5)'
# .gitlab-ci.yml
optimize:
script:
- llm-optimizer optimize create --services $SERVICE_NAME --dry-run
- llm-optimizer admin health || exit 1
- llm-optimizer metrics cost --output json > cost-report.json
artifacts:
reports:
metrics: cost-report.json
#!/bin/bash
# alert-on-errors.sh
ERROR_RATE=$(llm-optimizer metrics performance --output json | \
jq -r '.error_rate')
if (( $(echo "$ERROR_RATE > 0.05" | bc -l) )); then
llm-optimizer optimize rollback $OPTIMIZATION_ID \
--reason "Error rate exceeded 5%" \
--yes
fi
# Check API connectivity
curl http://localhost:8080/health
# Verify configuration
llm-optimizer doctor
# Test with verbose logging
llm-optimizer --verbose admin health
# Set API key
export LLM_OPTIMIZER_API_KEY=your-key
# Or update config
llm-optimizer config set api_key '"your-key"'
# Increase timeout
llm-optimizer --timeout 60 optimize list
# Check system resources
llm-optimizer admin stats
# Build debug version
cargo build
# Build release version
cargo build --release
# Run tests
cargo test
# Install locally
cargo install --path .
# Unit tests
cargo test --lib
# Integration tests
cargo test --test '*'
# With output
cargo test -- --nocapture
Licensed under the Apache License, Version 2.0. See LICENSE file for details.
See CONTRIBUTING.md for guidelines.