| Crates.io | logai |
| lib.rs | logai |
| version | 0.1.1 |
| created_at | 2025-11-19 14:57:33.987375+00 |
| updated_at | 2025-11-19 17:48:24.880967+00 |
| description | AI-powered log analyzer with MCP integration - Groups errors, suggests fixes, and connects external tools |
| homepage | https://github.com/ranjan-mohanty/logai |
| repository | https://github.com/ranjan-mohanty/logai |
| max_upload_size | |
| id | 1940211 |
| size | 594,933 |
AI-powered log analysis - Parse, group, and understand your logs.
LogAI is a CLI tool that analyzes your application logs, groups similar errors, and uses AI to explain what went wrong and how to fix it. Stop manually searching through massive log files and let LogAI do the detective work.
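For example, repeated failures that differ only in dynamic details are collapsed into a single pattern, with the dynamic parts replaced by a <DYNAMIC> placeholder (as in the sample report below; the second hostname here is illustrative):
Timeout waiting for response from api.example.com
Timeout waiting for response from db.example.com
→ grouped as: Timeout waiting for response from <DYNAMIC> (2 occurrences)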
✅ Multiple log formats - JSON, plain text, Apache, Nginx, Syslog
✅ Auto-detect log format - Automatically identifies format
✅ Group similar errors intelligently - Pattern-based grouping
✅ Deduplicate repeated errors
✅ Beautiful terminal output
✅ Track error frequency and timing
✅ AI-powered error explanations (OpenAI, Claude, Gemini, Ollama, AWS Bedrock)
✅ Parallel AI analysis - Process multiple errors concurrently (5x faster)
✅ Automatic retry - Exponential backoff for transient failures
✅ Solution suggestions with code examples
✅ Response caching to reduce API costs
✅ Configuration file - Customize analysis behavior
✅ MCP (Model Context Protocol) integration - Connect external tools and data sources
🚧 Built-in MCP tools (search_docs, check_metrics, search_code)
🚧 Watch mode for real-time analysis
🚧 HTML reports
🚧 Additional log formats (Docker, Kubernetes, custom formats)
Install script:
curl -sSL https://raw.githubusercontent.com/ranjan-mohanty/logai/main/scripts/install.sh | bash
Homebrew:
brew install https://raw.githubusercontent.com/ranjan-mohanty/logai/main/scripts/homebrew/logai.rb
Cargo:
cargo install logai
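cargo install builds LogAI from source, so it needs a Rust toolchain; if you don't have one, the standard rustup installer provides cargo:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh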
Download from GitHub Releases:
- logai-linux-x86_64.tar.gz (Ubuntu 22.04+, RHEL 9+, AL2023)
- logai-linux-x86_64-musl.tar.gz (Amazon Linux 2, Ubuntu 20.04+, CentOS 7+, any Linux)
Amazon Linux 2:
wget https://github.com/ranjan-mohanty/logai/releases/latest/download/logai-linux-x86_64-musl.tar.gz
tar -xzf logai-linux-x86_64-musl.tar.gz
sudo mv logai /usr/local/bin/
From source:
git clone https://github.com/ranjan-mohanty/logai.git
cd logai
cargo install --path .
Analyze a log file:
logai investigate app.log
Analyze multiple files:
logai investigate app.log error.log
Pipe logs from stdin:
tail -f app.log | logai investigate -
cat error.log | logai investigate -
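Any command that writes logs to stdout works the same way; for example, following a systemd service with journalctl (the unit name here is illustrative):
journalctl -u myapp.service -f | logai investigate -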
Limit output:
logai investigate app.log --limit 10
JSON output:
logai investigate app.log --format json
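JSON output is handy for scripting; for example, pretty-printing it with jq (the exact field names depend on LogAI's output schema):
logai investigate app.log --format json | jq '.'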
Interactive HTML report:
logai investigate app.log --format html > report.html
# With AI analysis
logai investigate app.log --ai bedrock --format html > report.html
Enable verbose/debug logging:
logai --verbose investigate app.log
# or
logai -v investigate app.log --ai bedrock
Analyze with OpenAI:
export OPENAI_API_KEY=sk-...
logai investigate app.log --ai openai
logai investigate app.log --ai openai --model gpt-4
Analyze with Claude:
export ANTHROPIC_API_KEY=sk-ant-...
logai investigate app.log --ai claude
logai investigate app.log --ai claude --model claude-3-5-sonnet-20241022
Analyze with Gemini:
export GEMINI_API_KEY=...
logai investigate app.log --ai gemini
logai investigate app.log --ai gemini --model gemini-1.5-pro
Analyze with Ollama (local, free):
# Make sure Ollama is running: ollama serve
logai investigate app.log --ai ollama
logai investigate app.log --ai ollama --model llama3.2
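If the model isn't available locally yet, pull it first:
ollama pull llama3.2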
Analyze with AWS Bedrock:
# With region flag (recommended)
logai investigate app.log --ai bedrock --region us-east-1
# With specific model
logai investigate app.log --ai bedrock --region us-east-1 --model anthropic.claude-3-haiku-20240307-v1:0
# Or set region via environment variable
export AWS_REGION=us-east-1
logai investigate app.log --ai bedrock
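Bedrock authenticates through your AWS credentials; assuming the standard AWS credential chain, static keys can also be supplied via environment variables:
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
logai investigate app.log --ai bedrock --region us-east-1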
Disable caching (force fresh analysis):
logai investigate app.log --ai openai --no-cache
LogAI processes error groups in parallel for faster analysis. Control concurrency:
# Default: 5 concurrent requests
logai investigate app.log --ai ollama
# High concurrency (faster, more resources)
logai investigate app.log --ai ollama --concurrency 15
# Low concurrency (slower, fewer resources)
logai investigate app.log --ai ollama --concurrency 2
# Sequential processing
logai investigate app.log --ai ollama --concurrency 1
Performance comparison (100 error groups): at the default concurrency of 5, parallel analysis runs roughly 5x faster than sequential processing (--concurrency 1).
Create ~/.logai/config.toml to set defaults:
# AI Settings
[ai]
provider = "ollama" # Default AI provider
# Analysis settings
[analysis]
max_concurrency = 5 # Concurrent AI requests (1-20)
enable_retry = true # Retry failed requests
max_retries = 3 # Maximum retry attempts
initial_backoff_ms = 1000 # Initial retry delay
max_backoff_ms = 30000 # Maximum retry delay
enable_cache = true # Cache AI responses
truncate_length = 2000 # Max message length
# Provider configurations
[providers.ollama]
enabled = true
model = "llama3.2"
host = "http://localhost:11434"
[providers.openai]
enabled = false
# api_key = "sk-..." # Or use OPENAI_API_KEY env var
# model = "gpt-4"
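Assuming standard exponential doubling, the retry defaults above wait roughly 1 s, 2 s, and 4 s before the three attempts, with each delay capped at max_backoff_ms (30 s).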
Configuration examples:
High-performance (self-hosted Ollama):
[analysis]
max_concurrency = 15
max_retries = 2
initial_backoff_ms = 500
Conservative (API rate limits):
[analysis]
max_concurrency = 2
max_retries = 5
initial_backoff_ms = 2000
max_backoff_ms = 60000
Fast-fail (development):
[analysis]
max_concurrency = 10
enable_retry = false
LogAI supports the Model Context Protocol (MCP) for connecting external tools and data sources during analysis.
Create ~/.logai/mcp.toml:
default_timeout = 30
[[servers]]
name = "filesystem"
enabled = true
[servers.connection]
type = "Stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
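Additional servers can be declared by repeating the [[servers]] block; for example, adding the reference memory server (assuming the same Stdio schema; @modelcontextprotocol/server-memory is the upstream reference implementation):
[[servers]]
name = "memory"
enabled = true
[servers.connection]
type = "Stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-memory"]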
Use with MCP tools:
logai investigate app.log --ai ollama --mcp-config ~/.logai/mcp.toml
Disable MCP:
logai investigate app.log --ai ollama --no-mcp
See MCP Integration Guide for more details.
🤖 LogAI Analysis Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Summary
Errors found: 3 unique patterns (9 occurrences)
Time range: 2025-11-17 10:30:00 - 2025-11-17 10:35:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔴 Critical: Connection failed to database (3 occurrences)
First seen: 5 minutes ago | Last seen: 4 minutes ago
📋 Example:
Connection failed to database
📍 Location: db.rs:42
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔴 Critical: Timeout waiting for response from <DYNAMIC> (3 occurrences)
First seen: 1 minute ago | Last seen: 30 seconds ago
📋 Example:
Timeout waiting for response from api.example.com
JSON log fields: level, message, timestamp
Build:
cargo build
Run tests:
cargo test
Run with sample logs:
cargo run -- investigate tests/fixtures/sample.log
| Provider | Models | Cost | Speed | Setup |
|---|---|---|---|---|
| OpenAI | GPT-4, GPT-4o-mini | Paid | Fast | API key required |
| Claude | Claude 3.5 Sonnet/Haiku | Paid | Fast | API key required |
| Gemini | Gemini 1.5 Flash/Pro | Paid | Fast | API key required |
| Bedrock | Claude, Llama, Titan | Paid | Fast | AWS credentials |
| Ollama | Llama 3.2, Mistral, etc. | Free | Medium | Local install |
Contributions are welcome! Please read our Contributing Guide and Code of Conduct.
See GitHub Issues for planned features and known issues.
MIT License - see LICENSE file
Built with ❤️ by Ranjan Mohanty
If you find LogAI useful, please consider giving it a star ⭐