logai 0.1.1

AI-powered log analyzer with MCP integration - groups errors, suggests fixes, and connects external tools.

Repository: https://github.com/ranjan-mohanty/logai
Documentation: https://github.com/ranjan-mohanty/logai/blob/main/docs/USAGE.md
Author: Ranjan Mohanty (ranjan-mohanty)

README

🤖 LogAI

AI-powered log analysis - Parse, group, and understand your logs with AI.

LogAI analyzes your application logs, groups similar errors, and uses AI to explain what went wrong and how to fix it.

What is LogAI?

LogAI is a CLI tool that analyzes application logs, groups similar errors, and provides intelligent suggestions for fixing issues. Stop manually searching through massive log files and let LogAI do the detective work.

Features

✅ Multiple log formats - JSON, plain text, Apache, Nginx, Syslog
✅ Auto-detect log format - Automatically identifies format
✅ Group similar errors intelligently - Pattern-based grouping
✅ Deduplicate repeated errors
✅ Beautiful terminal output
✅ Track error frequency and timing
✅ AI-powered error explanations (OpenAI, Claude, Gemini, Ollama, AWS Bedrock)
✅ Parallel AI analysis - Process multiple errors concurrently (5x faster)
✅ Automatic retry - Exponential backoff for transient failures
✅ Solution suggestions with code examples
✅ Response caching to reduce API costs
✅ Configuration file - Customize analysis behavior
✅ MCP (Model Context Protocol) integration - Connect external tools and data sources

Coming Soon

🚧 Built-in MCP tools (search_docs, check_metrics, search_code)
🚧 Watch mode for real-time analysis
🚧 Additional log formats (Docker, Kubernetes, custom formats)

Quick Start

Installation

Quick Install (macOS/Linux)

curl -sSL https://raw.githubusercontent.com/ranjan-mohanty/logai/main/scripts/install.sh | bash

Homebrew (macOS/Linux)

# download the formula file, then install from the local path
curl -fsSLO https://raw.githubusercontent.com/ranjan-mohanty/logai/main/scripts/homebrew/logai.rb
brew install --formula ./logai.rb

Cargo (All platforms)

cargo install logai

Pre-built Binaries

Download from GitHub Releases:

  • macOS (Intel & Apple Silicon)
  • Linux (x86_64 & ARM64)
    • Standard: logai-linux-x86_64.tar.gz (Ubuntu 22.04+, RHEL 9+, AL2023)
    • Musl: logai-linux-x86_64-musl.tar.gz (Amazon Linux 2, Ubuntu 20.04+, CentOS 7+, any Linux)
  • Windows (x86_64)

Amazon Linux 2:

wget https://github.com/ranjan-mohanty/logai/releases/latest/download/logai-linux-x86_64-musl.tar.gz
tar -xzf logai-linux-x86_64-musl.tar.gz
sudo mv logai /usr/local/bin/

From Source

git clone https://github.com/ranjan-mohanty/logai.git
cd logai
cargo install --path .

Usage

Analyze a log file:

logai investigate app.log

Analyze multiple files:

logai investigate app.log error.log

Pipe logs from stdin:

tail -f app.log | logai investigate -
cat error.log | logai investigate -

Limit output:

logai investigate app.log --limit 10

JSON output:

logai investigate app.log --format json

Interactive HTML report:

logai investigate app.log --format html > report.html
# With AI analysis
logai investigate app.log --ai bedrock --format html > report.html

Enable verbose/debug logging:

logai --verbose investigate app.log
# or
logai -v investigate app.log --ai bedrock

AI-Powered Analysis

Analyze with OpenAI:

export OPENAI_API_KEY=sk-...
logai investigate app.log --ai openai
logai investigate app.log --ai openai --model gpt-4

Analyze with Claude:

export ANTHROPIC_API_KEY=sk-ant-...
logai investigate app.log --ai claude
logai investigate app.log --ai claude --model claude-3-5-sonnet-20241022

Analyze with Gemini:

export GEMINI_API_KEY=...
logai investigate app.log --ai gemini
logai investigate app.log --ai gemini --model gemini-1.5-pro

Analyze with Ollama (local, free):

# Make sure Ollama is running: ollama serve
logai investigate app.log --ai ollama
logai investigate app.log --ai ollama --model llama3.2

Analyze with AWS Bedrock:

# With region flag (recommended)
logai investigate app.log --ai bedrock --region us-east-1

# With specific model
logai investigate app.log --ai bedrock --region us-east-1 --model anthropic.claude-3-haiku-20240307-v1:0

# Or set region via environment variable
export AWS_REGION=us-east-1
logai investigate app.log --ai bedrock

Disable caching (force fresh analysis):

logai investigate app.log --ai openai --no-cache
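
Caching means identical error patterns analyzed with the same provider and model can reuse a stored response instead of a fresh API call. A hypothetical sketch of such a key scheme (illustrative only; LogAI's actual cache layout is internal):

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical key scheme: the same provider + model + normalized error
// pattern maps to the same cached explanation, so repeated runs skip the API.
fn cache_key(provider: &str, model: &str, pattern: &str) -> String {
    let mut hasher = DefaultHasher::new();
    (provider, model, pattern).hash(&mut hasher);
    format!("{:016x}", hasher.finish())
}

fn main() {
    let key = cache_key("openai", "gpt-4", "Timeout waiting for response from <DYNAMIC>");
    println!("cache entry: {key}.json"); // hypothetical file naming
}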

Parallel Analysis

LogAI processes error groups in parallel for faster analysis. Control concurrency:

# Default: 5 concurrent requests
logai investigate app.log --ai ollama

# High concurrency (faster, more resources)
logai investigate app.log --ai ollama --concurrency 15

# Low concurrency (slower, less resources)
logai investigate app.log --ai ollama --concurrency 2

# Sequential processing
logai investigate app.log --ai ollama --concurrency 1

Performance comparison (100 error groups):

  • Sequential (concurrency=1): ~25 minutes
  • Default (concurrency=5): ~5 minutes
  • High (concurrency=15): ~2 minutes
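
The scaling above follows from bounding the number of in-flight requests. A minimal sketch of the bounded-concurrency pattern, assuming the tokio runtime (illustrative, not LogAI's actual implementation):

use std::sync::Arc;
use tokio::sync::Semaphore;

// Illustrative: run one analysis task per error group, with at most
// `concurrency` requests in flight at any time.
async fn analyze_all(groups: Vec<String>, concurrency: usize) {
    let permits = Arc::new(Semaphore::new(concurrency));
    let mut handles = Vec::new();
    for group in groups {
        let permits = Arc::clone(&permits);
        handles.push(tokio::spawn(async move {
            // Waits here whenever `concurrency` analyses are already running.
            let _permit = permits.acquire().await.expect("semaphore closed");
            analyze_one(&group).await;
        }));
    }
    for handle in handles {
        let _ = handle.await;
    }
}

// Stand-in for the per-group AI call.
async fn analyze_one(group: &str) {
    println!("analyzing: {group}");
}

#[tokio::main]
async fn main() {
    let groups: Vec<String> = (1..=10).map(|i| format!("error group {i}")).collect();
    analyze_all(groups, 5).await;
}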

Configuration File

Create ~/.logai/config.toml to set defaults:

# AI Settings
[ai]
provider = "ollama"  # Default AI provider

# Analysis settings
[analysis]
max_concurrency = 5        # Concurrent AI requests (1-20)
enable_retry = true        # Retry failed requests
max_retries = 3            # Maximum retry attempts
initial_backoff_ms = 1000  # Initial retry delay
max_backoff_ms = 30000     # Maximum retry delay
enable_cache = true        # Cache AI responses
truncate_length = 2000     # Max message length

# Provider configurations
[providers.ollama]
enabled = true
model = "llama3.2"
host = "http://localhost:11434"

[providers.openai]
enabled = false
# api_key = "sk-..."  # Or use OPENAI_API_KEY env var
# model = "gpt-4"

Configuration examples:

High-performance (self-hosted Ollama):

[analysis]
max_concurrency = 15
max_retries = 2
initial_backoff_ms = 500

Conservative (API rate limits):

[analysis]
max_concurrency = 2
max_retries = 5
initial_backoff_ms = 2000
max_backoff_ms = 60000

Fast-fail (development):

[analysis]
max_concurrency = 10
enable_retry = false
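
For intuition, a small sketch of how the retry settings above typically interact, assuming the delay doubles on each attempt and is capped at max_backoff_ms (illustrative, not LogAI's internal code):

use std::time::Duration;

// Illustrative backoff schedule: the delay doubles on each attempt and is
// capped at `max_ms`, mirroring initial_backoff_ms / max_backoff_ms above.
fn backoff_delays(max_retries: u32, initial_ms: u64, max_ms: u64) -> Vec<Duration> {
    (0..max_retries)
        .map(|attempt| Duration::from_millis(initial_ms.saturating_mul(1u64 << attempt).min(max_ms)))
        .collect()
}

fn main() {
    // With the "conservative" settings above: 2s, 4s, 8s, 16s, 32s.
    for (i, delay) in backoff_delays(5, 2000, 60_000).iter().enumerate() {
        println!("retry {} after {:?}", i + 1, delay);
    }
}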

MCP Integration (Advanced)

LogAI supports Model Context Protocol (MCP) to connect external tools and data sources during analysis.

Create ~/.logai/mcp.toml:

default_timeout = 30

[[servers]]
name = "filesystem"
enabled = true

[servers.connection]
type = "Stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]

Use with MCP tools:

logai investigate app.log --ai ollama --mcp-config ~/.logai/mcp.toml

Disable MCP:

logai investigate app.log --ai ollama --no-mcp

See MCP Integration Guide for more details.

Example Output

🤖 LogAI Analysis Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

📊 Summary
   Errors found: 3 unique patterns (9 occurrences)
   Time range: 2025-11-17 10:30:00 - 2025-11-17 10:35:00

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🔴 Critical: Connection failed to database (3 occurrences)
   First seen: 5 minutes ago | Last seen: 4 minutes ago

   📋 Example:
   Connection failed to database
   📍 Location: db.rs:42

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🔴 Critical: Timeout waiting for response from <DYNAMIC> (3 occurrences)
   First seen: 1 minute ago | Last seen: 30 seconds ago

   📋 Example:
   Timeout waiting for response from api.example.com

Supported Log Formats

  • JSON logs - Structured logs with fields like level, message, timestamp
  • Plain text logs - Traditional text logs with timestamps and severity levels
  • Apache logs - Apache HTTP server access and error logs (Common and Combined formats)
  • Nginx logs - Nginx web server access and error logs
  • Syslog - System logs in RFC3164 and RFC5424 formats
  • Auto-detection - Automatically detects format from log content
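
As an illustration of how auto-detection can work, here is a simplified sniffing heuristic (a sketch of the general approach, not LogAI's actual detector):

// Illustrative format sniffing: check the most structured formats first.
fn detect_format(line: &str) -> &'static str {
    let line = line.trim();
    if line.starts_with('{') && line.ends_with('}') {
        "json" // e.g. {"level":"error","message":"boom","timestamp":"..."}
    } else if line.starts_with('<') {
        "syslog" // RFC3164/RFC5424 lines open with a <priority> field
    } else if line.contains("\"GET ") || line.contains("\"POST ") {
        "apache/nginx access log" // e.g. 127.0.0.1 - - [17/Nov/2025:10:30:00 +0000] "GET / HTTP/1.1" 200 512
    } else {
        "plain text"
    }
}

fn main() {
    assert_eq!(detect_format(r#"{"level":"error","message":"boom"}"#), "json");
    assert_eq!(detect_format("<34>Nov 17 10:30:00 host app: failed"), "syslog");
    println!("ok");
}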

Development

Build:

cargo build

Run tests:

cargo test

Run with sample logs:

cargo run -- investigate tests/fixtures/sample.log

Supported AI Providers

Provider   Models                      Cost   Speed    Setup
OpenAI     GPT-4, GPT-4o-mini          Paid   Fast     API key required
Claude     Claude 3.5 Sonnet/Haiku     Paid   Fast     API key required
Gemini     Gemini 1.5 Flash/Pro        Paid   Fast     API key required
Bedrock    Claude, Llama, Titan        Paid   Fast     AWS credentials
Ollama     Llama 3.2, Mistral, etc.    Free   Medium   Local install

How It Works

  1. Parse - Automatically detects log format (JSON, plain text, Apache, Nginx, Syslog)
  2. Group - Clusters similar errors by normalizing dynamic values (see the sketch after this list)
  3. Deduplicate - Shows unique patterns with occurrence counts
  4. Analyze - Uses AI to explain errors and suggest fixes (optional)
    • Processes multiple error groups in parallel (configurable concurrency)
    • Automatic retry with exponential backoff for transient failures
    • Real-time progress tracking with throughput and ETA
  5. Cache - Stores AI responses locally to reduce costs
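
To make step 2 concrete, here is a minimal sketch of dynamic-value normalization using the regex crate; the patterns are hypothetical and show the general technique, not LogAI's actual grouping rules:

use regex::Regex;

// Illustrative only: mask values that vary between otherwise-identical errors
// so that "Timeout waiting for response from api.example.com" and
// "Timeout waiting for response from db.example.com" land in the same group.
fn normalize(message: &str) -> String {
    // Hypothetical patterns; the real rules are internal to LogAI.
    let host = Regex::new(r"\b[\w.-]+\.[a-z]{2,}\b").unwrap();
    let num = Regex::new(r"\b\d+\b").unwrap();
    let masked = host.replace_all(message, "<DYNAMIC>");
    num.replace_all(&masked, "<DYNAMIC>").into_owned()
}

fn main() {
    let a = normalize("Timeout waiting for response from api.example.com");
    let b = normalize("Timeout waiting for response from db.example.com");
    assert_eq!(a, b); // identical pattern -> same group
    println!("{a}");
}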

Roadmap

  ✅ Core parsing and grouping
  ✅ AI integration (OpenAI, Claude, Gemini, Ollama, AWS Bedrock)
  ✅ Response caching
  ✅ MCP (Model Context Protocol) integration
  🚧 Built-in MCP tools (search_docs, check_metrics, search_code, query_logs)
  🚧 Watch mode for real-time analysis
  ✅ HTML reports
  ✅ Advanced log format support (Apache, Nginx, Syslog)
  🚧 Anomaly detection and trend analysis

Documentation

Guides in the repository's docs directory cover:

  • Getting Started
  • For Developers
  • Operations
  • Reference
  • Community

Contributing

Contributions are welcome! Please read our Contributing Guide and Code of Conduct.

Future Plans

See GitHub Issues for planned features and known issues.

License

MIT License - see LICENSE file

Author

Built with ❤️ by Ranjan Mohanty

Acknowledgments

  • Inspired by the need for better log debugging tools
  • Thanks to all AI providers for making this possible
  • Built with Rust 🦀

Star History

If you find LogAI useful, please consider giving it a star ⭐

Support

For questions and bug reports, open an issue on GitHub.