vex-ai

crate: vex-ai
version: 0.1.0
created: 2025-09-02 07:48:31 UTC
updated: 2025-09-02 07:48:31 UTC
description: VEX - Your local AI coding assistant with attitude and deep expertise
homepage: https://jmevatt.github.io/vex/
repository: https://github.com/jmevatt/vex
size: 412,734
owner: Jordan Evatt (jmevatt)
documentation: https://github.com/jmevatt/vex#readme

README

🔥 VEX - AI Coding Assistant with Attitude


Your local AI coding assistant with attitude and deep expertise. VEX helps developers with code analysis, debugging, file operations, and development workflows - all while keeping your code private and secure.

✨ Features

  • 🧠 Local AI Integration - Works with Ollama and local LLM models
  • 🔒 Privacy First - All processing happens locally, your code never leaves your machine
  • 🎨 Syntax Highlighting - Beautiful code display with 20+ language support
  • 🛡️ Safety Built-in - Advanced content filtering and command safety checks
  • 🔧 Rich Tool Set - File operations, git integration, bash execution, and more
  • 💬 Enhanced Chat - Interactive CLI with history, auto-complete, and shortcuts
  • 🚀 Workflow Engine - Multi-step task automation with reasoning chains
  • 📊 Progress Tracking - Built-in todo management for complex projects

🚀 Quick Start

Prerequisites

  • Rust 1.70+ (Install Rust)
  • Ollama with a compatible model (Install Ollama)
  • Python 3.8+ (optional, for enhanced content filtering)

Installation

🚀 Quick Start with Docker (Easiest)

git clone https://github.com/jmevatt/vex.git
cd vex

# Setup VEX + Ollama (one-time setup)
./quick-start.sh

# Run VEX on your project (run this from your project directory)
cd /path/to/your/project
/path/to/vex/vex-docker.sh

This will automatically:

  • Start Ollama service locally
  • Download AI models (qwen2.5:32b or alternatives)
  • Mount your project files into VEX container
  • Start VEX with access to your code

From Crates.io (Recommended for local install)

cargo install vex-ai

From Source

git clone https://github.com/jmevatt/vex.git
cd vex
cargo build --release
sudo cp target/release/vex /usr/local/bin/

Docker Compose (Manual setup)

# Start services
docker-compose up -d

# Download models
docker-compose --profile setup run --rm model-downloader

# Use VEX
docker-compose exec vex vex

Setup

  1. Install and start Ollama:
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a compatible model (recommended)
ollama pull qwen2.5:32b
# Or for faster responses with less memory:
ollama pull qwen2.5:14b
  2. Optional: Install Detoxify for enhanced safety:
# Install Poetry (if not already installed)
curl -sSL https://install.python-poetry.org | python3 -

# Install dependencies
cd vex
poetry install
  3. Run VEX:
vex

🎯 Usage Examples

Interactive Chat Mode

# Start interactive session
vex

# Or with a specific model
vex --model qwen2.5:32b

Single Command Execution

# Execute a single command
vex exec "analyze this rust project structure"

# Quick file operations
vex exec "show me the main function in src/main.rs"

Common Workflows

🔍 Code Analysis

> analyze the performance bottlenecks in this codebase
> explain how the authentication system works
> find all TODO comments and create a summary

🐛 Debugging

> help me debug this compilation error
> trace through this function and explain the logic
> suggest improvements for this algorithm

🛠️ Development Tasks

> implement error handling for this API endpoint
> write unit tests for the user service
> refactor this code to use async/await

🔧 Available Tools

Tool        Description
----------  -----------
read        Read file contents with syntax highlighting
write       Create new files with safety checks
edit        Edit existing files with smart replacements
multi_edit  Batch edit multiple files efficiently
glob        Find files using glob patterns
grep        Search through files with regex support
bash        Execute shell commands safely
git_*       Full git integration (status, add, commit, etc.)
web_search  Search the web for current information
web_fetch   Fetch and analyze web content
todo_write  Create and manage project todo lists
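
All of these flow through a single dispatch layer that routes model-requested calls by name. A minimal sketch of how such a layer could be shaped (the Tool trait, ReadTool, and dispatch function are illustrative assumptions, not VEX's actual internals):

use std::collections::HashMap;

// Illustrative interface: every tool accepts a raw argument string
// and returns either output text or an error message.
trait Tool {
    fn run(&self, args: &str) -> Result<String, String>;
}

// The simplest tool: read a file from the workspace.
struct ReadTool;

impl Tool for ReadTool {
    fn run(&self, args: &str) -> Result<String, String> {
        std::fs::read_to_string(args.trim()).map_err(|e| e.to_string())
    }
}

// Route a model-requested call to the tool registered under that name.
fn dispatch(tools: &HashMap<&str, Box<dyn Tool>>, name: &str, args: &str) -> String {
    match tools.get(name) {
        Some(tool) => tool.run(args).unwrap_or_else(|e| format!("error: {e}")),
        None => format!("unknown tool: {name}"),
    }
}

fn main() {
    let mut tools: HashMap<&str, Box<dyn Tool>> = HashMap::new();
    tools.insert("read", Box::new(ReadTool));
    println!("{}", dispatch(&tools, "read", "Cargo.toml"));
}

A real registry would also attach per-tool safety policies (see Content Filtering below) before anything executes.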

⚡ Advanced Features

Workflow Engine

VEX can handle complex, multi-step tasks autonomously:

vex exec "analyze this codebase, identify security issues, create a todo list for fixes, and generate a security report"

Content Filtering

Built-in safety measures prevent:

  • Destructive system commands (rm -rf /, etc.)
  • Harmful content generation
  • Unsafe code suggestions
  • Data exfiltration attempts
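
The command check in the first point can be as simple as a denylist consulted before anything reaches the bash tool. A minimal sketch assuming a pattern-matching approach (VEX's actual filter is likely more elaborate):

// Illustrative denylist: reject shell commands matching obviously
// destructive patterns before they ever reach the bash tool.
fn is_command_safe(cmd: &str) -> bool {
    const DENYLIST: &[&str] = &[
        "rm -rf /",      // wipe the filesystem
        "mkfs",          // reformat a device
        ":(){ :|:& };:", // classic fork bomb
        "> /dev/sd",     // overwrite a raw disk
    ];
    // Collapse runs of whitespace so "rm   -rf  /" is still caught.
    let normalized = cmd.split_whitespace().collect::<Vec<_>>().join(" ");
    !DENYLIST.iter().any(|bad| normalized.contains(bad))
}

fn main() {
    assert!(is_command_safe("cargo build --release"));
    assert!(!is_command_safe("sudo rm   -rf /"));
    println!("filter ok");
}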

Syntax Highlighting

Beautiful terminal output with support for:

  • Rust, Python, JavaScript, TypeScript, Go, Java
  • Shell scripts, SQL, JSON, YAML, TOML
  • Markdown, HTML, CSS, and more
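
Highlighting builds on the syntect library (credited in the acknowledgments). A minimal example of 24-bit terminal highlighting with syntect 5, closely following syntect's own documented usage rather than VEX's exact code:

// Cargo.toml: syntect = "5"
use syntect::easy::HighlightLines;
use syntect::highlighting::ThemeSet;
use syntect::parsing::SyntaxSet;
use syntect::util::{as_24_bit_terminal_escaped, LinesWithEndings};

fn main() {
    // Bundled syntax definitions and color themes.
    let ps = SyntaxSet::load_defaults_newlines();
    let ts = ThemeSet::load_defaults();
    let syntax = ps.find_syntax_by_extension("rs").expect("rust syntax");
    let mut h = HighlightLines::new(syntax, &ts.themes["base16-ocean.dark"]);

    let code = "fn main() {\n    println!(\"hello\");\n}\n";
    for line in LinesWithEndings::from(code) {
        // Style each token, then emit ANSI 24-bit color escapes.
        let ranges = h.highlight_line(line, &ps).expect("highlight");
        print!("{}", as_24_bit_terminal_escaped(&ranges[..], false));
    }
    println!("\x1b[0m"); // reset terminal colors
}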

🐳 Docker Usage

IMPORTANT: VEX needs access to your project files! Always use the wrapper script or mount your project directory.

Using the Wrapper Script (Recommended)

# Navigate to your project
cd /path/to/your/awesome/project

# Run VEX interactively (mounts current directory)
/path/to/vex/vex-docker.sh

# Run a single command
/path/to/vex/vex-docker.sh "analyze this codebase"

# Specify different project directory
/path/to/vex/vex-docker.sh /path/to/other/project
/path/to/vex/vex-docker.sh /path/to/other/project "read main.py"

Manual Docker Commands

# Interactive mode with project mounted
docker-compose run --rm -it -v $(pwd):/workspace vex vex

# Single command with project mounted
docker-compose run --rm -v $(pwd):/workspace vex vex exec "check git status"

# Mount specific directory
docker-compose run --rm -v /path/to/project:/workspace vex vex exec "read README.md"

# Check logs
docker-compose logs -f ollama

# Stop services
docker-compose down

File Access Examples

Once your project is mounted, VEX can:

# Read your project files
vex exec "read src/main.rs"

# Analyze code structure  
vex exec "analyze the project structure"

# Check git status
vex exec "git status"

# Search for patterns
vex exec "find all TODO comments in the code"

GPU Support (NVIDIA)

Uncomment the GPU sections in docker-compose.yml for hardware acceleration:

deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]

🔧 Configuration

Model Configuration

# Use different Ollama models
vex --model llama3.1:8b
vex --model codellama:34b
vex --model qwen2.5:72b

# Custom Ollama endpoint  
vex --ollama-url http://192.168.1.100:11434

Environment Variables

export VEX_MODEL="qwen2.5:32b"
export VEX_OLLAMA_URL="http://localhost:11434"
export VEX_DEBUG=1  # Enable debug logging
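
A sketch of how these settings might be resolved, assuming the conventional precedence of explicit CLI flag over environment variable over built-in default (the exact order in VEX is an assumption here):

use std::env;

// Resolve a setting: CLI flag (if given) > environment variable > default.
fn resolve(flag: Option<String>, env_key: &str, default: &str) -> String {
    flag.or_else(|| env::var(env_key).ok())
        .unwrap_or_else(|| default.to_string())
}

fn main() {
    let model = resolve(None, "VEX_MODEL", "qwen2.5:32b");
    let url = resolve(None, "VEX_OLLAMA_URL", "http://localhost:11434");
    let debug = env::var("VEX_DEBUG").map(|v| v == "1").unwrap_or(false);
    println!("model={model} url={url} debug={debug}");
}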

Chat History

VEX automatically saves your chat history to:

  • Linux/macOS: ~/.config/vex/chat_history.txt
  • Windows: %APPDATA%\vex\chat_history.txt
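
Both locations correspond to the platform config directory. Assuming a lookup through the widely used dirs crate (an implementation guess, not confirmed from VEX's source), the resolution looks roughly like:

// Cargo.toml: dirs = "5"
use std::path::PathBuf;

// Resolve the history file from the platform config directory.
// dirs maps this to ~/.config on Linux and %APPDATA% on Windows;
// note that on macOS it returns ~/Library/Application Support, so a
// tool writing to ~/.config there would special-case the path.
fn history_path() -> Option<PathBuf> {
    dirs::config_dir().map(|dir| dir.join("vex").join("chat_history.txt"))
}

fn main() {
    match history_path() {
        Some(p) => println!("history at {}", p.display()),
        None => eprintln!("no config directory on this platform"),
    }
}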

🛡️ Security & Safety

VEX includes comprehensive safety measures:

  • Command Filtering: Prevents destructive system operations
  • Content Filtering: Blocks harmful content generation using local ML models
  • File Safety: Automatic backups before destructive file operations
  • Privacy First: All processing happens locally - no data sent to external services
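
The file-safety point, for instance, comes down to copying a file aside before overwriting it. A minimal backup-before-write sketch (the .bak naming is an illustrative assumption):

use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// Copy the file aside before overwriting it, so a bad edit can be
// rolled back. Skips the backup when the file doesn't exist yet.
fn write_with_backup(path: &Path, contents: &str) -> io::Result<()> {
    if path.exists() {
        let backup = PathBuf::from(format!("{}.bak", path.display()));
        fs::copy(path, &backup)?;
    }
    fs::write(path, contents)
}

fn main() -> io::Result<()> {
    let path = Path::new("example.txt");
    write_with_backup(path, "first version\n")?;
    write_with_backup(path, "second version\n")?; // example.txt.bak keeps the first
    Ok(())
}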

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

Development Setup

git clone https://github.com/jmevatt/vex.git
cd vex

# Install dependencies
cargo build

# Run tests
cargo test

# Run with coverage
cargo tarpaulin --out html

# Format code
cargo fmt

# Lint code
cargo clippy

Project Structure

vex/
├── src/
│   ├── core/           # Core VEX functionality
│   ├── models/         # AI model integrations  
│   ├── tools/          # Built-in tools
│   ├── cli/            # CLI interface
│   └── bin/            # Binary executables
├── tests/              # Integration tests
├── docs/               # Documentation
└── scripts/            # Utility scripts

📋 Roadmap

  • VSCode Extension - Integrate VEX directly into your editor
  • Web Interface - Optional web UI for team collaboration
  • Plugin System - Custom tool development API
  • Cloud Sync - Encrypted chat history synchronization
  • Multi-Language Support - Internationalization
  • Performance Monitoring - Built-in profiling and metrics

🐛 Troubleshooting

Common Issues

Ollama Connection Failed

# Check if Ollama is running
ollama list

# Start Ollama service  
systemctl start ollama  # Linux
brew services start ollama  # macOS

Model Not Found

# Pull the required model
ollama pull qwen2.5:32b

Poetry Installation Issues

# Reset poetry environment
poetry env remove python
poetry install

Permission Denied

# Fix VEX binary permissions
chmod +x /usr/local/bin/vex

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Ollama - Local LLM runtime
  • Qwen2.5 - Powerful coding model
  • Rust Community - Amazing ecosystem and tools
  • Syntect - Syntax highlighting library
  • Detoxify - Content safety filtering

📞 Support


Made with 🔥 by the VEX team

GitHub · Issues · Discussions

