| Crates.io | vex-ai |
| lib.rs | vex-ai |
| version | 0.1.0 |
| created_at | 2025-09-02 07:48:31.019013+00 |
| updated_at | 2025-09-02 07:48:31.019013+00 |
| description | VEX - Your local AI coding assistant with attitude and deep expertise |
| homepage | https://jmevatt.github.io/vex/ |
| repository | https://github.com/jmevatt/vex |
| max_upload_size | |
| id | 1820730 |
| size | 412,734 |
Your local AI coding assistant with attitude and deep expertise. VEX helps developers with code analysis, debugging, file operations, and development workflows - all while keeping your code private and secure.
```bash
git clone https://github.com/jmevatt/vex.git
cd vex

# Set up VEX + Ollama (one-time setup)
./quick-start.sh

# Run VEX on your project (run this from your project directory)
cd /path/to/your/project
/path/to/vex/vex-docker.sh
```
The quick-start script handles the remaining setup automatically.

Install from crates.io:

```bash
cargo install vex-ai
```
Or build from source:

```bash
git clone https://github.com/jmevatt/vex.git
cd vex
cargo build --release
sudo cp target/release/vex /usr/local/bin/
```
Using Docker Compose:

```bash
# Start services
docker-compose up -d

# Download models
docker-compose --profile setup run --rm model-downloader

# Use VEX
docker-compose exec vex vex
```
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a compatible model (recommended)
ollama pull qwen2.5:32b

# Or, for faster responses with less memory:
ollama pull qwen2.5:14b
```
```bash
# Install Poetry (if not already installed)
curl -sSL https://install.python-poetry.org | python3 -

# Install dependencies
cd vex
poetry install

# Run VEX
vex
```
```bash
# Start an interactive session
vex

# Or with a specific model
vex --model qwen2.5:32b

# Execute a single command
vex exec "analyze this rust project structure"

# Quick file operations
vex exec "show me the main function in src/main.rs"
```
🔍 Code Analysis
> analyze the performance bottlenecks in this codebase
> explain how the authentication system works
> find all TODO comments and create a summary
🐛 Debugging
> help me debug this compilation error
> trace through this function and explain the logic
> suggest improvements for this algorithm
🛠️ Development Tasks
> implement error handling for this API endpoint
> write unit tests for the user service
> refactor this code to use async/await
| Tool | Description |
|---|---|
| read | Read file contents with syntax highlighting |
| write | Create new files with safety checks |
| edit | Edit existing files with smart replacements |
| multi_edit | Batch edit multiple files efficiently |
| glob | Find files using glob patterns |
| grep | Search through files with regex support |
| bash | Execute shell commands safely |
| git_* | Full git integration (status, add, commit, etc.) |
| web_search | Search the web for current information |
| web_fetch | Fetch and analyze web content |
| todo_write | Create and manage project todo lists |
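Conceptually, a tool set like the one above is a mapping from tool names to handler functions. The sketch below shows one way such a registry and dispatcher could be wired; it is illustrative only — `dispatch`, `tool_read`, and `tool_glob` are hypothetical names, not VEX's actual API.

```python
# Illustrative tool registry: maps tool names to handler functions.
# This is a sketch of the pattern, not VEX's implementation.
import fnmatch
import os


def tool_read(path):
    """Read file contents (the 'read' tool, minus syntax highlighting)."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()


def tool_glob(pattern, root="."):
    """Find files matching a glob pattern under root (the 'glob' tool)."""
    matches = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if fnmatch.fnmatch(name, pattern):
                matches.append(os.path.join(dirpath, name))
    return matches


TOOLS = {"read": tool_read, "glob": tool_glob}


def dispatch(tool_name, *args, **kwargs):
    """Look up a tool by name and invoke it with the given arguments."""
    if tool_name not in TOOLS:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](*args, **kwargs)
```

A registry like this keeps adding a new tool down to one function plus one dictionary entry.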
VEX can handle complex, multi-step tasks autonomously:
```bash
vex exec "analyze this codebase, identify security issues, create a todo list for fixes, and generate a security report"
```
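An autonomous run like this one chains planning and tool execution until the task is complete. The loop below is a bare conceptual sketch of that shape — `plan_steps` and `run_tool` are hypothetical stand-ins (a real agent would have the model propose steps and dispatch real tools), not VEX internals.

```python
# Conceptual agent loop: plan, then execute each step and collect results.
# All names here are hypothetical; this is not VEX's actual implementation.


def plan_steps(task):
    """Stand-in planner; a real agent would ask the model to decompose the task."""
    return ["analyze codebase", "identify security issues",
            "create todo list", "generate report"]


def run_tool(step):
    """Stand-in executor; a real agent would dispatch a tool and capture output."""
    return f"done: {step}"


def run_autonomously(task):
    """Execute every planned step in order, accumulating results."""
    return [run_tool(step) for step in plan_steps(task)]
```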
Built-in safety measures prevent destructive commands (`rm -rf /`, etc.).

Beautiful terminal output, including syntax highlighting.
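A guardrail like the one described can be as simple as matching a command against a blocklist before running it. The sketch below illustrates the idea; the patterns and the `is_dangerous` name are assumptions for illustration, not VEX's actual rules.

```python
# Illustrative command-safety check: reject known destructive patterns
# before execution. Not VEX's actual rule set.
import re

DANGEROUS_PATTERNS = [
    r"\brm\s+(-[a-z]*r[a-z]*f|-[a-z]*f[a-z]*r)\s+/",  # recursive force-delete of an absolute path
    r"\bmkfs\b",                                       # reformat a filesystem
    r"\bdd\s+.*of=/dev/",                              # overwrite a block device
]


def is_dangerous(command):
    """Return True if the command matches a known destructive pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```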
IMPORTANT: VEX needs access to your project files! Always use the wrapper script or mount your project directory.
```bash
# Navigate to your project
cd /path/to/your/awesome/project

# Run VEX interactively (mounts the current directory)
/path/to/vex/vex-docker.sh

# Run a single command
/path/to/vex/vex-docker.sh "analyze this codebase"

# Specify a different project directory
/path/to/vex/vex-docker.sh /path/to/other/project
/path/to/vex/vex-docker.sh /path/to/other/project "read main.py"
```
```bash
# Interactive mode with the project mounted
docker-compose run --rm -it -v "$(pwd)":/workspace vex vex

# Single command with the project mounted
docker-compose run --rm -v "$(pwd)":/workspace vex vex exec "check git status"

# Mount a specific directory
docker-compose run --rm -v /path/to/project:/workspace vex vex exec "read README.md"

# Check logs
docker-compose logs -f ollama

# Stop services
docker-compose down
```
Once your project is mounted, VEX can:
```bash
# Read your project files
vex exec "read src/main.rs"

# Analyze code structure
vex exec "analyze the project structure"

# Check git status
vex exec "git status"

# Search for patterns
vex exec "find all TODO comments in the code"
```
Uncomment the GPU sections in docker-compose.yml for hardware acceleration:
```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```
```bash
# Use different Ollama models
vex --model llama3.1:8b
vex --model codellama:34b
vex --model qwen2.5:72b

# Custom Ollama endpoint
vex --ollama-url http://192.168.1.100:11434
```
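Before pointing VEX at a custom endpoint, it can be worth checking that Ollama is actually reachable there; `/api/tags` is Ollama's model-listing endpoint and responds with JSON. A small stdlib-only sketch (the `ollama_reachable` helper is hypothetical, for illustration):

```python
# Quick reachability check for an Ollama server, using only the stdlib.
import json
import urllib.error
import urllib.request


def ollama_reachable(base_url, timeout=2.0):
    """Return True if an Ollama server answers /api/tags at base_url."""
    try:
        url = base_url.rstrip("/") + "/api/tags"
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            json.load(resp)  # expected shape: {"models": [...]}
        return True
    except (urllib.error.URLError, ValueError, OSError):
        return False
```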
```bash
export VEX_MODEL="qwen2.5:32b"
export VEX_OLLAMA_URL="http://localhost:11434"
export VEX_DEBUG=1  # Enable debug logging
```
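Resolving settings like these is a plain environment lookup with a fallback default. A sketch of how a client might do it (defaults taken from the examples above; the `load_config` name is hypothetical):

```python
# Resolve configuration from environment variables, with defaults
# matching the documented examples. Illustrative only.
import os


def load_config():
    return {
        "model": os.environ.get("VEX_MODEL", "qwen2.5:32b"),
        "ollama_url": os.environ.get("VEX_OLLAMA_URL", "http://localhost:11434"),
        "debug": os.environ.get("VEX_DEBUG", "0") == "1",
    }
```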
VEX automatically saves your chat history to:
- Linux/macOS: `~/.config/vex/chat_history.txt`
- Windows: `%APPDATA%\vex\chat_history.txt`

VEX includes comprehensive safety measures.
We welcome contributions! Please see our Contributing Guidelines for details.
```bash
git clone https://github.com/jmevatt/vex.git
cd vex

# Install dependencies
cargo build

# Run tests
cargo test

# Run with coverage
cargo tarpaulin --out html

# Format code
cargo fmt

# Lint code
cargo clippy
```
```
vex/
├── src/
│   ├── core/      # Core VEX functionality
│   ├── models/    # AI model integrations
│   ├── tools/     # Built-in tools
│   ├── cli/       # CLI interface
│   └── bin/       # Binary executables
├── tests/         # Integration tests
├── docs/          # Documentation
└── scripts/       # Utility scripts
```
**Ollama Connection Failed**

```bash
# Check if Ollama is running
ollama list

# Start the Ollama service
systemctl start ollama        # Linux
brew services start ollama    # macOS
```
**Model Not Found**

```bash
# Pull the required model
ollama pull qwen2.5:32b
```
**Poetry Installation Issues**

```bash
# Reset the Poetry environment
poetry env remove python
poetry install
```
**Permission Denied**

```bash
# Fix VEX binary permissions
chmod +x /usr/local/bin/vex
```
This project is licensed under the MIT License - see the LICENSE file for details.
Made with 🔥 by the VEX team
GitHub • Issues • Discussions