cmdai 0.1.0
Convert natural language to shell commands using local LLMs
Repository: https://github.com/wildcard/cmdai
Author: Kobi Kadosh (wildcard)

README

cmdai

🚧 Early Development Stage - Architecture defined, core implementation in progress

cmdai converts natural language descriptions into safe POSIX shell commands using local LLMs. Built with Rust for blazing-fast performance, single-binary distribution, and safety-first design.

$ cmdai "list all PDF files in Downloads folder larger than 10MB"
Generated command:
  find ~/Downloads -name "*.pdf" -size +10M -ls

Execute this command? (y/N) y

📋 Project Status

This project is in active early development. The architecture and module structure are in place, with implementation ongoing.

✅ Completed

  • Core CLI structure with comprehensive argument parsing
  • Modular architecture with trait-based backends
  • Embedded model backend with MLX (Apple Silicon) and CPU variants ✨
  • Remote backend support (Ollama, vLLM) with automatic fallback ✨
  • Safety validation with pattern matching and risk assessment
  • Configuration management with TOML support
  • Interactive user confirmation flows
  • Multiple output formats (JSON, YAML, Plain)
  • Contract-based test structure with TDD methodology
  • Multi-platform CI/CD pipeline

🚧 In Progress

  • Model downloading and caching system
  • Advanced command execution engine
  • Performance optimization

📅 Planned

  • Multi-step goal completion
  • Advanced context awareness
  • Shell script generation
  • Command history and learning

✨ Features (Planned & In Development)

  • 🚀 Instant startup - Single binary with <100ms cold start (target)
  • 🧠 Local LLM inference - Optimized for Apple Silicon with MLX
  • 🛡️ Safety-first - Comprehensive command validation framework
  • 📦 Zero dependencies - Self-contained binary distribution
  • 🎯 Multiple backends - Extensible backend system (MLX, vLLM, Ollama)
  • 💾 Smart caching - Hugging Face model management
  • 🌐 Cross-platform - macOS, Linux, Windows support

🚀 Quick Start

Prerequisites

  • Rust 1.75+ with Cargo
  • CMake (for model inference backends)
  • macOS with Apple Silicon (optional, for GPU acceleration)
  • Xcode (optional, for full MLX GPU support on Apple Silicon)

Platform-Specific Setup

macOS (Recommended for Apple Silicon)

For complete macOS setup instructions including GPU acceleration, see macOS Setup Guide.

Quick Install:

# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"

# Install CMake via Homebrew
brew install cmake

# Clone and build
git clone https://github.com/wildcard/cmdai.git
cd cmdai
cargo build --release

# Run
./target/release/cmdai "list all files"

For GPU Acceleration (Apple Silicon only):

  • Install Xcode from App Store (required for Metal compiler)
  • Build with: cargo build --release --features embedded-mlx
  • See macOS Setup Guide for details

Note: The default build uses a stub implementation that works immediately without Xcode. For production GPU acceleration, Xcode is required.

Linux

# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"

# Install dependencies (Ubuntu/Debian)
sudo apt-get update
sudo apt-get install cmake build-essential

# Clone and build
git clone https://github.com/wildcard/cmdai.git
cd cmdai
cargo build --release

Windows

# Install Rust from https://rustup.rs
# Install CMake from https://cmake.org/download/

# Clone and build
git clone https://github.com/wildcard/cmdai.git
cd cmdai
cargo build --release

Building from Source

# Clone the repository
git clone https://github.com/wildcard/cmdai.git
cd cmdai

# Build the project (uses CPU backend by default)
cargo build --release

# Run the CLI
./target/release/cmdai --version

Development Commands

# Run tests
make test

# Format code
make fmt

# Run linter
make lint

# Build optimized binary
make build-release

# Run with debug logging
RUST_LOG=debug cargo run -- "your command"

📖 Usage

Basic Syntax

cmdai [OPTIONS] <PROMPT>

Examples

# Basic command generation
cmdai "list all files in the current directory"

# With specific shell
cmdai --shell zsh "find large files"

# JSON output for scripting
cmdai --output json "show disk usage"

# Adjust safety level
cmdai --safety permissive "clean temporary files"

# Auto-confirm dangerous commands
cmdai --confirm "remove old log files"

# Verbose mode with timing info
cmdai --verbose "search for Python files"

CLI Options

Option                   Description                                           Status
-s, --shell <SHELL>      Target shell (bash, zsh, fish, sh, powershell, cmd)   ✅ Implemented
--safety <LEVEL>         Safety level (strict, moderate, permissive)           ✅ Implemented
-o, --output <FORMAT>    Output format (json, yaml, plain)                     ✅ Implemented
-y, --confirm            Auto-confirm dangerous commands                       ✅ Implemented
-v, --verbose            Enable verbose output with timing                     ✅ Implemented
-c, --config <FILE>      Custom configuration file                             ✅ Implemented
--show-config            Display current configuration                         ✅ Implemented
--auto                   Execute without confirmation                          📅 Planned
--allow-dangerous        Allow potentially dangerous commands                  📅 Planned

Examples (Target Functionality)

# Simple command generation
cmdai "compress all images in current directory"

# With specific backend
cmdai --backend mlx "find large log files"

# Verbose mode for debugging
cmdai --verbose "show disk usage"

πŸ—οΈ Architecture

Module Structure

cmdai/
├── src/
│   ├── main.rs              # CLI entry point
│   ├── backends/            # LLM backend implementations
│   │   ├── mod.rs           # Backend trait definition
│   │   ├── mlx.rs           # Apple Silicon MLX backend
│   │   ├── vllm.rs          # vLLM remote backend
│   │   └── ollama.rs        # Ollama local backend
│   ├── safety/              # Command validation
│   │   └── mod.rs           # Safety validator
│   ├── cache/               # Model caching
│   ├── config/              # Configuration management
│   ├── cli/                 # CLI interface
│   ├── models/              # Data models
│   └── execution/           # Command execution
├── tests/                   # Contract-based tests
└── specs/                   # Project specifications

Core Components

  1. CommandGenerator Trait - Unified interface for all LLM backends
  2. SafetyValidator - Command validation and risk assessment
  3. Backend System - Extensible architecture for multiple inference engines
  4. Cache Manager - Hugging Face model management (planned)

Backend Architecture

#[async_trait]
trait CommandGenerator {
    async fn generate_command(&self, request: &CommandRequest) 
        -> Result<GeneratedCommand, GeneratorError>;
    async fn is_available(&self) -> bool;
    fn backend_info(&self) -> BackendInfo;
}
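
The request and response types used by this trait live in the models module. As a rough sketch of the shapes involved (the field names here are illustrative assumptions, not the crate's actual definitions):

// Illustrative shapes only; the crate's actual structs may differ.
pub struct CommandRequest {
    pub prompt: String,              // natural language description from the user
    pub shell: String,               // target shell, e.g. "bash" or "zsh"
}

pub struct GeneratedCommand {
    pub command: String,             // shell command produced by the backend
    pub explanation: Option<String>, // optional human-readable rationale
}

pub struct BackendInfo {
    pub name: String,                // e.g. "embedded-mlx" or "ollama"
    pub model: String,               // model identifier used for generation
}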

🔧 Development

Prerequisites

  • Rust 1.75+
  • Cargo
  • Make (optional, for convenience commands)
  • Docker (optional, for development container)

Setup Development Environment

# Clone and enter the project
git clone https://github.com/wildcard/cmdai.git
cd cmdai

# Install dependencies and build
cargo build

# Run tests
cargo test

# Check formatting
cargo fmt -- --check

# Run clippy linter
cargo clippy -- -D warnings

Backend Configuration

cmdai supports multiple inference backends with automatic fallback:

Embedded Backend (Default)

  • MLX: Optimized for Apple Silicon Macs (M1/M2/M3)
  • CPU: Cross-platform fallback using Candle framework
  • Model: Qwen2.5-Coder-1.5B-Instruct (quantized)
  • No external dependencies required

Remote Backends (Optional)

Configure in ~/.config/cmdai/config.toml:

[backend]
primary = "embedded"  # or "ollama", "vllm"
enable_fallback = true

[backend.ollama]
base_url = "http://localhost:11434"
model_name = "codellama:7b"

[backend.vllm]
base_url = "http://localhost:8000"
model_name = "codellama/CodeLlama-7b-hf"
api_key = "optional-api-key"
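
With enable_fallback = true, the intent is that cmdai tries the primary backend first and falls through to the next one that responds. A minimal sketch of how that selection could look against the CommandGenerator trait above (not the crate's actual implementation):

// Hypothetical fallback selection: probe candidates in priority order
// and return the first backend that reports itself as available.
async fn select_backend(
    candidates: Vec<Box<dyn CommandGenerator>>,
) -> Option<Box<dyn CommandGenerator>> {
    for backend in candidates {
        if backend.is_available().await {
            return Some(backend);
        }
    }
    None
}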

Project Configuration

The project uses several configuration files:

  • Cargo.toml - Rust dependencies and build configuration
  • ~/.config/cmdai/config.toml - User configuration
  • clippy.toml - Linter rules
  • rustfmt.toml - Code formatting rules
  • deny.toml - Dependency audit configuration

Testing Strategy

The project uses contract-based testing:

  • Unit tests for individual components
  • Integration tests for backend implementations
  • Contract tests to ensure trait compliance
  • Property-based testing for safety validation
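
For example, a contract test can exercise any backend purely through the CommandGenerator trait, so every implementation is held to the same behavioral contract (the request fields below are illustrative assumptions):

// Sketch of a contract check: any backend behind the trait should
// produce a non-empty command for a trivially safe request.
async fn check_generates_command(backend: &dyn CommandGenerator) {
    let request = CommandRequest {
        prompt: "list files in the current directory".to_string(),
        shell: "bash".to_string(),
    };
    let generated = backend
        .generate_command(&request)
        .await
        .expect("backend should produce a command");
    assert!(!generated.command.is_empty());
}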

πŸ›‘οΈ Safety Features

cmdai includes comprehensive safety validation to prevent dangerous operations:

Implemented Safety Checks

  • ✅ System destruction patterns (rm -rf /, rm -rf ~)
  • ✅ Fork bomb detection (:(){:|:&};:)
  • ✅ Disk operations (mkfs, dd if=/dev/zero)
  • ✅ Privilege escalation detection (sudo su, chmod 777 /)
  • ✅ Critical path protection (/bin, /usr, /etc)
  • ✅ Command validation and sanitization

Risk Levels

  • Safe (Green) - Normal operations, no confirmation needed
  • Moderate (Yellow) - Requires user confirmation in strict mode
  • High (Orange) - Requires confirmation in moderate mode
  • Critical (Red) - Blocked in strict mode, requires explicit confirmation
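
A simplified sketch of how pattern matching could map a generated command onto these risk levels (the patterns and enum below are illustrative, not the validator's actual rules):

// Illustrative classification by substring matching against known
// dangerous patterns; the real validator is more thorough.
#[derive(Debug, PartialEq)]
enum RiskLevel {
    Safe,
    Moderate,
    High,
    Critical,
}

fn classify(command: &str) -> RiskLevel {
    if command.contains("rm -rf /") || command.contains(":(){") {
        RiskLevel::Critical      // system destruction or fork bomb
    } else if command.contains("sudo su") || command.contains("chmod 777 /") {
        RiskLevel::High          // privilege escalation
    } else if command.contains("rm ") || command.contains("dd if=") {
        RiskLevel::Moderate      // destructive but narrower in scope
    } else {
        RiskLevel::Safe
    }
}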

Safety Configuration

Configure safety levels in ~/.config/cmdai/config.toml:

[safety]
enabled = true
level = "moderate"  # strict, moderate, or permissive
require_confirmation = true
custom_patterns = ["additional", "dangerous", "patterns"]
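
A section like this is typically loaded with serde; below is a minimal sketch of a matching struct, assuming the serde and toml crates (the field names mirror the TOML above, but this is not the crate's actual config type):

use serde::Deserialize;

// Mirrors the [safety] table shown above; illustrative only.
#[derive(Deserialize)]
struct SafetyConfig {
    enabled: bool,
    level: String,                // "strict", "moderate", or "permissive"
    require_confirmation: bool,
    custom_patterns: Vec<String>,
}

#[derive(Deserialize)]
struct Config {
    safety: SafetyConfig,
}

fn load_safety(toml_text: &str) -> Result<SafetyConfig, toml::de::Error> {
    toml::from_str::<Config>(toml_text).map(|config| config.safety)
}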

🤝 Contributing

We welcome contributions! This is an early-stage project with many opportunities to contribute.

Areas for Contribution

  • 🔌 Backend implementations
  • 🛡️ Safety pattern definitions
  • 🧪 Test coverage expansion
  • 📚 Documentation improvements
  • 🐛 Bug fixes and optimizations

Getting Started

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes with tests
  4. Ensure all tests pass
  5. Submit a pull request

Development Guidelines

  • Follow Rust best practices
  • Add tests for new functionality
  • Update documentation as needed
  • Use conventional commit messages
  • Run make check before submitting

📜 License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0) - see the LICENSE file for details.

License Summary

  • ✅ Commercial use
  • ✅ Modification
  • ✅ Distribution
  • ✅ Private use
  • ⚠️ Network use requires source disclosure
  • ⚠️ Derivatives must keep the same license
  • ⚠️ Changes must be documented

πŸ™ Acknowledgments

  • MLX - Apple's machine learning framework
  • vLLM - High-performance LLM serving
  • Ollama - Local LLM runtime
  • Hugging Face - Model hosting and caching
  • clap - Command-line argument parsing

📞 Support & Community

  • πŸ› Bug Reports: GitHub Issues
  • πŸ’‘ Feature Requests: GitHub Discussions
  • πŸ“– Documentation: See /specs directory for detailed specifications

πŸ—ΊοΈ Roadmap

Phase 1: Core Structure (Current)

  • CLI argument parsing
  • Module architecture
  • Backend trait system
  • Basic command generation

Phase 2: Safety & Validation

  • Dangerous pattern detection
  • POSIX compliance checking
  • User confirmation workflows
  • Risk assessment system

Phase 3: Backend Integration

  • vLLM HTTP API support
  • Ollama local backend
  • Response parsing
  • Error handling

Phase 4: MLX Optimization

  • FFI bindings with cxx
  • Metal Performance Shaders
  • Unified memory handling
  • Apple Silicon optimization

Phase 5: Production Ready

  • Comprehensive testing
  • Performance optimization
  • Binary distribution
  • Package manager support

Built with Rust | Safety First | Open Source

Note: This is an active development project. Features and APIs are subject to change. See the specs directory for detailed design documentation.
