| Crates.io | paiml-mcp-agent-toolkit |
| lib.rs | paiml-mcp-agent-toolkit |
| version | 0.26.4 |
| created_at | 2025-07-02 20:54:21.99826+00 |
| updated_at | 2025-07-03 14:08:52.893117+00 |
| description | DEPRECATED: This crate has been renamed to 'pmat'. Please use 'pmat' instead for new projects. |
| homepage | https://paiml.com |
| repository | https://github.com/paiml/paiml-mcp-agent-toolkit |
| max_upload_size | |
| id | 1735602 |
| size | 4,704,612 |
Zero-configuration AI context generation system that analyzes any codebase instantly through CLI, MCP, or HTTP interfaces. Built by Pragmatic AI Labs with extreme quality standards and zero tolerance for technical debt.
Install pmat using one of the following methods:
From Crates.io (Recommended):
cargo install pmat
With the Quick Install Script (Linux/macOS):
curl -sSfL https://raw.githubusercontent.com/paiml/paiml-mcp-agent-toolkit/master/scripts/install.sh | sh
From Source:
git clone https://github.com/paiml/paiml-mcp-agent-toolkit
cd paiml-mcp-agent-toolkit
cargo build --release
From GitHub Releases: Pre-built binaries for Linux, macOS, and Windows are available on the releases page.
# Analyze current directory
pmat context
# Get complexity metrics for top 10 files
pmat analyze complexity --top-files 10
# Find technical debt
pmat analyze satd
# Run comprehensive quality checks
pmat quality-gate --strict
Add to your Cargo.toml:
[dependencies]
pmat = "0.27.0"
Basic usage:
use pmat::{
    services::code_analysis::CodeAnalysisService,
    types::ProjectPath,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let service = CodeAnalysisService::new();
    let path = ProjectPath::new(".");

    // Generate context
    let context = service.generate_context(path, None).await?;
    println!("Project context: {}", context);

    // Analyze complexity
    let complexity = service.analyze_complexity(path, Some(10)).await?;
    println!("Complexity results: {:?}", complexity);

    Ok(())
}
- pmat refactor auto achieves extreme quality standards
- pmat refactor auto --single-file-mode --file path/to/file.rs for targeted refactoring
- pmat refactor docs maintains Zero Tolerance Quality Standards
- pmat enforce extreme --file path/to/file.rs for file-specific enforcement
- pmat lint-hotspot --file path/to/file.rs for targeted analysis

# Zero-configuration context generation
pmat context # Auto-detects language
pmat context --format json # JSON output
pmat context rust # Force language
# Code analysis
pmat analyze complexity --top-files 5 # Complexity analysis
pmat analyze churn --days 30 # Git history analysis
pmat analyze dag --target-nodes 25 # Dependency graph
pmat analyze dead-code --format json # Dead code detection
pmat analyze satd --top-files 10 # Technical debt
pmat analyze deep-context --format json # Comprehensive analysis
pmat analyze big-o # Big-O complexity analysis
pmat analyze makefile-lint # Makefile quality linting
pmat analyze proof-annotations # Provability analysis
# Analysis commands
pmat analyze graph-metrics # Graph centrality metrics (PageRank, betweenness, closeness)
pmat analyze name-similarity "function_name" # Fuzzy name matching with phonetic support
pmat analyze symbol-table # Symbol extraction with cross-references
pmat analyze duplicates --min-lines 10 # Code duplication detection
pmat quality-gate --strict # Comprehensive quality enforcement
pmat diagnose --verbose # Self-diagnostics and health checks
# WebAssembly Support
pmat analyze assemblyscript --wasm-complexity # AssemblyScript analysis with WASM metrics
pmat analyze webassembly --include-binary # WebAssembly binary and text format analysis
# Project scaffolding
pmat scaffold rust --templates makefile,readme,gitignore
pmat list # Available templates
# Refactoring engine
pmat refactor interactive # Interactive refactoring
pmat refactor serve --config refactor.json # Batch refactoring
pmat refactor status # Check refactor progress
pmat refactor resume # Resume from checkpoint
pmat refactor auto # AI-powered automatic refactoring
pmat refactor docs --dry-run # Clean up documentation
# Demo and visualization
pmat demo --format table # CLI demo
pmat demo --web --port 8080 # Web interface
pmat demo --repo https://github.com/user/repo # Analyze GitHub repo
# Quality enforcement
pmat quality-gate --fail-on-violation # Run all quality checks
pmat enforce extreme # Enforce extreme quality standards
# Add to Claude Code
claude mcp add pmat ~/.local/bin/pmat
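Once registered, an MCP client drives pmat over stdio using JSON-RPC. As a hedged sketch (the envelope below follows the generic MCP tools/call convention; the exact argument names are illustrative, not taken from pmat's schema), this is roughly what a client sends to invoke a tool:

```rust
// Sketch of the JSON-RPC 2.0 message an MCP client writes to the
// server's stdin to call a tool. Built with std only; the argument
// payload here ({"project_path": "."}) is an assumed example.
fn tool_call(id: u32, tool: &str, args: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{id},"method":"tools/call","params":{{"name":"{tool}","arguments":{args}}}}}"#
    )
}

fn main() {
    let msg = tool_call(1, "generate_context", r#"{"project_path":"."}"#);
    println!("{msg}");
}
```

In practice the MCP client (e.g. Claude Code) constructs these messages for you; the sketch only shows the wire shape.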
Available MCP tools:
- generate_template - Generate project files from templates
- scaffold_project - Generate complete project structure
- analyze_complexity - Code complexity metrics
- analyze_code_churn - Git history analysis
- analyze_dag - Dependency graph generation
- analyze_dead_code - Dead code detection
- analyze_deep_context - Comprehensive analysis
- generate_context - Zero-config context generation
- analyze_big_o - Big-O complexity analysis with confidence scores
- analyze_makefile_lint - Lint Makefiles with 50+ quality rules
- analyze_proof_annotations - Lightweight formal verification
- analyze_graph_metrics - Graph centrality and PageRank analysis
- refactor_interactive - Interactive refactoring with explanations

# Start server
pmat serve --port 8080 --cors
# API endpoints
curl "http://localhost:8080/health"
curl "http://localhost:8080/api/v1/analyze/complexity?top_files=5"
curl "http://localhost:8080/api/v1/templates"
# POST analysis
curl -X POST "http://localhost:8080/api/v1/analyze/deep-context" \
-H "Content-Type: application/json" \
-d '{"project_path":"./","include":["ast","complexity","churn"]}'
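For clients that cannot shell out to curl, the POST above is a plain HTTP/1.1 request. A minimal std-only sketch of the equivalent request bytes (the endpoint path and JSON body mirror the curl example; actually sending it over std::net::TcpStream is omitted so the snippet runs offline):

```rust
// Builds the raw HTTP/1.1 request matching the curl POST example.
// Host/path/body are taken from the README; no third-party crates.
fn build_post(host: &str, path: &str, body: &str) -> String {
    format!(
        "POST {path} HTTP/1.1\r\nHost: {host}\r\nContent-Type: application/json\r\nContent-Length: {}\r\n\r\n{body}",
        body.len()
    )
}

fn main() {
    let body = r#"{"project_path":"./","include":["ast","complexity","churn"]}"#;
    let req = build_post("localhost:8080", "/api/v1/analyze/deep-context", body);
    println!("{req}");
}
```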
This project follows strict quality standards:

- make lint passes with pedantic and nursery standards (pmat refactor docs)
- pmat analyze graph-metrics for centrality analysis
- pmat analyze name-similarity for fuzzy name matching
- pmat analyze symbol-table for symbol extraction
- pmat analyze duplicates for detecting duplicate code
Explore our comprehensive documentation to get the most out of pmat.
See the documentation for guidance on using pmat with AI agents.

For systems with low swap space, we provide a configuration tool:
make config-swap # Configure 8GB swap (requires sudo)
make clear-swap # Clear swap memory between heavy operations
The project uses a distributed test architecture for fast feedback:
# Run specific test suites
make test-unit # <10s - Core logic tests
make test-services # <30s - Service integration
make test-protocols # <45s - Protocol validation
make test-e2e # <120s - Full system tests
make test-performance # Performance regression
# Run all tests in parallel
make test-all
# Coverage analysis
make coverage-stratified
We welcome contributions! Please see our Contributing Guide for details.
# Clone and setup
git clone https://github.com/paiml/paiml-mcp-agent-toolkit
cd paiml-mcp-agent-toolkit
# Install dependencies
make install-deps
# Run tests
make test-fast # Quick validation
make test-all # Complete test suite
# Check code quality
make lint # Run extreme quality lints
make coverage # Generate coverage report
- Create a feature branch (git checkout -b feature/amazing-feature)
- Run make lint and make test-fast before committing

See CONTRIBUTING.md for detailed guidelines.
Licensed under either of:
at your option.
Built with โค๏ธ by Pragmatic AI Labs