| Crates.io | bindings-generat |
| lib.rs | bindings-generat |
| version | 0.1.0 |
| created_at | 2025-11-17 00:51:00.9982+00 |
| updated_at | 2025-11-17 00:51:00.9982+00 |
| description | Automatically generate safe, idiomatic Rust wrapper crates from C/C++ libraries |
| homepage | https://github.com/ciresnave/bindings-generat |
| repository | https://github.com/ciresnave/bindings-generat |
| max_upload_size | |
| id | 1936085 |
| size | 1,204,266 |

Automatically generate safe, idiomatic Rust wrapper crates from C/C++ libraries with minimal user interaction.
bindings-generat automatically generates safe Rust wrappers from C/C++ libraries using bindgen (for raw FFI), rule-based pattern analysis (for RAII and error handling), LLM enhancement (for documentation and naming), and optional interactive clarification.
LLM integration is now fully featured and enabled by default when Ollama is available. The tool automatically detects Ollama and uses it to enhance generated bindings with better documentation, idiomatic naming, and usage examples.
Generated wrappers favor RAII types and Result-based error handling, and LLM features can be disabled with --no-llm. bindings-generat uses a multi-layered approach:
1. Rule-Based Analysis: detects lifecycle patterns (e.g., foo_create/foo_destroy pairs) to derive RAII wrappers and error handling.
2. LLM Enhancement (default when Ollama is available): improves documentation, naming, and usage examples.
3. Interactive Refinement (optional): lets you clarify cases the automatic analysis cannot resolve.
cargo install bindings-generat
# Just provide a source - output goes to ./bindings-output/
bindings-generat libraryv1.0.zip
That's it! The tool generates the wrapper crate in ./bindings-output/ and validates it with cargo check.
On first run, if Ollama is not installed, you'll see:
🔍 Ollama not found on your system.
bindings-generat uses AI to enhance generated bindings.
This requires Ollama (local LLM runtime).
Choose installation option:
1. System-wide install (requires admin/sudo, persists after run)
2. Portable install in temp directory (~1.3GB, auto-cleanup available)
3. Skip LLM features (generates basic bindings only)
Your choice [1/2/3]:
Recommended: Choose option 2 (portable) for a hassle-free experience with automatic cleanup.
# Simplest usage - just provide the source
# Output automatically goes to ./bindings-output/
bindings-generat libraryv1.0.zip
# From a directory
bindings-generat /path/to/library
# From a remote URL (e.g., GitHub release)
bindings-generat https://github.com/owner/repo/archive/v1.0.tar.gz
# Custom output directory
bindings-generat library.tar.gz --output custom-wrapper-rs
# With interactive mode
bindings-generat /path/to/library --interactive
# Without LLM (offline/fast mode)
bindings-generat /path/to/library --no-llm
# Specify a different LLM model
bindings-generat /path/to/library --model qwen2.5-coder:7b
bindings-generat will offer to install Ollama automatically if not detected. You'll have three options:
The portable install option downloads Ollama (~300MB) and a model (~1GB) to a temporary directory, runs it during binding generation, and optionally cleans it up afterward. Perfect for CI/CD or restricted environments.
Manual installation (optional):
# macOS/Linux
curl https://ollama.ai/install.sh | sh
ollama pull qwen2.5-coder:1.5b
# Windows
winget install Ollama.Ollama
ollama pull qwen2.5-coder:1.5b
Phase 1 (Core Functionality) - ✅ COMPLETE
Phase 2 (LLM Integration) - ✅ COMPLETE
All features are implemented and tested.
See STATUS.md for detailed progress tracking.
Licensed under either of:
at your option.
Contributions are welcome! Please read CONTRIBUTING.md for guidelines.