| Crates.io | quichash |
| lib.rs | quichash |
| version | 0.0.18 |
| created_at | 2026-01-09 18:46:14.652758+00 |
| updated_at | 2026-01-09 18:54:39.703194+00 |
| description | High-performance cryptographic hash utility with SIMD optimization |
| homepage | https://github.com/vyrti/quichash |
| repository | https://github.com/vyrti/quichash |
| max_upload_size | |
| id | 2032615 |
| size | 429,779 |
High-performance cryptographic hash utility with SIMD optimization.
Features include a --hdd flag for old mechanical drives and support for *, ?, and [...] wildcard patterns in file/directory arguments.
Install from crates.io:
cargo install quichash
Or build from source:
cargo build --release
# Hash a file (uses blake3 by default)
hash myfile.txt
# Hash text
hash --text "hello world"
# Hash from stdin
cat myfile.txt | hash
# Scan directory (parallel by default)
hash scan -d ./my_dir -b hashes.db
# Scan on old HDD (sequential)
hash scan -d ./my_dir -b hashes.db --hdd
# Verify
hash verify -b hashes.db -d ./my_dir
# Analyze database
hash analyze -d hashes.db
# List algorithms
hash list
hash myfile.txt # Uses blake3 by default
hash myfile.txt -a sha256 # Specify algorithm
hash myfile.txt -a sha256 -a blake3 # Multiple algorithms
hash largefile.iso -f # Fast mode
hash myfile.txt -b output.txt # Save to file
hash myfile.txt --json # JSON output
Hash multiple files using wildcard patterns:
hash "*.txt" -a sha256 # All .txt files
hash "file?.bin" -a sha256 # file1.bin, fileA.bin, etc.
hash "[abc]*.jpg" -a sha256 # Files starting with a, b, or c
hash "img202405*.jpg" -a sha256 # All images from May 2024
Patterns work with all commands:
hash scan -d "data/*/hashes" -a sha256 -b output.db # Multiple directories
hash verify -b "*.db" -d "data/*" --json # Multiple databases/dirs
hash --text "hello world" -a sha256 # Hash text
cat myfile.txt | hash -a sha256 # Hash from stdin
hash scan -d /path/to/dir -b hashes.db # Basic (blake3, parallel)
hash scan -d /path/to/dir -b hashes.db --hdd # Sequential for old HDDs
hash scan -d /path/to/dir -a sha256 -b hashes.db # Custom algorithm
hash scan -d /path/to/dir -b hashes.db -f # Fast mode
hash scan -d /path/to/dir -b hashes.db -f --hdd # Fast mode, sequential
hash scan -d /path/to/dir -b hashes.db --compress # Compressed
hash scan -d /path/to/dir -b hashes.db --format hashdeep # Hashdeep format
hash verify -b hashes.db -d /path/to/dir # Parallel (default)
hash verify -b hashes.db -d /path/to/dir --hdd # Sequential for old HDDs
hash verify -b hashes.db -d /path/to/dir --json # JSON output
The verification engine uses parallel processing by default for significantly faster verification:
# Parallel verification (default, 2-4x faster)
hash verify -b hashes.db -d /path/to/dir
# Sequential verification (for old HDDs)
hash verify -b hashes.db -d /path/to/dir --hdd
Performance: parallel mode (the default) is typically 2-4x faster on modern drives, while sequential mode (--hdd flag) avoids seek thrashing on old mechanical drives.
hash verify -b hashes.db -d /path/to/dir # Verify
hash verify -b hashes.db.xz -d /path/to/dir # Compressed
hash verify -b hashes.db -d /path/to/dir --json # JSON
Output shows: Matches, Mismatches, Missing files, New files
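For a rough sense of how parallel verification can be structured, here is a sketch only, using the rayon and blake3 crates; it is not the actual quichash code, and the stored map layout and function names are illustrative:

```rust
// Illustrative sketch: re-hash files concurrently and compare against a stored
// (path -> hash) map. Assumes the `rayon` and `blake3` crates.
use rayon::prelude::*;
use std::{collections::HashMap, fs, io, path::Path};

fn hash_file(path: &Path) -> io::Result<String> {
    // Fine for a sketch; real code would stream large files instead.
    Ok(blake3::hash(&fs::read(path)?).to_hex().to_string())
}

/// Returns (matches, mismatches, missing).
fn verify(stored: &HashMap<String, String>) -> (usize, usize, usize) {
    let results: Vec<&str> = stored
        .par_iter() // rayon spreads the hashing work across CPU cores
        .map(|(path, expected)| match hash_file(Path::new(path)) {
            Ok(actual) if &actual == expected => "match",
            Ok(_) => "mismatch",
            Err(_) => "missing",
        })
        .collect();
    let count = |label: &str| results.iter().filter(|&&r| r == label).count();
    (count("match"), count("mismatch"), count("missing"))
}
```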
Compare two hash databases to identify changes, moves, and differences:
hash compare db1.txt db2.txt # Compare two databases
hash compare db1.txt db2.txt -b report.txt # Save report to file
hash compare db1.txt db2.txt --format json # JSON output
hash compare db1.txt db2.txt --format hashdeep # Hashdeep audit format
hash compare db1.txt.xz db2.txt.xz # Compare compressed databases
hash compare db1.txt db2.txt.xz # Mix compressed and plain
Output shows: Matches, Changed files, Moved files, and entries unique to each database
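One way to produce such a report (a sketch only, not the quichash comparison code; all names are illustrative) is to diff two path-to-hash maps and treat an unchanged hash that reappears under a new path as a move:

```rust
// Illustrative sketch: diff two (path -> hash) maps, treating an existing
// hash under a new path as a moved file rather than a new one.
use std::collections::{HashMap, HashSet};

fn compare(old: &HashMap<String, String>, new: &HashMap<String, String>) {
    let old_hashes: HashSet<&String> = old.values().collect();
    for (path, hash) in new {
        match old.get(path) {
            Some(h) if h == hash => println!("unchanged: {path}"),
            Some(_) => println!("changed:   {path}"),
            // Path is new; if the hash existed elsewhere before, call it a move.
            None if old_hashes.contains(hash) => println!("moved:     {path}"),
            None => println!("added:     {path}"),
        }
    }
    for path in old.keys().filter(|p| !new.contains_key(*p)) {
        println!("removed:   {path}");
    }
}
```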
Analyze a hash database to view statistics, duplicates, and potential space savings:
hash analyze -d hashes.db # Analyze database
hash analyze -d hashes.db --json # JSON output
hash analyze -d hashes.db -b report.txt # Save report to file
hash analyze -d hashes.db.xz # Analyze compressed database
Output shows: Database statistics, Duplicate files, Potential space savings
Find and report duplicate files based on hash comparison:
hash dedup -d /path/to/dir # Find duplicates (dry-run)
hash dedup -d /path/to/dir -b report.txt # Save report to file
hash dedup -d /path/to/dir -f # Fast mode
hash dedup -d /path/to/dir --json # JSON output
Output shows duplicate groups with file paths and sizes.
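The underlying idea can be sketched as grouping files by content hash and summing the size of redundant copies. This is illustrative only, assuming the blake3 crate; it is not the actual quichash dedup code:

```rust
// Illustrative sketch: group files by content hash, report duplicate groups,
// and estimate the space reclaimable by keeping one copy per group.
use std::{collections::HashMap, fs, io, path::PathBuf};

fn dedup_report(files: &[PathBuf]) -> io::Result<()> {
    let mut groups: HashMap<String, Vec<&PathBuf>> = HashMap::new();
    for path in files {
        let hash = blake3::hash(&fs::read(path)?).to_hex().to_string();
        groups.entry(hash).or_default().push(path);
    }
    let mut savings = 0u64;
    for (hash, paths) in groups.iter().filter(|(_, p)| p.len() > 1) {
        let size = fs::metadata(paths[0])?.len();
        savings += size * (paths.len() as u64 - 1);
        println!("{hash} ({size} bytes, {} copies):", paths.len());
        for p in paths {
            println!("  {}", p.display());
        }
    }
    println!("potential savings: {savings} bytes");
    Ok(())
}
```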
hash benchmark # Benchmark all algorithms
hash benchmark -s 500 # Custom data size
hash list # List algorithms
hash list --json # JSON output
| Command | Option | Description |
|---|---|---|
| (main) | FILE | File or wildcard pattern to hash (omit for stdin) |
| | -t, --text <TEXT> | Hash text string |
| | -a, --algorithm <ALG> | Algorithm (default: blake3) |
| | -b, --output <FILE> | Write to file |
| | -f, --fast | Fast mode (samples 300MB) |
| | --json | JSON output |
| scan | -d, --directory <DIR> | Directory or wildcard pattern to scan |
| | -a, --algorithm <ALG> | Algorithm (default: blake3) |
| | -b, --database <FILE> | Output database |
| | --hdd | Sequential mode for old HDDs (default: parallel) |
| | -f, --fast | Fast mode |
| | --format <FMT> | standard or hashdeep |
| | --compress | LZMA compression |
| | --json | JSON output |
| verify | -b, --database <FILE> | Database file or wildcard pattern |
| | -d, --directory <DIR> | Directory or wildcard pattern to verify |
| | --json | JSON output |
| compare | DATABASE1 | First database file (supports .xz) |
| | DATABASE2 | Second database file (supports .xz) |
| | -b, --output <FILE> | Write report to file |
| | --format <FMT> | plain-text, json, or hashdeep |
| analyze | -d, --database <FILE> | Database file to analyze (supports .xz) |
| | -b, --output <FILE> | Write report to file |
| | --json | JSON output |
| dedup | -d, --directory <DIR> | Directory to scan for duplicates |
| | -f, --fast | Fast mode |
| | -b, --output <FILE> | Write report to file |
| | --json | JSON output |
| benchmark | -s, --size <MB> | Data size (default: 100) |
| | --json | JSON output |
Exclude files using gitignore-style patterns:
cat > /path/to/dir/.hashignore << 'EOF'
*.log
*.tmp
build/
node_modules/
!important.log
EOF
hash scan -d /path/to/dir -a sha256 -b hashes.db
Patterns: *.ext, dir/, !pattern, #comments, **/*.ext
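One way this kind of exclusion can be implemented (a sketch assuming the ignore crate, which handles gitignore-style matching; not necessarily how quichash does it internally):

```rust
// Illustrative sketch: walk a directory while honoring .hashignore files,
// using the `ignore` crate's gitignore-style matching.
use ignore::WalkBuilder;
use std::path::PathBuf;

fn list_files(dir: &str) -> Vec<PathBuf> {
    let mut builder = WalkBuilder::new(dir);
    builder.standard_filters(false); // only apply the custom .hashignore rules
    builder.add_custom_ignore_filename(".hashignore");
    builder
        .build()
        .flatten() // skip unreadable entries
        .filter(|e| e.file_type().map_or(false, |t| t.is_file()))
        .map(|e| e.into_path())
        .collect()
}
```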
Standard (default):
<hash> <algorithm> <mode> <filepath>
Hashdeep: CSV format with file size, compatible with hashdeep tool
JSON: Structured output for automation
| Algorithm | Throughput | Use Case |
|---|---|---|
| xxHash3 | 10-30 GB/s | Non-crypto, max speed |
| BLAKE3 | 1-3 GB/s | Crypto, fastest |
| SHA-512 | 600-900 MB/s | Crypto, 64-bit |
| SHA-256 | 500-800 MB/s | Crypto, common |
| SHA3-256 | 200-400 MB/s | Post-quantum |
Tips:
- --hdd for old mechanical drives (sequential processing)
- -f for large files (10-100x faster)
- RUSTFLAGS="-C target-cpu=native" for best performance

Fast Mode Speedup:
Samples 300MB (first/middle/last 100MB) instead of entire file.
Good for: quick checks, large files, backups.
Not for: full verification, forensics, small files.
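A sketch of the sampling idea described above (illustrative constants, blake3 crate assumed; this is not the actual quichash fast-mode code):

```rust
// Illustrative sketch of sampled ("fast mode") hashing: hash a 100 MB window
// from the start, middle, and end of a large file instead of every byte.
use std::fs::File;
use std::io::{self, Read, Seek, SeekFrom};

const WINDOW: u64 = 100 * 1024 * 1024; // 100 MB per sampled window

fn fast_hash(path: &str) -> io::Result<String> {
    let mut file = File::open(path)?;
    let len = file.metadata()?.len();
    let mut hasher = blake3::Hasher::new();
    // Small files are hashed in full; large ones get three sampled windows.
    let windows: Vec<(u64, u64)> = if len <= 3 * WINDOW {
        vec![(0, len)]
    } else {
        vec![(0, WINDOW), (len / 2 - WINDOW / 2, WINDOW), (len - WINDOW, WINDOW)]
    };
    let mut buf = vec![0u8; 8 * 1024 * 1024]; // 8 MB read buffer
    for (offset, window_len) in windows {
        file.seek(SeekFrom::Start(offset))?;
        let mut remaining = window_len;
        while remaining > 0 {
            let want = buf.len().min(remaining as usize);
            let n = file.read(&mut buf[..want])?;
            if n == 0 {
                break;
            }
            hasher.update(&buf[..n]);
            remaining -= n as u64;
        }
    }
    Ok(hasher.finalize().to_hex().to_string())
}
```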
# Verify downloaded file
hash downloaded-file.iso -a sha256
# Backup verification (parallel by default)
hash scan -d /data -b backup.db
hash verify -b backup.db -d /data
# Backup on old HDD (sequential processing)
hash scan -d /data -b backup.db --hdd
hash verify -b backup.db -d /data
# Monitor changes
hash scan -d /etc/config -b baseline.db
hash verify -b baseline.db -d /etc/config
# Compare two snapshots
hash scan -d /data -b snapshot1.db
# ... time passes ...
hash scan -d /data -b snapshot2.db
hash compare snapshot1.db snapshot2.db -b changes.txt
# Analyze database for duplicates and stats
hash analyze -d media.db # View stats and duplicates
hash analyze -d media.db --json # JSON output for automation
# Find duplicates in directory
hash dedup -d /media # Quick duplicate scan
# Forensic analysis
hash scan -d /evidence -a sha3-256 -b evidence.db
hash scan -d /evidence -a sha256 -b evidence.txt --format hashdeep
# Quick checksums (blake3 is default)
hash large-backup.tar.gz -f
hash scan -d /backups -b checksums.db -f
# Automation
hash verify -b hashes.db -d /data --json | jq '.report.mismatches'
hash compare db1.db db2.db --format json | jq '.summary'
Recommended: BLAKE3 (the default) for general use; SHA-256 or SHA-512 where broad compatibility matters.
Deprecated:
Non-crypto (trusted environments): xxHash3 for maximum speed.
Automatic support for SSE, AVX, AVX2, AVX-512 (x86_64) and NEON (ARM).
Verify: cargo test --release --test simd_verification -- --nocapture
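For reference, runtime feature detection of this kind can be sketched with the standard library's is_x86_feature_detected! macro. This only illustrates the mechanism; it is not quichash's own dispatch code, and real hashing crates such as blake3 handle dispatch internally:

```rust
// Illustrative sketch of runtime SIMD feature detection.
#[cfg(target_arch = "x86_64")]
fn best_simd_level() -> &'static str {
    if is_x86_feature_detected!("avx512f") {
        "AVX-512"
    } else if is_x86_feature_detected!("avx2") {
        "AVX2"
    } else if is_x86_feature_detected!("avx") {
        "AVX"
    } else if is_x86_feature_detected!("sse2") {
        "SSE2"
    } else {
        "scalar"
    }
}

#[cfg(target_arch = "aarch64")]
fn best_simd_level() -> &'static str {
    // NEON is mandatory on AArch64, so it can be assumed.
    "NEON"
}

#[cfg(not(any(target_arch = "x86_64", target_arch = "aarch64")))]
fn best_simd_level() -> &'static str {
    "scalar"
}

fn main() {
    println!("widest usable SIMD level: {}", best_simd_level());
}
```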
Supported patterns:
- * - Matches any number of characters (e.g., *.txt, file*)
- ? - Matches exactly one character (e.g., file?.bin)
- [...] - Matches any character in brackets (e.g., [abc]*.jpg)

Examples:
hash "*.txt" -a sha256 # All .txt files in current dir
hash "data/*.bin" -a sha256 # All .bin files in data/
hash "file?.txt" -a sha256 # file1.txt, fileA.txt, etc.
hash "[abc]*.jpg" -a sha256 # Files starting with a, b, or c
hash scan -d "backup/*/data" -a sha256 -b db.txt # Multiple directories
hash verify -b "*.db" -d "data/*" # All .db files against all data dirs
Notes:
| Issue | Solution |
|---|---|
| Unsupported algorithm | Run hash list to see available algorithms |
| Permission errors | Use sudo hash scan -d /protected/dir ... |
| Slow performance | Parallel processing is the default; use -f for fast mode or the BLAKE3 algorithm |
| Fast mode not working | Fast mode only works with files (not stdin/text) |
| .hashignore not working | Check file location: /path/to/dir/.hashignore |
| Wildcard pattern not matching | Ensure pattern is quoted (e.g., "*.txt" not *.txt) |
| No files match pattern | Check pattern syntax and file locations |
We welcome contributions to QuicHash! To contribute, you must certify that you have the right to submit your contribution and agree to license it under the project's dual MIT/Apache-2.0 license.
QuicHash uses the Developer Certificate of Origin (DCO) process. This is a lightweight way for contributors to certify that they wrote or otherwise have the right to submit code or documentation to an open source project.
All contributions to QuicHash are made under the same dual MIT/Apache-2.0 license as the project itself. By signing off on your commits, you agree that your contributions will be licensed under these same terms, with no additional restrictions.
Contributors certify that they adhere to these requirements by adding a Signed-off-by line to their commit messages.
This is my commit message
Signed-off-by: Random J Developer <random@developer.example.org>
Git even has a -s command line option to append this automatically to your commit message:
$ git commit -s -m 'This is my commit message'
QuicHash is dual-licensed under either:
- MIT License
- Apache License, Version 2.0
at your option.