Crates.io | recursum |
lib.rs | recursum |
version | 0.4.0 |
source | src |
created_at | 2020-07-27 16:52:35.262048 |
updated_at | 2020-08-13 10:51:07.51605 |
description | Quickly hash all files in a directory tree |
repository | https://github.com/clbarnes/recursum |
id | 270135 |
size | 55,068 |
Rust script to hash many files, quickly.
There are 3 modes of operation: hashing file names given as arguments, walking a directory tree, or reading a list of file names from stdin.
Parallelises file discovery (in the directory-walking mode) and hashing. The default hasher is not cryptographically secure.
By default, `{path}\t{hex_digest}` is printed to stdout. This is reversed compared to most hashing utilities (`md5sum`, `sha1sum`, etc.), with the intention of making it easier to sort deterministically by file name, and because tabs (disallowed by many file system interfaces) are more reliable to split on than double spaces (an easy typo in file names). However, the `--compatible` switch exists to print `{hex_digest}  {path}`.
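For example, splitting a default-format line on the tab recovers the path and digest unambiguously, even when the file name itself contains a double space. A minimal sketch (the line below is hypothetical; the digest is a placeholder, not a real hash):

```python
# Hypothetical recursum output line; the digest is a placeholder value.
line = "my photos/holiday  2019.jpg\tdeadbeefcafef00d"

path, digest = line.split("\t")
print(path)    # the double space inside the file name survives intact
print(digest)
```

Splitting md5sum-style `{hex_digest}  {path}` output on two spaces would instead truncate this path at the double space.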
Ongoing progress information, and a final time and rate, are printed to stderr.
Note that most hashers, particularly fast non-crypto hashes, can consume data faster than slower storage media like disks can supply it, so the gains from using many hashing threads may saturate quickly.
Contributions welcome.
With `cargo` installed (get it with `rustup`):

```shell
cargo install recursum
```
```
recursum
Hash lots of files fast, in parallel.

USAGE:
    recursum [FLAGS] [OPTIONS] <input>...

FLAGS:
    -c, --compatible    "Compatible mode", which prints the hash first and changes the default
                        separator to double-space, as used by system utilities like md5sum
    -h, --help          Prints help information
    -q, --quiet         Do not show progress information
    -V, --version       Prints version information

OPTIONS:
    -d, --digest-length <digest-length>    Maximum length of output hash digests
    -s, --separator <separator>            Separator. Defaults to tab unless --compatible is
                                           given. Use "\t" for tab and "\0" for null (cannot be
                                           mixed with other characters)
    -t, --threads <threads>                Hashing threads
    -w, --walkers <walkers>                Directory-walking threads, if <input> is a directory

ARGS:
    <input>...    One or more file names, one directory name (every file recursively will be
                  hashed, in depth-first order), or '-' for getting the list of files from
                  stdin (order is conserved)
```
Example:

```shell
fd --threads 1 --type file | recursum --threads 10 --digest 64 - > my_checksums.txt
```

This could be more efficient, and have better logging, than using `--exec` or `| xargs`.
Note that `--separator` does not understand escape sequences. To pass e.g. a tab as the separator, use `recursum -s $(printf '\t') -`.
Broadly speaking, `recursum` uses >= 1 thread to populate a queue of files to hash: either several directory-walking threads, or a single thread reading file names from the argument list or stdin. Simultaneously, items are popped off this queue and hashed using tokio's threaded scheduler. There should be no context switches within each task; the tasks are processed in the same order that they are received. The main thread fetches results (in the same order) and prints them to stdout.
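That producer/worker shape can be sketched with Python standard-library pieces. This is an illustration, not recursum's implementation: `hashlib.sha256` and `ThreadPoolExecutor` stand in for recursum's fast non-cryptographic hasher and tokio's task scheduler, and `Executor.map` yields results in submission order, mirroring how results are printed in discovery order.

```python
import hashlib
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor


def hash_file(path):
    """Hash one file in chunks; returns (path, hex digest)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return path, h.hexdigest()


def hash_tree(root, threads=4):
    """Walk `root`, then hash files in a pool, printing in discovery order."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            paths.append(os.path.join(dirpath, name))
    with ThreadPoolExecutor(max_workers=threads) as pool:
        # map() preserves input order regardless of completion order
        for path, digest in pool.map(hash_file, paths):
            print(f"{path}\t{digest}")


if __name__ == "__main__":
    # Demonstrate on a throwaway directory of three small files
    with tempfile.TemporaryDirectory() as d:
        for i in range(3):
            with open(os.path.join(d, f"f{i}.txt"), "w") as f:
                f.write(f"contents {i}")
        hash_tree(d)
```

In recursum the discovery and hashing stages run concurrently rather than in sequence as above, so hashing starts before the file list is complete.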
One alternative is `find` (or `fd`) with `-exec` (`--exec`), e.g.

```shell
find . -type f -exec md5sum {} \;
```

`find` is single-threaded. As written above, the `\;` terminator spawns one `md5sum` process per file; the `+` terminator instead flattens the list of found files, passing each as an additional argument to the hashing utility, which can run up against the system's argument-length limit if the number of files is large.
Additionally, many built-in hashing utilities are not multi-threaded; furthermore, in the flattened form the utility is not actually called until the file list has been populated.
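The ceiling a flattened command line can hit is the kernel's argument-size limit, which can be queried at runtime. A quick check on POSIX systems:

```python
import os

# ARG_MAX caps the combined size of argv plus the environment passed to a
# single exec() call; a flattened command line must fit under it.
print(os.sysconf("SC_ARG_MAX"), "bytes")
```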
Alternatively, you can pipe a list of file names to `xargs`, which can parallelise with `-P` and restrict the number of arguments given with `-n`:

```shell
find . -type f -print0 | xargs -0 -P 8 -n 1 -I _ md5sum "_"
```

This spawns a new process for every invocation, which could be problematic, and may not make as good use of the CPU, as there can be no communication between processes.
Even better would be to use `parallel` in "xargs mode". There will be some CPU overhead due to multiple executions of the checksum tool, and RAM overhead due to the way `parallel` buffers its output:

```shell
find . -type f | parallel -X md5sum
```
These tools are far more mature than recursum, so they may work better for you.