| Crates.io | winter-crypto |
| --- | --- |
| version | 0.10.1 |
| created_at | 2021-08-04 06:20:44.625208 |
| updated_at | 2024-10-30 15:07:54.948664 |
| description | Cryptographic library for the Winterfell STARK prover/verifier |
| repository | https://github.com/novifinancial/winterfell |
| size | 239,716 |
This crate contains modules with cryptographic operations needed in STARK proof generation and verification.
The hash module defines a set of hash functions available for cryptographic operations. Currently, the following hash functions are supported: BLAKE3, SHA3, and several instantiations of Rescue Prime (`RP64_256`, `RPJive64_256`, and `RP62_248`).
The Rescue hash function is implemented according to the Rescue Prime specification, with an exception to the padding rule. This allows the same instantiation to be used both when we compute a hash of two digests via the `merge()` function (e.g., for building a Merkle tree) and when we hash 8 field elements as a sequence of elements via the `hash_elements()` function. However, this also means that our instantiation of Rescue Prime cannot be used in a stream mode, as the number of elements to be hashed must be known upfront.

For `RP64_256`, we also make additional modifications to the base construction.
The `RPJive64_256` instantiation of Rescue Prime, which uses Jive as its compression mode, implements modifications similar to those of `RP64_256`, with the exception of its padding rule, which follows the Hirose padding. In addition, because of the use of Jive, the output of the hash function is not the same when we hash 8 field elements as a sequence of elements using the `hash_elements()` function as when we compress 8 field elements into 4 (e.g., for building a Merkle tree) using the 2-to-1 Jive compression mode.

The parameters used to instantiate the functions are:
`RP64_256`:

* Field: 64-bit prime field with modulus 2^64 - 2^32 + 1.
* State width: 12 field elements.
* Capacity size: 4 field elements.
* Digest size: 4 field elements (can be serialized into 32 bytes).
* Number of rounds: 7.
* S-Box degree: 7.
* Target security level: 128 bits.

`RPJive64_256`:

`RP62_248`:

* Field: 62-bit prime field with modulus 2^62 - 111 * 2^39 + 1.
* State width: 12 field elements.
* Capacity size: 4 field elements.
* Digest size: 4 field elements (can be serialized into 31 bytes).
* Number of rounds: 7.
* S-Box degree: 3.
* Target security level: 124 bits.
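As a quick sanity check on the parameters above, the `RP62_248` modulus can be computed directly with integer arithmetic (a standalone sketch, not part of the crate's API):

```rust
fn main() {
    // RP62_248 field modulus from the parameters above: 2^62 - 111 * 2^39 + 1.
    let modulus: u64 = (1u64 << 62) - 111 * (1u64 << 39) + 1;

    // It should be a 62-bit number, matching the "62-bit prime field" claim.
    assert_eq!(64 - modulus.leading_zeros(), 62);
    println!("modulus = {modulus}");
}
```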
One of the core operations performed during STARK proof generation is the construction of Merkle trees. We care greatly about building these trees as quickly as possible, and thus, for the purposes of the STARK protocol, the 2-to-1 hash operation (e.g., computing a hash of two 32-byte values) is especially important. The table below contains rough benchmarks for computing a 2-to-1 hash with all currently implemented hash functions.
| CPU | BLAKE3_256 | SHA3_256 | RP64_256 | RPJ64_256 | RP62_248 |
| --- | --- | --- | --- | --- | --- |
| Apple M1 Pro | 76 ns | 227 ns | 5.1 µs | 3.8 µs | 7.1 µs |
| AMD Ryzen 9 5950X @ 3.4 GHz | 62 ns | 310 ns | 5.2 µs | 3.9 µs | 6.9 µs |
| Core i9-9980HK @ 2.4 GHz | 66 ns | 400 ns | - | - | 6.6 µs |
| Core i5-7300U @ 2.6 GHz | 81 ns | 540 ns | - | - | 9.5 µs |
| Core i5-4300U @ 1.9 GHz | 106 ns | 675 ns | - | - | 13.9 µs |
As can be seen from the table, BLAKE3 is by far the fastest hash function, while our implementations of algebraic hashes are 70x slower than BLAKE3 and 20x slower than SHA3.
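To illustrate why the 2-to-1 hash dominates, here is a minimal sketch of Merkle tree construction by repeated 2-to-1 hashing. The `merge` function below is a non-cryptographic stand-in (std's `DefaultHasher`), used only to keep the example self-contained; in the crate this role is played by a hash function such as `RP64_256`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in 2-to-1 hash. NOT cryptographic -- a placeholder for a real
// 2-to-1 hash such as merging two 32-byte digests.
fn merge(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

// Compute the Merkle root over a power-of-two number of leaves: each level
// is produced from the previous one by 2-to-1 hashing adjacent pairs.
fn merkle_root(leaves: &[u64]) -> u64 {
    assert!(leaves.len().is_power_of_two());
    let mut level = leaves.to_vec();
    while level.len() > 1 {
        level = level.chunks(2).map(|pair| merge(pair[0], pair[1])).collect();
    }
    level[0]
}

fn main() {
    let leaves = [1u64, 2, 3, 4, 5, 6, 7, 8];
    println!("root = {}", merkle_root(&leaves));
}
```

Building a tree over n leaves requires n - 1 such 2-to-1 hashes, so the per-hash cost in the table translates directly into tree construction time.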
The Merkle module contains an implementation of a Merkle tree which supports batch proof generation and verification. Batch proofs are based on the Octopus algorithm.
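The space saving from batch proofs can be illustrated without any hashing: for a set of opened leaves, a batched opening only needs the sibling nodes that cannot be recomputed from other opened values. Below is a small sketch counting proof nodes for a perfect binary tree (an illustration of the idea only, not the crate's Octopus implementation):

```rust
use std::collections::BTreeSet;

// Count how many sibling hashes a batched Merkle proof must carry for the
// given leaf `indices` in a perfect binary tree of the given `depth`
// (heap numbering: root = 1, children of node n are 2n and 2n + 1).
fn batch_proof_size(indices: &[usize], depth: usize) -> usize {
    let mut level: BTreeSet<usize> =
        indices.iter().map(|&i| (1usize << depth) + i).collect();
    let mut needed = 0;
    for _ in 0..depth {
        let nodes: Vec<usize> = level.iter().copied().collect();
        let mut parents = BTreeSet::new();
        let mut i = 0;
        while i < nodes.len() {
            let n = nodes[i];
            // If the sibling is also known, both children are available and
            // no proof node is needed; otherwise it must come from the proof.
            if i + 1 < nodes.len() && nodes[i + 1] == (n ^ 1) {
                i += 2;
            } else {
                needed += 1;
                i += 1;
            }
            parents.insert(n / 2);
        }
        level = parents;
    }
    needed
}

fn main() {
    // 8-leaf tree (depth 3): opening leaves 0 and 1 together needs only 2
    // proof nodes, versus 3 + 3 nodes for two individual proofs.
    println!("{}", batch_proof_size(&[0, 1], 3));
}
```

Nearby leaves share most of their authentication paths, which is exactly the redundancy batch proofs eliminate.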
This crate can be compiled with the following features:

* `std` - enabled by default and relies on the Rust standard library.
* `concurrent` - implies `std` and also enables multi-threaded execution for some of the crate functions.
* `no_std` - does not rely on the Rust standard library and enables compilation to WebAssembly.

To compile with `no_std`, disable default features via the `--no-default-features` flag.
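For reference, the feature selection above maps onto dependency declarations like these (version number assumed from the metadata at the top; a sketch, not prescriptive):

```toml
[dependencies]
# Default build: `std` enabled.
winter-crypto = "0.10"

# no_std build (e.g., for WebAssembly targets):
# winter-crypto = { version = "0.10", default-features = false }

# Multi-threaded build (`concurrent` implies `std`):
# winter-crypto = { version = "0.10", features = ["concurrent"] }
```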
When compiled with the `concurrent` feature enabled, the following operations will be executed in multiple threads:

* `MerkleTree::new()` - i.e., a Merkle tree will be constructed in multiple threads.

The number of threads can be configured via the `RAYON_NUM_THREADS` environment variable, and usually defaults to the number of logical cores on the machine.
This project is MIT licensed.