| Crates.io | reductive |
| lib.rs | reductive |
| version | 0.9.0 |
| created_at | 2019-02-04 09:27:00.286701+00 |
| updated_at | 2021-12-03 11:08:04.428322+00 |
| description | Optimized vector quantization for dense vectors |
| homepage | https://github.com/finalfusion/reductive |
| repository | https://github.com/finalfusion/reductive |
| max_upload_size | |
| id | 112610 |
| size | 91,843 |
Training of optimized product quantizers requires a LAPACK implementation. For
this reason, training of the Opq and GaussianOpq quantizers is feature-gated
by the opq-train feature. This feature must be enabled if you want to use
Opq or GaussianOpq:
[dependencies]
reductive = { version = "0.9", features = ["opq-train"] }
This also requires adding a crate that links a LAPACK implementation as a
dependency, e.g. accelerate-src, intel-mkl-src, openblas-src, or
netlib-src.
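For example, a Cargo.toml sketch using OpenBLAS as the LAPACK backend could look like the following (the openblas-src version and feature names are illustrative assumptions; check the crate's documentation for your platform):

```toml
[dependencies]
reductive = { version = "0.9", features = ["opq-train"] }
# Links a LAPACK implementation; any of the *-src crates above works.
openblas-src = { version = "0.10", features = ["cblas", "lapacke"] }
```

Note that Cargo only links the backend if the crate is actually referenced, so the usual pattern is to add a line such as `use openblas_src as _;` somewhere in your crate.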
You can run all tests on Linux, including tests for optimized product
quantizers, using the intel-mkl-test feature:
$ cargo test --features intel-mkl-test
All tests can be run on macOS with the accelerate-test feature:
$ cargo test --features accelerate-test
reductive uses Rayon to parallelize quantizer training. However,
multi-threaded OpenBLAS is known to conflict with application threading.
If you use OpenBLAS, ensure that threading is disabled, for instance by
setting the number of threads to 1:
$ export OPENBLAS_NUM_THREADS=1