| Crates.io | lnmp-quant |
| lib.rs | lnmp-quant |
| version | 0.5.16 |
| created_at | 2025-11-21 20:00:17.619458+00 |
| updated_at | 2025-12-19 10:03:09.051085+00 |
| description | Quantization and compression for LNMP embedding vectors with minimal accuracy loss |
| homepage | |
| repository | https://github.com/lnmplang/lnmp-protocol |
| max_upload_size | |
| id | 1944132 |
| size | 117,239 |
Quantization and compression for LNMP embedding vectors with minimal accuracy loss
FID Registry: All examples use official Field IDs from `registry/fids.yaml`.
lnmp-quant provides efficient quantization schemes to compress embedding vectors while maintaining high semantic accuracy. It offers a spectrum of compression options from 2x to 32x:
| Scheme | Compression | Accuracy | 512-dim Quantize | 512-dim Dequantize |
|---|---|---|---|---|
| FP16 | 2x | ~99.9% | ~300 ns | ~150 ns |
| QInt8 | 4x | ~99% | 1.17 µs | 457 ns |
| QInt4 | 8x | ~95-97% | ~600 ns | ~230 ns |
| Binary | 32x | ~85-90% | ~200 ns | ~100 ns |
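To make these ratios concrete in bytes, the short sketch below computes the approximate payload of a 512-dimensional F32 vector under each scheme. It is illustrative arithmetic only and ignores the few bytes of per-vector metadata (scale, minimum value) that the real formats carry.

```rust
/// Approximate payload sizes implied by the compression ratios above.
/// Illustration only; real quantized vectors also carry small metadata.
fn main() {
    let dim = 512;
    let f32_bytes = dim * 4; // original F32 payload
    for (name, ratio) in [("FP16", 2), ("QInt8", 4), ("QInt4", 8), ("Binary", 32)] {
        println!("{name:>6}: {} -> {} bytes", f32_bytes, f32_bytes / ratio);
    }
}
```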
Add to your Cargo.toml:
[dependencies]
lnmp-quant = "0.5.2"
lnmp-embedding = "0.5.2"
use lnmp_quant::{quantize_embedding, dequantize_embedding, QuantScheme};
use lnmp_embedding::Vector;
// Create an embedding
let embedding = Vector::from_f32(vec![0.12, -0.45, 0.33, /* ... */]);
// Quantize to QInt8
let quantized = quantize_embedding(&embedding, QuantScheme::QInt8)?;
println!("Original size: {} bytes", embedding.dim * 4);
println!("Quantized size: {} bytes", quantized.data_size());
println!("Compression ratio: {:.1}x", quantized.compression_ratio());
// Dequantize back to F32
let restored = dequantize_embedding(&quantized)?;
// Verify accuracy
use lnmp_embedding::SimilarityMetric;
let similarity = embedding.similarity(&restored, SimilarityMetric::Cosine)?;
assert!(similarity > 0.99);
use lnmp_core::{LnmpValue, LnmpField, LnmpRecord, TypeHint};
use lnmp_quant::{quantize_embedding, QuantScheme};
// Quantize an embedding
let quantized = quantize_embedding(&embedding, QuantScheme::QInt8)?;
// Add to LNMP record (F512=embedding from registry)
let mut record = LnmpRecord::new();
record.add_field(LnmpField {
fid: 512, // F512=embedding
value: LnmpValue::QuantizedEmbedding(quantized),
});
// Type hint support
let hint = TypeHint::QuantizedEmbedding; // :qv
assert_eq!(hint.as_str(), "qv");
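Reading the embedding back out of a record mirrors the snippet above: find the field carrying F512, match on `LnmpValue::QuantizedEmbedding`, and dequantize. The sketch below assumes the record exposes its fields as an iterable collection; `record.fields` is a hypothetical accessor used for illustration, not confirmed lnmp-core API.

```rust
use lnmp_core::{LnmpRecord, LnmpValue};
use lnmp_embedding::Vector;
use lnmp_quant::dequantize_embedding;

// Sketch only: `record.fields` is an assumed accessor for illustration.
fn restore_embedding(record: &LnmpRecord) -> Option<Vector> {
    for field in &record.fields {
        if field.fid == 512 {
            // F512=embedding from the registry
            if let LnmpValue::QuantizedEmbedding(q) = &field.value {
                return dequantize_embedding(q).ok();
            }
        }
    }
    None
}
```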
Automatically select the best scheme based on your requirements:
use lnmp_quant::adaptive::{quantize_adaptive, AccuracyTarget};
// Maximum accuracy (FP16)
let q = quantize_adaptive(&emb, AccuracyTarget::Maximum)?;
// High accuracy (QInt8)
let q = quantize_adaptive(&emb, AccuracyTarget::High)?;
// Balanced (QInt4)
let q = quantize_adaptive(&emb, AccuracyTarget::Balanced)?;
// Compact (Binary)
let q = quantize_adaptive(&emb, AccuracyTarget::Compact)?;
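In practice the accuracy target often falls out of an application constraint. The helper below maps a per-vector byte budget onto an `AccuracyTarget` and then calls `quantize_adaptive`. The thresholds are illustrative assumptions, not values defined by the crate, and the crate-root import paths for `QuantizedVector` and `QuantError` are assumed.

```rust
use lnmp_embedding::Vector;
use lnmp_quant::adaptive::{quantize_adaptive, AccuracyTarget};
use lnmp_quant::{QuantError, QuantizedVector};

// Illustrative policy: choose a target from a per-vector byte budget.
// Thresholds are example values for a 512-dim embedding, not crate defaults.
fn quantize_for_budget(emb: &Vector, budget_bytes: usize) -> Result<QuantizedVector, QuantError> {
    let target = match budget_bytes {
        0..=64 => AccuracyTarget::Compact,     // Binary-sized budgets
        65..=256 => AccuracyTarget::Balanced,  // QInt4-sized budgets
        257..=1024 => AccuracyTarget::High,    // QInt8-sized budgets
        _ => AccuracyTarget::Maximum,          // ample bandwidth: FP16
    };
    quantize_adaptive(emb, target)
}
```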
Efficiently process multiple embeddings with statistics tracking:
use lnmp_quant::batch::quantize_batch;
let embeddings = vec![emb1, emb2, emb3];
let result = quantize_batch(&embeddings, QuantScheme::QInt8);
println!("Processed: {}/{}", result.stats.succeeded, result.stats.total);
println!("Time: {:?}", result.stats.total_time);
for q in result.results {
if let Ok(quantized) = q {
// Use quantized vector
}
}
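The per-item results can also be folded into aggregate numbers. The sketch below sums the compressed payload across the successful items using the `data_size()` accessor from the quick-start example, assuming it returns a byte count as `usize`; `embeddings` is the same slice as above.

```rust
use lnmp_quant::{batch::quantize_batch, QuantScheme};

let result = quantize_batch(&embeddings, QuantScheme::QInt8);

// Total compressed bytes across the successfully quantized vectors.
let total_bytes: usize = result
    .results
    .iter()
    .filter_map(|r| r.as_ref().ok())
    .map(|q| q.data_size())
    .sum();

println!(
    "Compressed {} vectors into {} bytes",
    result.stats.succeeded, total_bytes
);
```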
For detailed benchmarks, see PERFORMANCE.md.
QInt8 quantization works as follows (a worked numeric sketch follows the brake-sensitivity example below):
1. Find the value range `[min_val, max_val]` of the vector.
2. Compute `scale = (max_val - min_val) / 255`.
3. Normalize each value: `normalized = (value - min_val) / scale`.
4. Quantize to `i8` values: `quantized = int8(normalized - 128)`.
5. Dequantize with `value = (quantized + 128) * scale + min_val`.

// Brake sensitivity embedding quantized for microsecond transfer
let brake_embedding = Vector::from_f32(sensor_data);
let quantized = quantize_embedding(&brake_embedding, QuantScheme::QInt8)?;
// Send over low-latency channel
send_to_controller(&quantized);
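Here is the worked QInt8 sketch promised above: the same formulas applied by hand to a tiny three-value vector. It is standalone arithmetic for illustration and does not use (or reproduce) the crate's internal implementation.

```rust
// Standalone illustration of the QInt8 steps above; not the crate's internals.
fn main() {
    let values = [-1.0_f32, 0.0, 1.0];
    let (min_val, max_val) = (-1.0_f32, 1.0_f32);
    let scale = (max_val - min_val) / 255.0; // ≈ 0.00784

    for v in values {
        // Quantize: map into [0, 255], then shift into the i8 range.
        let normalized = (v - min_val) / scale;
        let quantized = (normalized - 128.0).round() as i8;
        // Dequantize: undo the shift and the scaling.
        let restored = (quantized as f32 + 128.0) * scale + min_val;
        println!("{v:+.2} -> {quantized:>4} -> {restored:+.4}");
    }
}
```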
// 30 agents sharing embedding pool with minimal bandwidth
for agent in agents {
let q_emb = quantize_embedding(&agent.embedding(), QuantScheme::QInt8)?;
broadcast_to_pool(agent.id, q_emb);
}
// Low bandwidth, high intelligence
let edge_embedding = get_local_embedding();
let quantized = quantize_embedding(&edge_embedding, QuantScheme::QInt8)?;
// 4x smaller payload for network transfer
send_to_cloud(&quantized);
`quantize_embedding`
pub fn quantize_embedding(
emb: &Vector,
scheme: QuantScheme
) -> Result<QuantizedVector, QuantError>
Quantizes an F32 embedding vector using the specified scheme.
`dequantize_embedding`
pub fn dequantize_embedding(
q: &QuantizedVector
) -> Result<Vector, QuantError>
Dequantizes a quantized vector back to an approximate F32 representation.
`QuantizedVector`
pub struct QuantizedVector {
pub dim: u32, // Vector dimension
pub scheme: QuantScheme, // Quantization scheme used
pub scale: f32, // Scaling factor
pub zero_point: i8, // Zero-point offset
pub min_val: f32, // Minimum value (for reconstruction)
pub data: Vec<u8>, // Packed quantized data
}
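These fields are enough to reason about wire size on their own. The helper below derives a ratio comparable to `compression_ratio()` directly from the public fields; it assumes `data.len()` is the packed payload size, assumes a crate-root import path for the type, and ignores the small fixed metadata overhead.

```rust
use lnmp_quant::QuantizedVector;

// Sketch: derive a compression ratio from the public fields shown above.
// The crate's own `compression_ratio()` may account for metadata differently.
fn ratio_from_fields(q: &QuantizedVector) -> f32 {
    let original_bytes = q.dim as usize * 4; // dim F32 values, 4 bytes each
    original_bytes as f32 / q.data.len() as f32
}
```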
`QuantScheme`
pub enum QuantScheme {
QInt8, // 8-bit signed quantization
QInt4, // 4-bit packed (future)
Binary, // 1-bit sign-based (future)
FP16Passthrough, // Half-precision float (future)
}
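Because the enum has exactly these four variants, a match over it is exhaustive; for instance, mapping each scheme to its nominal bits per stored value (numbers taken from the comparison table above):

```rust
use lnmp_quant::QuantScheme;

// Nominal bits per stored value, per the comparison table above.
fn bits_per_value(scheme: QuantScheme) -> u32 {
    match scheme {
        QuantScheme::FP16Passthrough => 16,
        QuantScheme::QInt8 => 8,
        QuantScheme::QInt4 => 4,
        QuantScheme::Binary => 1,
    }
}
```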
`QuantError`
pub enum QuantError {
InvalidDimension(String),
InvalidScheme(String),
DataCorrupted(String),
EncodingFailed(String),
DecodingFailed(String),
}
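Every variant wraps a message string, so callers can branch on the failure class. A minimal handling sketch follows, reusing `embedding` from the quick start and matching all five variants so no `Debug` bound is assumed; the crate-root import path for `QuantError` is an assumption.

```rust
use lnmp_quant::{quantize_embedding, QuantError, QuantScheme};

// Branch on the failure class reported by quantization.
match quantize_embedding(&embedding, QuantScheme::QInt8) {
    Ok(q) => println!("quantized to {} bytes", q.data_size()),
    Err(QuantError::InvalidDimension(msg)) => eprintln!("bad dimension: {msg}"),
    Err(QuantError::InvalidScheme(msg)) => eprintln!("unsupported scheme: {msg}"),
    Err(QuantError::DataCorrupted(msg))
    | Err(QuantError::EncodingFailed(msg))
    | Err(QuantError::DecodingFailed(msg)) => eprintln!("quantization failed: {msg}"),
}
```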
Benchmarks on standard hardware (512-dimensional embeddings):
QInt8:
quantize_512dim time: [1.17 µs]
dequantize_512dim time: [457 ns]
roundtrip_512dim time: [1.63 µs]
accuracy cosine: >0.99

QInt4:
quantize_512dim time: [~600 ns]
dequantize_512dim time: [~230 ns]
compression ratio: 8.0x
accuracy cosine: >0.95

Binary:
quantize_512dim time: [~200 ns]
dequantize_512dim time: [~100 ns]
compression ratio: 32.0x
accuracy similarity: >0.85
Run benchmarks:
cargo bench -p lnmp-quant
See the `examples/` directory:
- `quant_basic.rs` - Basic quantization/dequantization
- `lnmp_integration.rs` - Integration with LNMP records
- `quant_debug.rs` - Debugging quantization behavior

Run an example:
cargo run -p lnmp-quant --example lnmp_integration
# Run all tests
cargo test -p lnmp-quant
# Run roundtrip tests only
cargo test -p lnmp-quant --test quant_roundtrip
# Run accuracy tests
cargo test -p lnmp-quant --test accuracy_tests
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
Licensed under either of:
at your option.
Related crates:
- `lnmp-core` - Core LNMP type definitions
- `lnmp-embedding` - Vector embedding support with delta encoding
- `lnmp-codec` - Binary codec for LNMP protocol