| Field | Value |
|-------|-------|
| Crates.io | koru-lambda-core |
| lib.rs | koru-lambda-core |
| version | 1.2.0 |
| created_at | 2025-11-18 19:43:08.414267+00 |
| updated_at | 2025-11-24 01:46:07.151789+00 |
| description | A minimal axiomatic system for distributed computation |
| homepage | |
| repository | https://github.com/swyrknt/koru-lambda-core |
| max_upload_size | |
| id | 1938901 |
| size | 513,405 |
A minimal axiomatic system for computation based on distinction calculus. This engine implements a timeless, self-consistent computational substrate where complex distributed system properties arise from simple synthesis operations.
Add the crate to your `Cargo.toml`:

```toml
[dependencies]
koru-lambda-core = "1.2.0"
```
```rust
use koru_lambda_core::{DistinctionEngine, Distinction};

fn main() {
    let mut engine = DistinctionEngine::new();

    // Synthesize the primordial distinctions
    let existence = engine.synthesize(engine.d0(), engine.d1());
    println!("Created distinction: {}", existence.id());

    // Build complex structures
    let order = engine.synthesize(&existence, engine.d0());
    let chaos = engine.synthesize(&existence, engine.d1());
    let nature = engine.synthesize(&order, &chaos);
    println!("Nature distinction: {}", nature.id());
}
```
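For readers without the crate to hand, the core idea of synthesis, combining two distinctions into a new one whose identity depends only on its inputs, can be sketched as a toy interning table. This is purely illustrative: `ToyEngine` and its fields are hypothetical stand-ins, not the crate's implementation.

```rust
use std::collections::HashMap;

/// Toy distinction engine: interns ordered pairs of ids and hands back a
/// stable id for each pair, so repeating a synthesis yields the same id.
struct ToyEngine {
    table: HashMap<(u64, u64), u64>,
    next: u64,
}

impl ToyEngine {
    fn new() -> Self {
        // Reserve ids 0 and 1 for the two primordial distinctions.
        ToyEngine { table: HashMap::new(), next: 2 }
    }
    fn d0(&self) -> u64 { 0 }
    fn d1(&self) -> u64 { 1 }
    fn synthesize(&mut self, a: u64, b: u64) -> u64 {
        if let Some(&id) = self.table.get(&(a, b)) {
            return id; // already interned: same inputs, same distinction
        }
        let id = self.next;
        self.next += 1;
        self.table.insert((a, b), id);
        id
    }
}

fn main() {
    let mut engine = ToyEngine::new();
    let (d0, d1) = (engine.d0(), engine.d1());
    let existence = engine.synthesize(d0, d1);
    let again = engine.synthesize(d0, d1);
    assert_eq!(existence, again); // synthesis is deterministic
    println!("existence id = {existence}");
}
```

The interning step is what makes the system "timeless" in spirit: a distinction's identity is a pure function of its constituents, never of when it was created.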
```javascript
import { Engine, NetworkAgent } from './koru-wrapper.js';

const engine = new Engine();

// Synthesize distinctions
const d2 = engine.synthesize(engine.d0Id(), engine.d1Id());
console.log(`Created distinction: ${d2}`);

// Network consensus
const agent = new NetworkAgent(engine);
agent.joinPeer('validator_0');
agent.joinPeer('validator_1');
console.log(`Leader: ${agent.getLeader()}`);
```
Build the universal WASM artifact:

```bash
./scripts/build_universal.sh
```
This produces a single artifact that runs on browsers, Node.js, Deno, Bun, Go, Kotlin, Swift, Python, and embedded systems.
The engine exhibits:
```
koru-lambda-core/
├── src/
│   ├── engine.rs                  # Core synthesis (265 lines)
│   ├── primitives.rs              # Data canonicalization
│   ├── wasm.rs                    # WASM bindings with binary marshalling
│   ├── lib.rs                     # Public API
│   └── subsystems/
│       ├── validator.rs           # Consensus validation (SPoC)
│       ├── compactor.rs           # Structural compaction (R → U)
│       ├── network.rs             # Forkless P2P consensus
│       ├── runtime.rs             # Async P2P networking (libp2p)
│       └── parallel.rs            # Multi-core processing
├── scripts/
│   └── build_universal.sh         # Universal WASM artifact builder
├── tests/                         # Comprehensive test suite
│   ├── end_to_end.rs              # Distributed system tests
│   ├── runtime_integration.rs     # Async runtime validation
│   ├── integration_tests.rs       # Falsification suite
│   ├── parallel_integration.rs    # Concurrency tests
│   └── throughput_verification.rs # Performance benchmarks
└── benches/
    └── performance.rs             # Criterion benchmarks
```
The project includes a comprehensive falsification test suite:
```rust
#[test]
fn test_structural_coherence() {
    // Tests whether graph topology correlates with causal evolution
    // Falsifies if: spatially adjacent nodes exhibit large causal age differences
}

#[test]
fn test_mathematical_invariants() {
    // Tests whether mathematical structures arise as deterministic patterns
    // Falsifies if: mathematical truths depend on construction method
}

#[test]
fn test_structural_feedback() {
    // Tests for high-coherence structural feedback
    // Falsifies if: no high-coherence structures arise
}
```
Run the test suite:
```bash
cargo test
cargo test --release  # For optimized builds
```
Benchmark the engine:
```bash
cargo bench
```
- Core Operations:
- Concurrency:
- Distributed Consensus:
- Storage Efficiency:
- WASM (Universal Artifact):
```rust
use koru_lambda_core::{
    DistinctionEngine, ParallelBatchProcessor, ParallelAction,
    ProcessingStrategy, TransactionBatch, TransactionAction, LocalCausalAgent,
};
use std::sync::Arc;

let engine = Arc::new(DistinctionEngine::new());
let mut processor = ParallelBatchProcessor::new(&engine);

// Create transaction batches
let batch = TransactionBatch {
    transactions: vec![
        TransactionAction { nonce: 0, data: vec![1, 2, 3] },
        TransactionAction { nonce: 1, data: vec![4, 5, 6] },
    ],
    previous_root: processor.get_current_root().id().to_string(),
};

// Process via the LocalCausalAgent trait
let action = ParallelAction {
    batches: vec![batch],
    strategy: ProcessingStrategy::Sequential,
};
let new_root = processor.synthesize_action(action, &engine);
println!("Processed {} batches", processor.batches_processed());
```
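The `previous_root` field ties each batch to the state it was built against, so a batch replayed against the same root reproduces the same new root. That chaining idea can be sketched with standard-library hashing; `fold_batch` is a hypothetical helper for illustration, not the crate's actual commitment scheme.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Fold a batch of (nonce, data) transactions into a new root that commits
/// to both the previous root and the batch contents (illustrative only).
fn fold_batch(previous_root: u64, transactions: &[(u64, Vec<u8>)]) -> u64 {
    let mut h = DefaultHasher::new();
    previous_root.hash(&mut h);
    for (nonce, data) in transactions {
        nonce.hash(&mut h);
        data.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let genesis = 0u64;
    let batch = vec![(0u64, vec![1, 2, 3]), (1u64, vec![4, 5, 6])];
    let root = fold_batch(genesis, &batch);

    // Same batch against the same root reproduces the root; changing the
    // previous root (or any transaction) yields a different result.
    assert_eq!(root, fold_batch(genesis, &batch));
    assert_ne!(root, fold_batch(genesis + 1, &batch));
    println!("root = {root:x}");
}
```

Because each root commits to its predecessor, any two replicas that agree on the latest root necessarily agree on the whole batch history.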
```rust
use koru_lambda_core::{DistinctionEngine, ParallelSynthesizer};
use std::sync::Arc;

let engine = Arc::new(DistinctionEngine::new());
let synthesizer = ParallelSynthesizer::new(engine.clone());

// Parallelize byte canonicalization using Rayon
let data: Vec<u8> = (0..100_000).map(|i| (i % 256) as u8).collect();
let results = synthesizer.canonicalize_bytes_parallel(data);
println!("Canonicalized {} bytes in parallel", results.len());
```
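Under the hood, data-parallel canonicalization follows the usual chunk-and-join pattern that Rayon automates: split the input, process chunks on separate threads, reassemble in order. A minimal sketch with `std::thread` alone, using a stand-in byte transform rather than the crate's real canonicalization:

```rust
use std::thread;

/// Illustrative chunk-and-join parallelism: split input across workers,
/// transform each chunk on its own thread, and reassemble results in order.
fn canonicalize_parallel(data: Vec<u8>, workers: usize) -> Vec<u8> {
    let chunk = (data.len() + workers - 1) / workers;
    let mut handles = Vec::new();
    for part in data.chunks(chunk.max(1)) {
        let part = part.to_vec();
        handles.push(thread::spawn(move || {
            // Stand-in transform; the real engine canonicalizes bytes here.
            part.into_iter().map(|b| b.wrapping_add(1)).collect::<Vec<u8>>()
        }));
    }
    // Joining in spawn order preserves the original byte ordering.
    let mut out = Vec::with_capacity(data.len());
    for h in handles {
        out.extend(h.join().unwrap());
    }
    out
}

fn main() {
    let data: Vec<u8> = (0..1000).map(|i| (i % 256) as u8).collect();
    let out = canonicalize_parallel(data.clone(), 4);
    assert_eq!(out.len(), data.len());
    println!("processed {} bytes across 4 workers", out.len());
}
```

Rayon's `par_iter` replaces the manual spawn/join bookkeeping with work stealing, which is why the crate reaches for it rather than raw threads.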
```rust
use koru_lambda_core::{DistinctionEngine, Canonicalizable};
use std::sync::Arc;
use std::thread;

let engine = Arc::new(DistinctionEngine::new());
let mut handles = vec![];

// Spawn multiple threads for concurrent synthesis
for thread_id in 0..10 {
    let engine_clone = Arc::clone(&engine);
    let handle = thread::spawn(move || {
        let byte = (thread_id % 256) as u8;
        byte.to_canonical_structure(&engine_clone)
    });
    handles.push(handle);
}

// Collect results; all synthesis is thread-safe via DashMap
for handle in handles {
    let result = handle.join().unwrap();
    println!("Result: {}", result.id());
}
```
```rust
use koru_lambda_core::{DistinctionEngine, ByteMapping};

let mut engine = DistinctionEngine::new();
let data = "Hello, World!".as_bytes();

// Map arbitrary data to distinction structures
for &byte in data {
    let distinction = ByteMapping::map_byte_to_distinction(byte, &mut engine);
    // Use the distinction for storage or computation
}
```
We welcome contributions! Please see our Contributing Guide for details.

1. Create a feature branch (`git checkout -b feature/amazing-feature`)
2. Commit your changes (`git commit -m 'Add amazing feature'`)
3. Push the branch (`git push origin feature/amazing-feature`)
4. Open a pull request

This project is dual-licensed; you may use it under either license, at your option.
Note: This is research software implementing novel distributed consensus mechanisms. While production-ready from an engineering perspective, the axiomatic approach is experimental.