| Crates.io | netabase |
| lib.rs | netabase |
| version | 0.0.5 |
| created_at | 2025-10-26 23:06:33.753185+00 |
| updated_at | 2025-11-24 14:38:51.624327+00 |
| description | A peer-to-peer networking layer built on libp2p with integrated type-safe storage, enabling distributed applications with automatic data synchronization across native and WASM environments. |
| homepage | |
| repository | https://github.com/newsnet-africa/netabase.git |
| max_upload_size | |
| id | 1901987 |
| size | 614,923 |
A peer-to-peer networking layer built on libp2p with integrated type-safe storage, enabling distributed applications with automatic data synchronization across native and WASM environments.
Netabase has begun integrating paxakos for distributed consensus. Implementation roadmap:

- Implement the `LogEntry` trait for netabase_store definitions
  - `id()` method returning a unique identifier for each entry
  - Extend `NetabaseDefinitionTrait` to require a `LogEntry` bound
- Create a `State` implementation for distributed state management
  - `apply(&mut self, entry: &LogEntry)` - process entries and update state
  - `freeze(&self)` - create an immutable snapshot of the current state
  - `cluster_at(&self, round: RoundNum)` - return cluster membership at a round
  - `concurrency(&self)` - return the parallelism level for round processing
- Implement the `NodeInfo` trait for peer identification based on `PeerId`
- Create a `PaxosComm` communicator for libp2p integration
  - Implement the `Communicator` trait with 12 associated types: `Node`, `RoundNum`, `CoordNum`, `LogEntry`, `Error`, `SendPrepare`, `SendProposal`, `SendCommit`, `SendCommitById`, `Abstain`, `Yea`, `Nay`
  - `send_prepare(coord, round, receivers)` - broadcast prepare messages
  - `send_proposal(coord, round, entry, receivers)` - propose a log entry
  - `send_commit(coord, round, entry, receivers)` - commit with the full entry
  - `send_commit_by_id(coord, round, entry_id, receivers)` - commit by ID only
- Create a custom libp2p protocol handler (`/paxos/1.0.0`) using the `request_response` behaviour
- Add a `PaxosBehaviour` to the libp2p swarm via the `NodeNetworkBehaviour` trait
- Integrate the paxakos node with the netabase lifecycle: build the `Node` in `start_swarm()` via `NodeBuilder` with the custom `Communicator` and `State`, and shut it down in `stop_swarm()`
- Implement consensus-backed operations: `put_record_consensus(&mut self, record: D)` - append via paxakos, await `Commit<S, R, P>` futures, and apply outcomes
- Add paxakos decorations:
  - `heartbeats` - node liveness monitoring
  - `autofill` - automatic log gap filling
  - `catch-up` - synchronize lagging nodes
  - `master-leases` - optimize read-only operations
- Implement cluster management
- Performance optimization
- Comprehensive testing
- Documentation & examples
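The `LogEntry`/`State` roles in the roadmap above can be illustrated with a minimal, self-contained sketch. Note these are simplified stand-ins: the real `paxakos::LogEntry` and `paxakos::State` traits carry more associated types (rounds, cluster membership) and async machinery, and the `PutEntry`/`KvState` types here are invented for illustration.

```rust
use std::collections::BTreeMap;

// Simplified stand-in for the paxakos-style trait named in the roadmap.
trait LogEntry {
    fn id(&self) -> u64; // unique identifier for each entry
}

#[derive(Clone, Debug)]
struct PutEntry {
    id: u64,
    key: String,
    value: String,
}

impl LogEntry for PutEntry {
    fn id(&self) -> u64 {
        self.id
    }
}

// A toy replicated state: applying committed entries in log order is
// deterministic, so every node converges to the same map.
#[derive(Clone, Default, Debug)]
struct KvState {
    map: BTreeMap<String, String>,
}

impl KvState {
    // Corresponds to `apply(&mut self, entry: &LogEntry)` in the roadmap.
    fn apply(&mut self, entry: &PutEntry) {
        self.map.insert(entry.key.clone(), entry.value.clone());
    }

    // Corresponds to `freeze(&self)`: an immutable snapshot of the state.
    fn freeze(&self) -> KvState {
        self.clone()
    }
}

fn main() {
    let log = vec![
        PutEntry { id: 1, key: "room".into(), value: "general".into() },
        PutEntry { id: 2, key: "room".into(), value: "dev".into() },
    ];
    let mut state = KvState::default();
    for entry in &log {
        state.apply(entry);
    }
    let snapshot = state.freeze();
    assert_eq!(snapshot.map.get("room").map(String::as_str), Some("dev"));
    println!("applied {} entries", log.len());
}
```

The key property the sketch demonstrates is why consensus only needs to agree on the *log*: identical entries applied in identical order yield identical state on every node.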
Key Design Decisions:

- `paxos` feature flag (native-only)

Dependencies:

- `paxakos = "0.13.0"` (already added to netabase_store)

Features:

- P2P Networking
- Cross-Platform Support
- Integrated Storage: netabase_store for type-safe data management
- Record Distribution
- Type-Safe Operations
- Event System
Add to your Cargo.toml:
```toml
[dependencies]
netabase = "0.0.5"
netabase_store = "0.0.5"
netabase_deps = "0.0.5"

# Required for the macros to work
bincode = { version = "2.0", features = ["serde"] }
serde = { version = "1.0", features = ["derive"] }
strum = { version = "0.27.2", features = ["derive"] }
derive_more = { version = "2.0.1", features = ["from", "try_into", "into"] }

# Runtime dependencies
tokio = { version = "1.0", features = ["full"] }
anyhow = "1.0"
```
```rust
use netabase_store::netabase_definition_module;

#[netabase_definition_module(ChatDefinition, ChatKeys)]
pub mod chat {
    use netabase_store::{NetabaseModel, netabase};

    #[derive(NetabaseModel, bincode::Encode, bincode::Decode, Clone, Debug)]
    #[netabase(ChatDefinition)]
    pub struct Message {
        #[primary_key]
        pub id: String,
        pub author: String,
        pub content: String,
        pub timestamp: i64,
        #[secondary_key]
        pub room_id: String,
    }
}
```
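To make the key usage later in this README easier to follow, here is a hand-written sketch of the kind of key types the `NetabaseModel` derive is assumed to generate for `Message`. `MessageKey` and `MessagePrimaryKey` are inferred from the query examples below; `MessageRoomIdKey` (for the `#[secondary_key]` field) is a guess at the naming scheme, and the real generated code may differ.

```rust
// Hypothetical shapes of macro-generated key types; not netabase_store's
// actual output.
#[derive(Clone, Debug, PartialEq)]
pub struct MessagePrimaryKey(pub String);

#[derive(Clone, Debug, PartialEq)]
pub struct MessageRoomIdKey(pub String);

// One enum per model, with a variant per indexed field, lets a single
// `get_record(key)` call dispatch on either index.
#[derive(Clone, Debug, PartialEq)]
pub enum MessageKey {
    Primary(MessagePrimaryKey),
    RoomId(MessageRoomIdKey),
}

fn main() {
    // A primary key addresses exactly one record; a secondary key can
    // address a group of records (all messages in a room).
    let by_id = MessageKey::Primary(MessagePrimaryKey("msg_123".into()));
    let by_room = MessageKey::RoomId(MessageRoomIdKey("general".into()));
    assert_ne!(by_id, by_room);
    if let MessageKey::Primary(MessagePrimaryKey(id)) = &by_id {
        assert_eq!(id, "msg_123");
    }
    println!("constructed keys: {:?}, {:?}", by_id, by_room);
}
```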
```rust
use chat::*;
use netabase::Netabase;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Create a netabase instance with persistent storage
    let mut netabase = Netabase::<ChatDefinition>::new_with_path("./chat_db")?;

    // Start the networking swarm
    netabase.start_swarm().await?;
    println!("Netabase started and listening for peers!");

    Ok(())
}
```
```rust
// Create a message
let message = Message {
    id: "msg_123".to_string(),
    author: "Alice".to_string(),
    content: "Hello, World!".to_string(),
    timestamp: chrono::Utc::now().timestamp(),
    room_id: "general".to_string(),
};

// Store locally and publish to the DHT
let result = netabase.put_record(message).await?;
println!("Message published! Result: {:?}", result);
```
```rust
// Query a specific record by key
let key = MessageKey::Primary(MessagePrimaryKey("msg_123".to_string()));
let result = netabase.get_record(key).await?;

// Query local records
let local_messages = netabase.query_local_records(Some(10)).await?;
println!("Found {} local messages", local_messages.len());

// Advertise as a provider for a key
let key = MessageKey::Primary(MessagePrimaryKey("msg_123".to_string()));
netabase.start_providing(key.clone()).await?;
println!("Now providing this message");

// Find providers for a key
let providers_result = netabase.get_providers(key).await?;
match providers_result {
    libp2p::kad::QueryResult::GetProviders(Ok(get_providers_ok)) => {
        use libp2p::kad::GetProvidersOk;
        match get_providers_ok {
            GetProvidersOk::FoundProviders { providers, .. } => {
                println!("Found {} providers", providers.len());
            }
            GetProvidersOk::FinishedWithNoAdditionalRecord { .. } => {
                println!("No providers found");
            }
        }
    }
    _ => {}
}
```
```rust
use netabase::NetabaseSwarmEvent;

// Subscribe to network events
let mut event_receiver = netabase.subscribe_to_broadcasts();

// Spawn a background task to handle events
tokio::spawn(async move {
    while let Ok(event) = event_receiver.recv().await {
        match &event.0 {
            libp2p::swarm::SwarmEvent::ConnectionEstablished { peer_id, .. } => {
                println!("✓ Connected to peer: {}", peer_id);
            }
            libp2p::swarm::SwarmEvent::Behaviour(behaviour_event) => {
                // Handle mDNS, Kad, and Identify events
                println!("Behaviour event: {:?}", behaviour_event);
            }
            _ => {}
        }
    }
});
```
Netabase supports multiple data models in a single network:
```rust
#[netabase_definition_module(AppDefinition, AppKeys)]
mod app {
    use super::*;

    #[derive(NetabaseModel, Clone, Debug, bincode::Encode, bincode::Decode, serde::Serialize, serde::Deserialize)]
    #[netabase(AppDefinition)]
    pub struct User {
        #[primary_key]
        pub id: u64,
        pub username: String,
        #[secondary_key]
        pub email: String,
    }

    #[derive(NetabaseModel, Clone, Debug, bincode::Encode, bincode::Decode, serde::Serialize, serde::Deserialize)]
    #[netabase(AppDefinition)]
    pub struct Post {
        #[primary_key]
        pub id: u64,
        pub title: String,
        pub author_id: u64,
    }
}
```
```rust
let mut app = Netabase::<AppDefinition>::new_with_path("./app_db")?;
app.start_swarm().await?;

// Each model type is independently managed
app.put_record(user).await?;
app.put_record(post).await?;
```
```rust
use netabase::network::config::{NetabaseConfig, StorageBackend};

// Use Redb instead of the default Sled
let config = NetabaseConfig::with_backend(StorageBackend::Redb);
let netabase = Netabase::<ChatDefinition>::new_with_config(config)?;

// Or specify both path and backend
let netabase = Netabase::<ChatDefinition>::new_with_path_and_backend(
    "./my_db",
    StorageBackend::Redb,
)?;
```
```rust
// Get the current DHT mode
let mode = netabase.get_mode().await?;
println!("Current mode: {:?}", mode);

// Switch to client mode (read-only, lower resource usage)
netabase.set_mode(Some(libp2p::kad::Mode::Client)).await?;

// Switch to server mode (full participation)
netabase.set_mode(Some(libp2p::kad::Mode::Server)).await?;
```
```rust
use libp2p::{Multiaddr, PeerId};

// Add a known peer
let peer_id: PeerId = "12D3KooW...".parse()?;
let address: Multiaddr = "/ip4/192.168.1.100/tcp/4001".parse()?;
netabase.add_address(peer_id, address).await?;

// Bootstrap to join the DHT network
let result = netabase.bootstrap().await?;
println!("Bootstrap result: {:?}", result);

// Remove a peer
netabase.remove_peer(peer_id).await?;
```
- `Netabase` struct: Main API entry point
- Network Layer (internal):
  - `NetabaseBehaviour`: libp2p network behaviour
  - `NetabaseStore`: Unified storage backend for the DHT
- Storage Layer (`netabase_store`)
- Event System
```text
Application
    ↓ put_record()
Netabase
    ├─→ Command Channel → Swarm Handler
    │                         ↓
    │                 NetabaseStore (local)
    │                         ↓
    │                   Kademlia DHT
    │                         ↓
    │                   Remote Peers
    │
    └─→ Broadcast Channel ← Swarm Events
              ↓
        Event Subscribers
```
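The command-channel / broadcast-channel split in the diagram can be sketched with plain std channels. This is an illustrative stand-in only: the real implementation runs on tokio channels (and the broadcast side supports multiple subscribers), and the `Command` type here is invented for the example.

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative command type: the public API sends commands to the swarm task.
enum Command {
    PutRecord(String),
    Shutdown,
}

fn main() {
    // Command channel: API -> swarm handler.
    let (cmd_tx, cmd_rx) = mpsc::channel::<Command>();
    // Event channel: swarm handler -> subscribers (single-consumer here;
    // the real design uses a multi-subscriber broadcast channel).
    let (event_tx, event_rx) = mpsc::channel::<String>();

    // Stand-in for the swarm handler task: consumes commands, stores
    // "records" locally, and emits events back to subscribers.
    let handler = thread::spawn(move || {
        let mut store = Vec::new();
        while let Ok(cmd) = cmd_rx.recv() {
            match cmd {
                Command::PutRecord(r) => {
                    store.push(r.clone());
                    let _ = event_tx.send(format!("stored: {r}"));
                }
                Command::Shutdown => break,
            }
        }
        store.len()
    });

    cmd_tx.send(Command::PutRecord("msg_123".into())).unwrap();
    cmd_tx.send(Command::Shutdown).unwrap();

    let event = event_rx.recv().unwrap();
    assert_eq!(event, "stored: msg_123");
    assert_eq!(handler.join().unwrap(), 1);
    println!("{event}");
}
```

The design point this captures: the application never touches the swarm directly, so all networking state lives on one task and the public API stays `&mut self`-free of locking concerns.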
Netabase builds on netabase_store for its storage layer, which provides excellent type safety and multi-backend support. However, this abstraction does come with some performance overhead (typically 5-10%). For applications where maximum performance is critical and you don't need the networking features, consider using netabase_store directly.
The main overhead sources are:
We're actively working to reduce this overhead while maintaining type safety and the clean API.
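If you want to check whether the abstraction overhead matters for your workload, a generic micro-benchmark harness is enough. This is not netabase API: the boxed-closure indirection below is just a stand-in for "an extra abstraction layer", and real measurements should use a proper benchmarking tool such as criterion.

```rust
use std::time::Instant;

// Minimal timing harness: run a closure many times and report the mean
// per-iteration cost in nanoseconds.
fn time_per_iter<F: FnMut()>(iters: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    start.elapsed().as_nanos() as f64 / iters as f64
}

fn main() {
    let data: Vec<u64> = (0..1_000).collect();

    // Baseline: the direct path.
    let direct = time_per_iter(1_000, || {
        let s: u64 = data.iter().sum();
        std::hint::black_box(s);
    });

    // "Wrapped" path: one dynamic-dispatch indirection per call, standing
    // in for an abstraction layer over the same work.
    let wrapped_op: Box<dyn Fn(&[u64]) -> u64> = Box::new(|d: &[u64]| d.iter().sum::<u64>());
    let wrapped = time_per_iter(1_000, || {
        let s = wrapped_op(&data);
        std::hint::black_box(s);
    });

    println!("direct: {direct:.1} ns/iter, wrapped: {wrapped:.1} ns/iter");
    assert!(direct > 0.0 && wrapped > 0.0);
}
```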
UniFFI Integration: We're planning to add UniFFI support to enable using netabase from other languages (Python, Kotlin/Swift, etc.):
This will make it possible to build distributed applications in Python, Swift, or Kotlin that can seamlessly communicate with Rust-based netabase nodes.
P2P Network Profiles: Planned features for easier distributed application development:
| Feature | Native | WASM |
|---|---|---|
| TCP | ✅ | ❌ |
| QUIC | ✅ | ❌ |
| mDNS | ✅ | ❌ |
| Kad DHT | ✅ | 🚧 |
| Sled Backend | ✅ | ❌ |
| Redb Backend | ✅ | ❌ |
| IndexedDB | ❌ | 🚧 |
✅ = Supported · ❌ = Not supported · 🚧 = Planned for a future release
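The native/WASM split in the table is resolved at compile time with `cfg` gating; the pattern looks like the following sketch (the module contents are illustrative stand-ins, not netabase's actual code):

```rust
// Compile-time platform selection: exactly one `backend` module is compiled,
// so native-only dependencies never reach the wasm32 build.
#[cfg(not(target_arch = "wasm32"))]
mod backend {
    pub fn name() -> &'static str {
        "sled" // native builds can use file-backed stores
    }
}

#[cfg(target_arch = "wasm32")]
mod backend {
    pub fn name() -> &'static str {
        "indexeddb" // browsers only get IndexedDB
    }
}

fn main() {
    // On a native build this prints "sled"; on wasm32 it would print
    // "indexeddb", and the other module is never even type-checked.
    println!("selected backend: {}", backend::name());
}
```

Feature flags (`--features native` / `--features wasm`) compose with this via `#[cfg(feature = "...")]` in the same way.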
See the examples/ directory:
```sh
cargo run --example simple_mdns_chat --features native alice

# In another terminal
cargo run --example simple_mdns_chat --features native bob
```

```sh
# Run all tests (native)
cargo test --features native

# Run a specific test
cargo test --features native test_name

# Build with release optimizations
cargo build --release --features native
```
- `new()` - Create with defaults
- `new_with_path(path)` - Custom database path
- `new_with_config(config)` - Custom configuration
- `start_swarm()` - Start networking
- `stop_swarm()` - Shutdown gracefully
- `subscribe_to_broadcasts()` - Get event receiver
- `put_record(model)` - Store and publish
- `get_record(key)` - Query network
- `remove_record(key)` - Remove locally
- `query_local_records(limit)` - Query local store
- `start_providing(key)` - Advertise as provider
- `stop_providing(key)` - Stop advertising
- `get_providers(key)` - Find providers
- `bootstrap()` - Join DHT network
- `add_address(peer_id, addr)` - Add peer
- `remove_address(peer_id, addr)` - Remove address
- `remove_peer(peer_id)` - Remove peer
- `get_mode()` - Query DHT mode
- `set_mode(mode)` - Change DHT mode
- `get_protocol_names()` - Get protocol info

Netabase includes a comprehensive test suite to ensure reliability and correctness.
Unit Tests: Core functionality tests

```sh
cargo test --lib
```

Integration Tests: Multi-node P2P tests using `std::process::Command`

```sh
# Basic P2P tests
cargo test --test p2p_integration_tests -- --ignored --test-threads=1

# Advanced DHT tests
cargo test --test dht_advanced_tests -- --ignored --test-threads=1

# Chat application tests
cargo test --test chat_integration_tests -- --ignored --test-threads=1
```

Build Verification: Ensures examples, doctests, and benchmarks compile

```sh
cargo test --test build_verification
```

Network Topology Tests: Inter-process P2P communication tests with various network configurations

```sh
# Run all network topology tests
cargo test --test network_topology_tests --features native -- --ignored --test-threads=1

# Run a specific test
cargo test --test network_topology_tests test_two_node_basic --features native -- --ignored
```

Available tests:

- `test_two_node_basic`: Simple two-node communication (5 messages)
- `test_two_node_many_messages`: Two nodes with 20 messages
- `test_multi_sender_single_receiver`: 3 senders, 1 receiver
- `test_message_content_integrity`: Verifies message content is preserved

WASM Compilation Tests: Verifies WASM target compilation

```sh
cargo test --test wasm_compilation
```

Benchmarks: Performance benchmarking

```sh
cargo bench
```
Run all tests systematically using the provided Nushell script:
```sh
# Make the script executable
chmod +x run_comprehensive_tests.nu

# Run all tests
./run_comprehensive_tests.nu
```
The test suite covers:
For continuous integration, use:
```sh
# Quick test suite (no integration tests)
cargo test --all-features

# Full test suite including integration tests
cargo test --all-features -- --ignored --test-threads=1
```
Note: Integration tests use --test-threads=1 to avoid port conflicts when spawning multiple test nodes.
- `add_address()`

WASM support is under active development. The `wasm` feature exists but requires additional work to fully function.
The following issues prevent successful WASM compilation and need to be resolved:
Problem: The `IndexedDBStore` and `MemoryStore` implementations use sled-specific methods (`to_ivec()` and `from_ivec()`) that don't exist in the WASM context.

Location:

- `netabase_store/src/databases/indexeddb_store.rs:204`
- `netabase_store/src/databases/memory_store.rs:334, 376, 607`

Error:

```text
error[E0599]: no method named `to_ivec` found for type parameter `D`
error[E0599]: no function or associated item named `from_ivec` found for type parameter `D`
```
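A backend-agnostic alternative to this `IVec` coupling can be sketched as follows. The trait and method names here are illustrative assumptions, not netabase_store's actual API: the idea is simply to round-trip every definition through plain `Vec<u8>`, which every backend (sled, redb, IndexedDB, in-memory) understands.

```rust
// Backend-agnostic replacement for sled's IVec-coupled conversions.
// Names are illustrative, not netabase_store's real API.
trait ToBytes: Sized {
    fn to_vec(&self) -> Vec<u8>;
    fn from_vec(bytes: &[u8]) -> Option<Self>;
}

#[derive(Debug, PartialEq)]
struct Record {
    id: u64,
}

impl ToBytes for Record {
    fn to_vec(&self) -> Vec<u8> {
        // Big-endian so byte order also sorts like the numeric key.
        self.id.to_be_bytes().to_vec()
    }

    fn from_vec(bytes: &[u8]) -> Option<Self> {
        let arr: [u8; 8] = bytes.try_into().ok()?;
        Some(Record { id: u64::from_be_bytes(arr) })
    }
}

fn main() {
    let rec = Record { id: 42 };
    // Round-tripping through Vec<u8> needs no sled-specific IVec, so the
    // same model code compiles for native and wasm32 targets.
    let bytes = rec.to_vec();
    let back = Record::from_vec(&bytes).unwrap();
    assert_eq!(back, rec);
    println!("round-tripped record id {}", back.id);
}
```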
Resolution Needed:

- Use `to_vec()` and `from_vec()` for all backends
- Remove the sled-specific `IVec` usage

Previous Issue: Some features were referenced but not properly defined.
Resolution: Feature gating has been fixed:

- The `sled`, `redb`, and `libp2p` features are now properly gated
- `ToIVec` trait methods are now correctly gated on `feature = "sled"` instead of `feature = "native"`
- The `RecordStoreExt` trait is properly gated on `all(feature = "libp2p", not(target_arch = "wasm32"))`

To complete WASM support, the following tasks must be completed:

- Fix feature gating for the `sled`, `redb`, and `libp2p` features (resolved)

To test WASM compilation:
```sh
# Install the WASM target
rustup target add wasm32-unknown-unknown

# Attempt to build for WASM
cargo build --target wasm32-unknown-unknown --no-default-features --features wasm

# Run WASM-specific tests
cargo test --test wasm_compilation
```
Until WASM support is fully implemented, you can:

- Use the `native` feature for desktop/server applications
- Use IndexedDB directly
- Use the `netabase_definition_module` macro

This project is licensed under the GPL 3 License.
Contributions welcome! Please ensure: