| Crates.io | saorsa-gossip-types |
| lib.rs | saorsa-gossip-types |
| version | 0.4.0 |
| created_at | 2025-10-04 21:54:42.439801+00 |
| updated_at | 2026-01-24 23:28:51.281659+00 |
| description | Core types for Saorsa Gossip: TopicId, PeerId, MessageHeader, and wire format |
| homepage | |
| repository | https://github.com/dirvine/saorsa-gossip |
| max_upload_size | |
| id | 1868425 |
| size | 68,154 |
A post-quantum secure gossip overlay network for decentralized peer-to-peer communication. Designed to replace DHT-based discovery with a contact-graph-aware gossip protocol, providing low-latency broadcast, partition tolerance, and quantum-resistant cryptography.
Saorsa Gossip implements a complete gossip overlay with:
Status: ⚠️ Alpha (workspace v0.2.1): libraries compile and ship with >260 automated tests, but the CLI/coordinator binaries are still experimental and several protocols (e.g. presence MLS export) are not finalized. See DESIGN.md for current limitations.
┌───────────────────────────────────────────────────────┐
│                     Saorsa Gossip                      │
│                                                        │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐   │
│  │ Presence │ │  PubSub  │ │   CRDT   │ │  Groups  │   │
│  │(Beacons) │ │(Plumtree)│ │   Sync   │ │  (MLS)   │   │
│  └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘   │
│       │            │            │            │         │
│  ┌────┴────────────┴────────────┴────────────┴───┐     │
│  │               Membership Layer                │     │
│  │              (HyParView + SWIM)               │     │
│  └──────────────────────┬────────────────────────┘     │
│                         │                              │
│  ┌──────────────────────┴────────────────────────┐     │
│  │          Transport Layer (ant-quic)           │     │
│  │      QUIC + PQC TLS 1.3 + NAT Traversal       │     │
│  └───────────────────────────────────────────────┘     │
└─────────────────────────────────────────────────────────┘
This repository tracks the Saorsa Gossip workspace at v0.2.1. Some crates are published on crates.io already, but others are only available from source while the alpha stabilises.
| Crate | Purpose | Why It's Important |
|---|---|---|
| types | Core types (TopicId, PeerId, MessageHeader, wire formats) | Foundation - Defines all fundamental data structures and message formats used across the entire network. Includes BLAKE3-based message ID generation and CBOR wire serialization. |
| identity | ML-DSA-65 key generation, signing, and verification | Security Core - Provides quantum-resistant digital signatures for all messages. Every peer identity is backed by ML-DSA-65 keypairs, ensuring authenticity in a post-quantum world. |
| transport | QUIC transport with ant-quic, NAT traversal | Network Layer - Handles all peer-to-peer communication with low-latency QUIC streams. Includes hole-punching for NAT traversal and connection migration for mobile nodes. |
| membership | HyParView partial views + SWIM failure detection | Peer Discovery - Maintains partial views of the network (8-12 active peers, 64-128 passive). SWIM detects failures in <5s, HyParView heals partitions through periodic shuffles. Critical for network connectivity. |
| pubsub | Plumtree epidemic broadcast with EAGER/IHAVE/IWANT | Message Dissemination - Efficiently broadcasts messages to all topic subscribers. Uses spanning tree (EAGER) for low latency and lazy links (IHAVE) for redundancy. Targets <500ms P50 broadcast latency. |
| coordinator | Bootstrap node discovery, address reflection, relay | Network Bootstrap - Enables new peers to join the network. Publishes Coordinator Adverts (ML-DSA signed), provides FOAF (friends-of-friends) discovery, and optional relay services for NAT-restricted peers. |
| rendezvous | k=16 rendezvous sharding for global findability | Global Discovery - Implements 65,536 content-addressed shards (BLAKE3-based) for finding peers without DHTs. Providers publish signed summaries to deterministic shards, enabling discovery through capability queries. |
| groups | MLS group key derivation with BLAKE3 KDF | Group Security - Wraps MLS (RFC 9420) for end-to-end encrypted group messaging. Derives presence beaconing secrets from MLS exporter contexts using BLAKE3 keyed hashing. Essential for private group communication. |
| presence | MLS-derived beacon broadcasting, FOAF queries | Online Detection - Broadcasts encrypted presence beacons (10-15 min TTL) derived from group secrets. Enables "who's online" queries within groups and FOAF discovery (3-4 hop TTL). Privacy-preserving through MLS encryption. |
| crdt-sync | Delta-CRDTs (OR-Set, LWW-Register) with anti-entropy | Local-First Data - Provides conflict-free replicated data types for distributed state. OR-Set tracks membership, LWW-Register for scalar values. Delta-based sync minimizes bandwidth. Anti-entropy every 30s ensures eventual consistency. |
Why these crates matter together: They form a complete decentralized gossip network stack - from quantum-resistant identities and QUIC transport, through membership and broadcast protocols, to group encryption and local-first data sync. No DHT, no central servers, pure peer-to-peer with post-quantum security.
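As an illustration of the rendezvous scheme in the table above, a 16-bit shard index can be derived by hashing a lookup key with BLAKE3 and taking the first two bytes of the digest. This is a minimal sketch of the idea only, not the saorsa-gossip-rendezvous API; the use of the blake3 crate and the key encoding are assumptions:

// Sketch: map an arbitrary capability/lookup key onto one of the 65,536
// rendezvous shards (k = 16 bits). The real crate may encode keys differently.
fn shard_for_key(key: &[u8]) -> u16 {
    let digest = blake3::hash(key); // uniformly distributed 32-byte digest
    let bytes = digest.as_bytes();
    u16::from_be_bytes([bytes[0], bytes[1]]) // first two bytes -> shard 0..65_535
}

fn main() {
    // Providers and queriers that agree on the key land on the same shard,
    // so discovery needs no DHT routing.
    let shard = shard_for_key(b"capability:storage/v1");
    println!("publish signed summary to shard {shard}");
}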
Saorsa Gossip ships two binaries for testing and deployment:
| Binary | Crate | Purpose |
|---|---|---|
| saorsa-gossip-coordinator | saorsa-coordinator | Bootstrap/coordinator node for network discovery (alpha; adverts generated but not broadcast on the wire yet) |
| saorsa-gossip | saorsa-gossip | CLI tool for testing network features (alpha; commands are gradually being implemented) |
These binaries are still under heavy development. Use them for experimentation, not production deployments, until the remaining TODOs tracked in this README/DESIGN are resolved.
Install both binaries from crates.io:
# Install coordinator binary (provides saorsa-gossip-coordinator command)
cargo install saorsa-coordinator
# Install CLI tool (provides saorsa-gossip command)
cargo install saorsa-gossip
Or build from source:
# Clone repository
git clone https://github.com/dirvine/saorsa-gossip.git
cd saorsa-gossip
# Build both binaries
cargo build --release -p saorsa-coordinator -p saorsa-gossip
# Binaries available at:
# - target/release/saorsa-gossip-coordinator
# - target/release/saorsa-gossip
Coordinators provide bootstrap discovery for new peers joining the network:
# Start a coordinator on port 7000 with verbose logging
saorsa-gossip-coordinator \
--verbose \
--bind 0.0.0.0:7000 \
--roles coordinator,reflector,relay \
--publish-interval 60
Options:
- --bind <ADDR> - Address to bind to (default: 0.0.0.0:7000)
- --roles <ROLES> - Comma-separated roles: coordinator, reflector, relay, rendezvous
- --publish-interval <SECS> - Advert publish interval in seconds (default: 300)
- --identity-path <PATH> - Path to ML-DSA identity file (default: ~/.saorsa-gossip/coordinator.identity)
- --verbose - Enable verbose DEBUG logging

Roles Explained:
What the coordinator does:
Example output:
INFO Starting Saorsa Gossip Coordinator
INFO Bind address: 0.0.0.0:7000
INFO Roles: coordinator,reflector,relay
INFO Loaded identity: c6333dcf4207a805989f9743e8b42d8e38ea35b085b2d54e80103f2c9725d41f
INFO Coordinator advert publisher started (interval: 60s)
DEBUG Published coordinator advert (3551 bytes)
The saorsa-gossip CLI exercises all library features:
# Create a new ML-DSA identity
saorsa-gossip identity create --alias Alice
# List all identities in keystore
saorsa-gossip identity list
# Show identity details
saorsa-gossip identity show Alice
# Delete an identity
saorsa-gossip identity delete Alice
Output example:
✅ Created identity: Alice
PeerId: e4338043f8a848e62110892ca8321f25fad745a615f9dd30f7515aba93988d7a
Saved to: /Users/you/.saorsa-gossip/keystore
# Join the gossip network via coordinator
saorsa-gossip network join \
--coordinator 127.0.0.1:7000 \
--identity Alice \
--bind 0.0.0.0:0
# Show network status
saorsa-gossip network status
# List known peers
saorsa-gossip network peers
# Subscribe to a topic
saorsa-gossip pubsub subscribe --topic news
# Publish a message
saorsa-gossip pubsub publish --topic news --message "Hello, gossip!"
# List subscriptions
saorsa-gossip pubsub list
# Start broadcasting presence
saorsa-gossip presence start --topic general
# Check who's online
saorsa-gossip presence online --topic general
# Stop broadcasting
saorsa-gossip presence stop --topic general
Run a multi-node test network on your local machine:
Terminal 1 - Start Coordinator:
saorsa-coordinator --verbose --bind 127.0.0.1:7000 --roles coordinator,reflector --publish-interval 10
Terminal 2 - Start Second Coordinator:
saorsa-coordinator --verbose --bind 127.0.0.1:7001 --roles coordinator,relay --publish-interval 15 \
--identity-path ~/.saorsa-gossip/coordinator2.identity
Terminal 3 - Create Test Identities:
# Create 3 test node identities
saorsa-gossip identity create --alias Node1
saorsa-gossip identity create --alias Node2
saorsa-gossip identity create --alias Node3
# Verify they were created
saorsa-gossip identity list
What you'll see:
Test Results from Local Validation:
All binaries use structured logging with the tracing crate:
Log Levels:
- INFO - Operational events (startup, identity loading, service status)
- DEBUG - Detailed activity (advert publications, message counts)

Enable verbose logging:
# For coordinator
saorsa-coordinator --verbose ...
# For CLI tool
saorsa-gossip --verbose identity create --alias Test
Log format:
2025-10-05T13:34:34.486139Z INFO Starting Saorsa Gossip Coordinator
2025-10-05T13:34:34.486960Z INFO Loaded identity: c6333dcf...725d41f
2025-10-05T13:34:34.488876Z DEBUG Published coordinator advert (3551 bytes)
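When embedding the libraries in your own binary instead of running the shipped executables, a roughly equivalent verbose setup can be wired up with tracing-subscriber. This is a sketch under that assumption, not the binaries' exact subscriber configuration:

use tracing::Level;

fn main() {
    // Roughly what --verbose does: emit DEBUG and above with timestamps.
    // Requires the tracing and tracing-subscriber crates.
    tracing_subscriber::fmt()
        .with_max_level(Level::DEBUG)
        .init();

    tracing::info!("Starting Saorsa Gossip Coordinator");
    tracing::debug!("Published coordinator advert ({} bytes)", 3551);
}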
Before deploying to production, verify:
Issue: "Address already in use"
--bind 127.0.0.1:PORT with a different PORTIssue: "Failed to read keystore file"
Issue: Coordinator not publishing adverts
--roles includes coordinator--publish-interval is reasonable (>5s)Add to your Cargo.toml:
[dependencies]
saorsa-gossip-types = "0.2.1"
saorsa-gossip-identity = "0.2.1"
saorsa-gossip-transport = "0.2.1"
saorsa-gossip-membership = "0.2.1"
saorsa-gossip-pubsub = "0.2.1"
saorsa-gossip-coordinator = "0.2.1"
saorsa-gossip-rendezvous = "0.2.1"
saorsa-gossip-groups = "0.2.1"
saorsa-gossip-presence = "0.2.1"
saorsa-gossip-crdt-sync = "0.2.1"
NOTE: A few crates are still stabilising; if cargo add cannot find a version yet, depend on the git repository for now:

saorsa-gossip-pubsub = { git = "https://github.com/dirvine/saorsa-gossip", tag = "v0.2.1" }
use std::{net::SocketAddr, sync::Arc};
use bytes::Bytes;
use saorsa_gossip_identity::MlDsaKeyPair;
use saorsa_gossip_membership::{
Membership, HyParViewMembership, DEFAULT_ACTIVE_DEGREE, DEFAULT_PASSIVE_DEGREE,
};
use saorsa_gossip_pubsub::{PubSub, PlumtreePubSub};
use saorsa_gossip_transport::UdpTransportAdapter;
use saorsa_gossip_types::{PeerId, TopicId};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let topic = TopicId::from_entity("demo-room");
let peer_id = PeerId::new([7u8; 32]);
let bind_addr: SocketAddr = "0.0.0.0:0".parse()?;
let transport = Arc::new(UdpTransportAdapter::new(bind_addr, vec![]).await?);
let signing_key = MlDsaKeyPair::generate()?;
// Membership can run without seeds for local experimentation.
let membership = HyParViewMembership::new(
peer_id,
DEFAULT_ACTIVE_DEGREE,
DEFAULT_PASSIVE_DEGREE,
transport.clone(),
);
membership.join(vec![]).await?;
// PubSub requires a signing key and the shared transport.
let pubsub = PlumtreePubSub::new(peer_id, transport.clone(), signing_key);
let mut rx = pubsub.subscribe(topic);
pubsub
.initialize_topic_peers(topic, membership.active_view())
.await;
// Publish and observe the loopback delivery.
pubsub.publish(topic, Bytes::from("Hello, gossip!")).await?;
if let Some((from, data)) = rx.recv().await {
println!("Received from {}: {:?}", from, data);
}
Ok(())
}
HyParView: Partial views for connectivity
SWIM: Failure detection
Beacons: MLS exporter-derived tags, ML-DSA signed (alpha builds still use deterministic placeholders until full MLS exporter integration lands; treat them as non-private)
FOAF Queries: Friends-of-friends discovery
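As a rough illustration of the beacon design above, a rotating tag can be produced by BLAKE3-keyed-hashing a group secret together with the current time bucket. The secret below is a placeholder for the MLS-exporter-derived value, and the label/epoch layout is an assumption, not the finalized wire format:

use std::time::{SystemTime, UNIX_EPOCH};

// One tag per ~10-minute bucket, in line with the 10-15 min beacon TTL above.
const BEACON_EPOCH_SECS: u64 = 600;

fn beacon_tag(exporter_secret: &[u8; 32], topic_label: &[u8]) -> [u8; 32] {
    let epoch = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before 1970")
        .as_secs()
        / BEACON_EPOCH_SECS;

    // Keyed BLAKE3 over (label || epoch): only holders of the group secret
    // can link the tag back to the group, which keeps beacons private.
    let mut input = Vec::with_capacity(topic_label.len() + 8);
    input.extend_from_slice(topic_label);
    input.extend_from_slice(&epoch.to_be_bytes());
    *blake3::keyed_hash(exporter_secret, &input).as_bytes()
}

fn main() {
    let secret = [0u8; 32]; // placeholder for an MLS-exporter-derived secret
    let tag = beacon_tag(&secret, b"presence:demo-topic");
    println!("beacon tag: {:02x?}", tag);
}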
Provided by:
- saorsa-pqc v0.3.14+ - PQC primitives including ML-KEM, ML-DSA, SLH-DSA, and ChaCha20-Poly1305
- saorsa-mls - MLS protocol

| Attack | Mitigation |
|---|---|
| Spam/Sybil | Invited joins, capability checks, scoring |
| Eclipse | HyParView shuffles, passive diversity |
| Replay | Per-topic nonces, signature checks, expiry |
| Partition | Plumtree lazy links, anti-entropy |
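For the replay row, the core mechanic is remembering message IDs that have already been delivered and discarding anything expired. A simplified sketch of that check follows; the real per-topic nonce and expiry rules live in the types/pubsub crates and may differ:

use std::collections::HashSet;
use std::time::{Duration, SystemTime};

/// Simplified replay filter: reject messages that are expired or already seen.
/// The real crates use BLAKE3-derived message IDs and per-topic state.
struct ReplayFilter {
    seen: HashSet<[u8; 32]>,
    max_age: Duration,
}

impl ReplayFilter {
    fn new(max_age: Duration) -> Self {
        Self { seen: HashSet::new(), max_age }
    }

    fn accept(&mut self, msg_id: [u8; 32], sent_at: SystemTime) -> bool {
        let fresh = SystemTime::now()
            .duration_since(sent_at)
            .map(|age| age <= self.max_age)
            .unwrap_or(true); // tolerate small clock skew into the future
        // insert() returns false if the ID was already present (a replay).
        fresh && self.seen.insert(msg_id)
    }
}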
Real hardware only. As of January 16, 2026 we removed the deterministic simulator, mock transports, and synthetic load crates. Every test now speaks to a real ant-quic endpoint so local verification exercises the exact networking stack that ships to production.
| Category | Description |
|---|---|
| Unit tests | cargo test --all covers membership, pubsub, presence, rendezvous, transport, and coordinator logic using in-process ant-quic nodes. |
| Doctests | API snippets in this repository are compiled and executed with cargo test --doc. |
| Transport benches | examples/throughput_test.rs pushes real QUIC traffic between two processes to capture throughput/latency numbers. |
# Run every unit test (real sockets will bind on 127.0.0.1)
cargo test --all
# Verify public API snippets
cargo test --doc
Use the shipping examples to exercise the QUIC stack end-to-end:
# Terminal 1 โ receive large payloads over QUIC
cargo run --example throughput_test --release -- receiver --bind 127.0.0.1:8000
# Terminal 2 โ stream payloads to the receiver
cargo run --example throughput_test --release -- sender --coordinator 127.0.0.1:8000 --bind 127.0.0.1:9000
These programs do not stub anything: they start real ant-quic nodes, perform ML-KEM+ML-DSA handshakes, and transfer actual data across the membership/pubsub/bulk streams. That matches the runtime used in production and in the communitas consumer.
Spin up the coordinator and CLI binaries to validate bootstrap + gossip flows over real sockets:
cargo run --bin coordinator -- --bind 0.0.0.0:9090
cargo run --bin cli -- --coordinator 127.0.0.1:9090 --bind 127.0.0.1:9100
With two CLI instances joining the same coordinator you'll observe membership churn, FOAF lookups, and pub/sub fan-out exactly as they will behave on the public network.
# Build all crates
cargo build --release
# Run tests
cargo test --all
# Run with all features
cargo build --all-features
# Unit tests
cargo test --all
# Integration tests
cargo test --test integration_tests
# Performance benchmarks
cargo bench --bench performance
# Code coverage report
cargo tarpaulin --out Html
# Format code
cargo fmt --all
# Lint with Clippy (zero warnings enforced)
cargo clippy --all-features --all-targets -- -D warnings
# Generate documentation
cargo doc --all-features --no-deps --open
We document significant architectural decisions in ADRs. These explain why we made specific choices:
| ADR | Title | Summary |
|---|---|---|
| ADR-001 | Protocol Layering | HyParView + SWIM + Plumtree: three-layer gossip architecture |
| ADR-002 | Post-Quantum Cryptography | Pure PQC with ML-DSA-65, ML-KEM-768, ChaCha20-Poly1305 |
| ADR-003 | Delta-CRDT Synchronization | OR-Set, LWW-Register with IBLT anti-entropy |
| ADR-004 | Seedless Bootstrap | Coordinator Adverts for infrastructure-free discovery |
| ADR-005 | Rendezvous Shards | 65,536 content-addressed shards as DHT replacement |
| ADR-006 | MLS Group Encryption | RFC 9420 for efficient group key management |
| ADR-007 | FOAF Discovery | Privacy-preserving bounded social graph walks |
| ADR-008 | Stream Multiplexing | 3-stream QUIC design for protocol isolation |
| ADR-009 | Peer Scoring | Multi-metric quality tracking for routing |
| ADR-010 | Deterministic Simulator (retired) | Historical record of the removed simulator effort |
See docs/adr/README.md for the complete index and ADR template.
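To make ADR-003 concrete, a last-writer-wins register resolves conflicts by comparing timestamps and breaking ties on the writer's peer ID, so every replica converges regardless of merge order. A minimal sketch of that merge rule, independent of the actual saorsa-gossip-crdt-sync types:

/// Minimal last-writer-wins register illustrating the merge rule behind
/// ADR-003. The real crdt-sync types carry more metadata (deltas, causal
/// context); this shows only the conflict-resolution idea.
#[derive(Clone, Debug, PartialEq)]
struct LwwRegister<T> {
    value: T,
    timestamp: u64,   // logical or wall-clock time supplied by the writer
    writer: [u8; 32], // peer ID, used to break timestamp ties deterministically
}

impl<T: Clone> LwwRegister<T> {
    fn merge(&mut self, other: &Self) {
        // Higher timestamp wins; on a tie, the larger writer ID wins, so all
        // replicas converge to the same value in any merge order.
        if (other.timestamp, other.writer) > (self.timestamp, self.writer) {
            self.value = other.value.clone();
            self.timestamp = other.timestamp;
            self.writer = other.writer;
        }
    }
}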
- throughput_test running against ant-quic
- BootstrapCache

Comprehensive benchmarks on localhost (ant-quic 0.10.3 with direct stream acceptance):
| Message Size | Throughput (Mbps) | Throughput (MB/s) | Latency | Notes |
|---|---|---|---|---|
| 1 KB | 281 | 33.5 | <1ms | ✅ Low-latency messaging |
| 10 KB | 2,759 | 328.9 | <1ms | ✅ Optimal for small payloads |
| 100 KB | 22,230 | 2,650 | <10ms | ✅ Excellent throughput |
| 1 MB | 79,875 | 9,522 | ~10ms | ✅ Outstanding performance |
| 10 MB | 1,471 | 175.4 | ~57ms | ✅ Sustained bulk transfer |
| 50 MB | 1,392 | 166.0 | ~300ms | ✅ Large file transfer |
| 100 MB | 1,400+ | 167+ | ~600ms | ✅ Consistent large transfers |
Test Environment:
Key Achievements:
Technical Implementation:
- nat_endpoint.list_connections()

| Metric | Target | Status |
|---|---|---|
| Broadcast P50 latency | < 500ms | Testing |
| Broadcast P95 latency | < 2s | Testing |
| Failure detection | < 5s | Testing |
| Memory per node | < 50MB | Testing |
| Messages/sec/node | > 100 | ✅ Achieved (>2000 small msgs/sec) |
| Transport latency | < 10ms | ✅ Achieved (4ms connection, <1ms for 1KB) |
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
High Priority (blocking):
Medium Priority (important):
Low Priority (enhancement):
Licensed under either of:

- Apache License, Version 2.0
- MIT license

at your option.
Built on top of:
- ant-quic - QUIC transport with NAT traversal
- saorsa-pqc - Post-quantum cryptography
- saorsa-mls - MLS group messaging

Inspired by:
✅ Status (Jan 24 2026): v0.3.0 ships a production-ready QUIC + PQC gossip stack with deployable coordinator/CLI binaries and no simulators or mock transports. All tests operate over real sockets and ML-KEM/ML-DSA handshakes. The transport layer has been simplified to use ant-quic's native infrastructure directly, removing ~4,000 lines of redundant multiplexer code.
Next Steps: tighten ops tooling (metrics + alerting around real transports), finalize IBLT reconciliation + peer scoring, and extend the runtime glue used by Communitas Sites.
See DESIGN.md for the complete technical specification and implementation roadmap.