| Crates.io | netidx-netproto |
| lib.rs | netidx-netproto |
| version | 0.31.3 |
| created_at | 2021-01-13 20:10:32.958739+00 |
| updated_at | 2025-12-24 23:02:19.207768+00 |
| description | netidx wire protocol |
| homepage | https://netidx.github.io/netidx-book/ |
| repository | https://github.com/estokes/netidx |
| max_upload_size | |
| id | 341531 |
| size | 99,875 |
Follow me on X • Join us on Discord
Real-time data sharing for distributed systems, without the message broker.
Netidx is a high-performance Rust middleware that lets you publish and subscribe to live data across your network using a simple, hierarchical namespace. Think of it as a distributed filesystem for streaming data, where values update in real-time and programs can both read and write.
- Built for Performance
- Secure by Default
- Simple Mental Model: values live at hierarchical paths like /sensors/temperature or /trading/prices/AAPL
- Production Ready
Publishing sensor data is just a few lines:
```rust
use netidx::{
    config::Config,
    path::Path,
    // Note: depending on your netidx version, DesiredAuth may instead live in
    // netidx::resolver_client.
    publisher::{DesiredAuth, PublisherBuilder, Value},
};
use tokio::time::{self, Duration};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let cfg = Config::load_default()?;
    let publisher = PublisherBuilder::new(cfg)
        .desired_auth(DesiredAuth::Anonymous)
        .bind_cfg(Some("192.168.0.0/16".parse()?))
        .build()
        .await?;
    // get_temperature_reading() stands in for your own sensor read,
    // returning a netidx Value
    let temp = publisher.publish(
        Path::from("/sensors/lab/temperature"),
        get_temperature_reading().await,
    )?;
    loop {
        time::sleep(Duration::from_secs(1)).await;
        let mut batch = publisher.start_batch();
        temp.update(&mut batch, get_temperature_reading().await);
        batch.commit(None).await;
    }
}
```
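Batches are not limited to a single value. The sketch below builds only on the calls already shown above (the humidity path and the read_sensors() helper are illustrative, not part of the netidx API) and updates two values in one batch so both updates are committed together:

```rust
// Sketch: assumes the same `publisher` as in the example above.
let temp = publisher.publish(Path::from("/sensors/lab/temperature"), Value::Null)?;
let humidity = publisher.publish(Path::from("/sensors/lab/humidity"), Value::Null)?;

loop {
    time::sleep(Duration::from_secs(1)).await;
    // read_sensors() is a hypothetical helper returning two netidx Values
    let (t, h) = read_sensors().await;
    let mut batch = publisher.start_batch();
    temp.update(&mut batch, t);
    humidity.update(&mut batch, h);
    // a single commit sends both updates
    batch.commit(None).await;
}
```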
Subscribing is equally straightforward:
```rust
use futures::{channel::mpsc, prelude::*};
use netidx::{
    config::Config,
    path::Path,
    // Note: depending on your netidx version, DesiredAuth may instead live in
    // netidx::resolver_client.
    subscriber::{DesiredAuth, Subscriber, UpdatesFlags},
};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let cfg = Config::load_default()?;
    let subscriber = Subscriber::new(cfg, DesiredAuth::Anonymous)?;
    let temp = subscriber.subscribe(
        Path::from("/sensors/lab/temperature")
    ).await?;
    println!("Current temperature: {:?}", temp.last());
    let (tx, mut rx) = mpsc::channel(10);
    temp.updates(UpdatesFlags::empty(), tx);
    while let Some(mut batch) = rx.next().await {
        // each batch is a pooled Vec of (subscription id, event) pairs
        for (_, value) in batch.drain(..) {
            println!("Temperature updated: {:?}", value);
        }
    }
    Ok(())
}
```
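Subscriptions are also bi-directional: a subscriber can write back to a published value, and the publishing process decides whether to accept the write. A minimal sketch under those assumptions (the setpoint path and its semantics are illustrative; the exact module paths for Value and DesiredAuth may vary by netidx version):

```rust
use netidx::{
    config::Config,
    path::Path,
    subscriber::{DesiredAuth, Subscriber, Value},
};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let cfg = Config::load_default()?;
    let subscriber = Subscriber::new(cfg, DesiredAuth::Anonymous)?;
    // a hypothetical setpoint the publisher allows clients to adjust
    let setpoint = subscriber.subscribe(
        Path::from("/sensors/lab/setpoint")
    ).await?;
    // request a change; whether and how it is applied is up to the publisher
    setpoint.write(Value::F64(22.5));
    // in a real program, keep the subscriber alive long enough for the write
    // to be flushed (or await a receipt) before exiting
    Ok(())
}
```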
Netidx shines in scenarios where many programs need to share live, continuously updating data across a network.
Try the examples (zero configuration required):
```sh
# Clone the repo
git clone https://github.com/estokes/netidx.git
cd netidx

# Run the basic pub/sub example
cargo run --example simple_publisher    # Terminal 1
cargo run --example simple_subscriber   # Terminal 2
```
See examples/README.md for 12+ examples covering everything from basic pub/sub to distributed clustering and time-series archiving.
Install the tools:

```sh
cargo install netidx-tools
```

Start a resolver server (like DNS for netidx):

```sh
netidx resolver-server --config /path/to/resolver.json
```

Explore with the browser:

```sh
netidx browser
```

Add to your project:

```sh
cargo add netidx
```
See the netidx book for complete documentation, tutorials, and deployment guides.
Netidx has three components:
```
┌──────────────┐      ┌──────────────┐      ┌──────────────┐
│  Subscriber  │─────▶│   Resolver   │◀─────│  Publisher   │
│              │      │    Server    │      │              │
└──────────────┘      └──────────────┘      └──────────────┘
        │                                           │
        │           Direct TCP Connection           │
        └───────────────────────────────────────────┘
                      (data flows here)
```
Unlike traditional message brokers, the resolver only stores addresses. Data flows directly from publishers to subscribers over TCP, eliminating the broker as a bottleneck and single point of failure. Resolver servers can be replicated and federated to provide fault tolerance, load sharing, and site autonomy.
Great fit:
Maybe not:
netidx-container - NoSQL database (Redis-like)
Need persistence and guaranteed delivery? netidx-container provides Redis-like NoSQL storage that integrates seamlessly with the rest of the netidx ecosystem.
The key difference from Redis or MQTT: it's not a required central component; you add it only where your architecture actually needs persistence.
This architectural flexibility means you're not forced to choose between performance and reliability: you can have both where you need them.
netidx-archive - Time-series event logging and replay
Need to record and replay historical data? netidx-archive provides high-performance event logging with smart storage management.
Real-world example: market data logging with 2TB of local flash for frequently accessed periods, automatically fetching from S3 when users request older history.
| Feature | Netidx | MQTT | Redis Pub/Sub | gRPC |
|---|---|---|---|---|
| Broker required | No | Yes | Yes | No |
| Hierarchical namespace | Yes | Yes (topics) | Limited | No |
| Discovery/browsing | Yes | No | No | Via reflection |
| Bi-directional | Yes | No | No | Yes (streaming) |
| Guaranteed delivery | Optional* | Yes | No | No |
| Persistence | Optional* | Depends | Yes | No |
| Authorization | Central | Broker-level | Broker-level | Custom |
| Type system | Rich | Opaque | Limited | Protobuf |
| Primary use case | Live data systems | IoT messaging | Caching + pubsub | RPC |
* Use netidx-container and netidx-archive when you need guaranteed delivery and persistence, but unlike MQTT/Redis, they're not required for basic pub/sub.
vs MQTT: No mandatory broker bottleneck, richer types, bi-directional, better discovery. Add persistence only where needed.
vs Redis: Direct connections for low latency, hierarchical organization, stronger authorization. Use netidx-container for Redis-like storage without making it central to your architecture.
vs gRPC: Pub/sub is native (not bolted-on streaming), namespace discovery, connection pooling, optimized for live updates.
MIT - see LICENSE for details.
Contributions welcome! Netidx has been production-tested for years but we're always looking to improve. Check out the issues or jump into Discord to discuss ideas.