| Crates.io | ferrium |
| lib.rs | ferrium |
| version | 0.1.0-beta1 |
| created_at | 2025-07-25 02:14:34.294123+00 |
| updated_at | 2025-07-25 02:14:34.294123+00 |
| description | A distributed KV storage system built with openraft |
| homepage | |
| repository | |
| max_upload_size | |
| id | 1767112 |
| size | 791,676 |
Enterprise Distributed KV Storage with Raft Consensus
Features • Quick Start • Configuration • API • Deploy • Architecture
Ferrium is a production-ready distributed key-value storage system built in Rust using the openraft consensus library. It provides strong consistency guarantees similar to etcd or Consul, with comprehensive configuration management and dual-protocol support (HTTP + gRPC).
git clone https://github.com/your-org/ferrium
cd ferrium
cargo build --release
# Generate a default configuration file
./target/release/ferrium-server --generate-config ferrium.toml
# Validate your configuration
./target/release/ferrium-server --config ferrium.toml --validate-config
# Use the single-node example configuration
./target/release/ferrium-server --config examples/configs/single-node.toml
# Node 1 (Primary)
./target/release/ferrium-server --config examples/configs/cluster-node1.toml
# Node 2 & 3 (update configs with appropriate IDs and addresses)
./target/release/ferrium-server --config node2.toml --id 2 --http-addr 10.0.1.11:8001
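Node 3 follows the same pattern; the addresses below are illustrative. Once all nodes are running, initialize the cluster through the HTTP management API shown in the API section below.
# Node 3 (illustrative addresses)
./target/release/ferrium-server --config node3.toml --id 3 --http-addr 10.0.1.12:8001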
Ferrium features a comprehensive configuration system supporting every aspect of the distributed system.
# List available configuration locations
./target/release/ferrium-server --list-config-paths
# Generate default configuration
./target/release/ferrium-server --generate-config my-config.toml
# Validate configuration before deployment
./target/release/ferrium-server --config my-config.toml --validate-config
# Run with configuration and CLI overrides
./target/release/ferrium-server --config my-config.toml --log-level debug --id 42
The configuration covers all operational aspects: node identity, networking, storage, Raft tuning, and logging. Example configurations:
examples/configs/single-node.toml - Development & testing
examples/configs/cluster-node1.toml - Production cluster setup
examples/configs/high-performance.toml - Optimized for throughput
📖 See CONFIG.md for comprehensive configuration documentation
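If you prefer to start from a hand-written file, the sketch below writes a minimal config using only fields that appear elsewhere in this README ([node] and [logging]); the values are illustrative, and --generate-config remains the way to get the full set of options.
# Sketch: minimal hand-written config (illustrative values)
cat > node1.toml <<'EOF'
[node]
id = 1
http_addr = "0.0.0.0:8001"
grpc_addr = "0.0.0.0:9001"
data_dir = "/var/lib/ferrium"

[logging]
level = "info"
EOF
# Check it before use
./target/release/ferrium-server --config node1.toml --validate-config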
Ferrium provides both HTTP and gRPC APIs for maximum flexibility.
# Write a key-value pair
curl -X POST -H "Content-Type: application/json" \
-d '{"Set":{"key":"mykey","value":"myvalue"}}' \
http://127.0.0.1:8001/write
# Read a value
curl -X POST -H "Content-Type: application/json" \
-d '{"key":"mykey"}' \
http://127.0.0.1:8001/read
# Delete a key
curl -X POST -H "Content-Type: application/json" \
-d '{"Delete":{"key":"mykey"}}' \
http://127.0.0.1:8001/write
# Health check
curl http://127.0.0.1:8001/health
# Cluster metrics
curl http://127.0.0.1:8001/metrics
# Initialize cluster
curl -X POST http://127.0.0.1:8001/init
# Check leader status
curl http://127.0.0.1:8001/is-leader
curl http://127.0.0.1:8001/leader
# Add learner node
curl -X POST -H "Content-Type: application/json" \
-d '{"node_id":2,"rpc_addr":"127.0.0.1:8002","api_addr":"127.0.0.1:8002"}' \
http://127.0.0.1:8001/add-learner
# Change cluster membership
curl -X POST -H "Content-Type: application/json" \
-d '[1,2,3]' \
http://127.0.0.1:8001/change-membership
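Putting the cluster-management endpoints together, here is a minimal sketch that forms a three-node cluster, assuming three local nodes are already running on ports 8001-8003 (rpc_addr and api_addr as in the add-learner example above):
# Initialize node 1 as a single-node cluster
curl -X POST http://127.0.0.1:8001/init
# Add nodes 2 and 3 as learners
curl -X POST -H "Content-Type: application/json" \
  -d '{"node_id":2,"rpc_addr":"127.0.0.1:8002","api_addr":"127.0.0.1:8002"}' \
  http://127.0.0.1:8001/add-learner
curl -X POST -H "Content-Type: application/json" \
  -d '{"node_id":3,"rpc_addr":"127.0.0.1:8003","api_addr":"127.0.0.1:8003"}' \
  http://127.0.0.1:8001/add-learner
# Promote all three nodes to voting members
curl -X POST -H "Content-Type: application/json" \
  -d '[1,2,3]' \
  http://127.0.0.1:8001/change-membership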
The gRPC API provides the same functionality with better performance for service-to-service communication:
# Test gRPC API
./target/release/grpc-client-test
Services available: KV, Management, and the internal Raft service (defined in proto/kv.proto, proto/management.proto, and proto/raft.proto).
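If you have grpcurl installed, you can also poke at the gRPC port directly. This is a sketch, not part of Ferrium's tooling: the first command assumes server reflection is enabled (not verified here); otherwise point grpcurl at the proto files.
# List services via reflection (if enabled)
grpcurl -plaintext 127.0.0.1:9001 list
# Or use the proto definitions from the repository
grpcurl -plaintext -import-path proto -proto kv.proto 127.0.0.1:9001 list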
# Quick start with sensible defaults
./target/release/ferrium-server --config examples/configs/single-node.toml
FROM debian:bullseye-slim
COPY target/release/ferrium-server /usr/local/bin/
COPY production.toml /etc/ferrium/config.toml
EXPOSE 8001 9001
CMD ["ferrium-server", "--config", "/etc/ferrium/config.toml"]
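A sketch of building and running the image defined above; the image name and volume are illustrative, and the data mount assumes the config's data_dir points at /data (as in the Kubernetes example below).
# Build (expects target/release/ferrium-server and production.toml in the build context)
docker build -t ferrium:latest .
# Run, exposing both APIs and persisting data
docker run -d --name ferrium \
  -p 8001:8001 -p 9001:9001 \
  -v ferrium-data:/data \
  ferrium:latest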
apiVersion: v1
kind: ConfigMap
metadata:
  name: ferrium-config
data:
  config.toml: |
    [node]
    id = 1
    http_addr = "0.0.0.0:8001"
    grpc_addr = "0.0.0.0:9001"
    data_dir = "/data"

    [logging]
    level = "info"
    format = "json"
    structured = true
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ferrium
spec:
  serviceName: ferrium
  replicas: 3
  selector:
    matchLabels:
      app: ferrium
  template:
    metadata:
      labels:
        app: ferrium
    spec:
      containers:
        - name: ferrium
          image: ferrium:latest
          command: ["ferrium-server"]
          args: ["--config", "/etc/ferrium/config.toml"]
          ports:
            - containerPort: 8001
              name: http
            - containerPort: 9001
              name: grpc
          volumeMounts:
            - name: config
              mountPath: /etc/ferrium
            - name: data
              mountPath: /data
      volumes:
        - name: config
          configMap:
            name: ferrium-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
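Assuming the manifests above are saved as ferrium.yaml (the filename is illustrative) and that a headless Service named ferrium exists to back serviceName, a rough deployment check looks like:
# Apply the ConfigMap and StatefulSet
kubectl apply -f ferrium.yaml
# Wait for the pods, then probe one node's health endpoint
kubectl rollout status statefulset/ferrium
kubectl port-forward ferrium-0 8001:8001 &
curl http://127.0.0.1:8001/health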
[Unit]
Description=Ferrium Distributed KV Store
After=network.target
[Service]
Type=simple
User=ferrium
Group=ferrium
ExecStart=/usr/local/bin/ferrium-server --config /etc/ferrium/config.toml
Restart=always
RestartSec=10
KillSignal=SIGTERM
TimeoutStopSec=30
# Security
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/ferrium /var/log/ferrium
[Install]
WantedBy=multi-user.target
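A sketch of installing the unit; the ferrium.service filename and production.toml source are assumptions, while the paths match the unit file above.
# Install the binary, config, and unit file
sudo useradd --system --no-create-home ferrium   # skip if the user already exists
sudo install -m 755 target/release/ferrium-server /usr/local/bin/
sudo mkdir -p /etc/ferrium /var/lib/ferrium /var/log/ferrium
sudo cp production.toml /etc/ferrium/config.toml
sudo cp ferrium.service /etc/systemd/system/
sudo chown ferrium:ferrium /var/lib/ferrium /var/log/ferrium
# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable --now ferrium
systemctl status ferrium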
┌─────────────────┐     ┌─────────────────┐
│    HTTP REST    │     │      gRPC       │
│       API       │     │       API       │
└─────────────────┘     └─────────────────┘
         │                       │
         └───────────┬───────────┘
                     │
      ┌─────────────────────────────┐
      │      Management Layer       │
      │   (Cluster & Operations)    │
      └─────────────────────────────┘
                     │
      ┌─────────────────────────────┐
      │         Raft Engine         │
      │      (openraft-based)       │
      └─────────────────────────────┘
                     │
      ┌─────────────────────────────┐
      │       Storage Engine        │
      │      (RocksDB-based)        │
      └─────────────────────────────┘
Ferrium addresses common openraft integration challenges, such as the TypeConfig implementation.
High Throughput Configuration:
[storage]
sync_writes = false
write_buffer_size = 512
block_cache_size = 1024
[raft]
max_append_entries = 1000
max_inflight_requests = 50
High Durability Configuration:
[storage]
sync_writes = true
enable_wal = true
[raft]
snapshot_policy.enable_auto_snapshot = true
Low Latency Configuration:
[raft]
heartbeat_interval = 100
election_timeout_min = 150
[network]
request_timeout = 5000
connect_timeout = 1000
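To get a rough sense of how these profiles behave, one crude check (a sketch, using only the /write endpoint shown earlier) is to time a burst of sequential writes against a running node:
# Time 100 sequential writes (illustrative: single client, local node)
time for i in $(seq 1 100); do
  curl -s -X POST -H "Content-Type: application/json" \
    -d "{\"Set\":{\"key\":\"bench-$i\",\"value\":\"v\"}}" \
    http://127.0.0.1:8001/write > /dev/null
done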
# Get comprehensive metrics
curl http://127.0.0.1:8001/metrics | jq
# Monitor cluster health
watch -n 1 'curl -s http://127.0.0.1:8001/health | jq'
# Check leader status across nodes
for port in 8001 8002 8003; do
echo "Node $port: $(curl -s http://127.0.0.1:$port/is-leader)"
done
# Unit tests
cargo test
# Integration tests
cargo test --test integration
# End-to-end cluster tests
./scripts/test-cluster.sh
# Start 3-node cluster for testing
./scripts/dev-cluster.sh start
# Run tests against cluster
./scripts/dev-cluster.sh test
# Stop cluster
./scripts/dev-cluster.sh stop
src/
├── bin/
│   ├── main.rs                 # Server binary with config system
│   ├── grpc_test.rs            # gRPC API test client
│   └── grpc_client_test.rs     # gRPC integration tests
├── lib.rs                      # Library root
├── config/                     # Configuration system
│   └── mod.rs                  # TOML config, validation, CLI
├── storage/                    # Storage layer
│   └── mod.rs                  # RocksDB integration
├── network/                    # Network & API layer
│   ├── mod.rs                  # HTTP network + management API
│   └── client.rs               # HTTP client library
└── grpc/                       # gRPC implementation
    ├── mod.rs                  # Proto definitions
    └── services/               # gRPC service implementations

examples/
├── configs/                    # Example configurations
│   ├── single-node.toml        # Development setup
│   ├── cluster-node1.toml      # Production cluster
│   └── high-performance.toml   # Performance-optimized

proto/                          # Protocol buffer definitions
├── kv.proto                    # KV service definitions
├── management.proto            # Management service
└── raft.proto                  # Raft internal protocols
"No leader found" errors:
# Check cluster status
curl http://127.0.0.1:8001/leader
curl http://127.0.0.1:8001/metrics
# Verify node connectivity
curl http://127.0.0.1:8002/health
curl http://127.0.0.1:8003/health
Configuration errors:
# Validate your configuration
ferrium-server --config my-config.toml --validate-config
# Check configuration locations
ferrium-server --list-config-paths
Performance issues:
# Use high-performance configuration
ferrium-server --config examples/configs/high-performance.toml
# Monitor metrics for bottlenecks
watch -n 1 'curl -s http://127.0.0.1:8001/metrics | jq .current_leader'
# Enable comprehensive debugging
RUST_LOG=ferrium=debug,openraft=debug ferrium-server --config debug.toml
# JSON structured logging for analysis
ferrium-server --config production.toml --log-level debug --format json
Advanced Features
Operations & Monitoring
Developer Experience
We welcome contributions! Please see our contributing guidelines: create a feature branch (git checkout -b feature/amazing-feature) and make sure the tests pass (cargo test) before opening a pull request.
# Clone and set up the development environment
git clone https://github.com/your-org/ferrium
cd ferrium
# Install development dependencies
cargo install cargo-watch
cargo install cargo-nextest
# Run tests in watch mode
cargo watch -x "nextest run"
The Ferrium logo is available in multiple formats for different use cases:
docs/assets/logo.svg - Main logo for light backgrounds (README, documentation)
docs/assets/logo-dark.svg - Optimized version for dark themes and backgrounds
docs/assets/logo-icon.svg - Compact icon version for favicons and small displays
<!-- For websites and documentation -->
<img src="docs/assets/logo.svg" alt="Ferrium" width="400"/>
<!-- For dark themes -->
<img src="docs/assets/logo-dark.svg" alt="Ferrium" width="400"/>
<!-- For favicons (convert to PNG/ICO as needed) -->
<img src="docs/assets/logo-icon.svg" alt="Ferrium" width="32"/>
The logo combines the iron/metal theme (Ferrium = iron) with distributed systems concepts, featuring connected nodes and modern gradients that represent the robust, interconnected nature of the system. The subtitle uses high-contrast colors to ensure readability against the metal accent elements.
This project is licensed under the MIT OR Apache-2.0 license.
📖 For detailed configuration options, see CONFIG.md
🚀 Ready to build distributed systems? Start with ferrium-server --generate-config ferrium.toml