| Crates.io | ipfrs |
| lib.rs | ipfrs |
| version | 0.1.0 |
| created_at | 2025-12-06 04:32:09.84122+00 |
| updated_at | 2026-01-18 21:18:29.552446+00 |
| description | Next-generation distributed file system with content-addressing, semantic search, and logic programming |
| homepage | https://github.com/cool-japan/ipfrs |
| repository | https://github.com/cool-japan/ipfrs |
| max_upload_size | |
| id | 1969616 |
| size | 601,239 |
Main library crate for IPFRS (Inter-Planet File RUST System).
ipfrs is the unified entry point for the IPFRS ecosystem: a single crate that brings together all IPFRS components behind a simple, ergonomic interface:
```rust
use ipfrs::Node;

// Start a node
let node = Node::new(config).await?;

// Add content
let cid = node.add_file("path/to/file").await?;

// Retrieve content
let data = node.get(cid).await?;

// Semantic search
let results = node.search_similar("neural networks", 10).await?;

// TensorLogic inference
let solutions = node.infer("knows(alice, ?X)").await?;
```
Use IPFRS as a library in your application. The architecture is extensible:
```text
ipfrs (Main Library)
├── Node     # Unified node orchestrator
├── Builder  # Configuration builder
├── Events   # Event system
└── Plugins  # Plugin registry
         ↓
  All ipfrs-* crates
```
```rust
use ipfrs::Node;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize with defaults
    let node = Node::builder()
        .with_storage("sled")
        .with_network_mode("public")
        .build()
        .await?;

    // Add content
    let cid = node.add_bytes(b"Hello, IPFRS!").await?;
    println!("Added with CID: {}", cid);

    // Retrieve it
    let data = node.get_bytes(&cid).await?;
    println!("Retrieved: {}", String::from_utf8(data)?);

    Ok(())
}
```
```rust
use ipfrs::{Node, Config, StorageBackend, NetworkMode};

let config = Config::builder()
    .storage(StorageBackend::ParityDb)
    .network_mode(NetworkMode::Public)
    .cache_size_mb(2048)
    .max_connections(1000)
    .enable_tensorlogic()
    .enable_semantic_search()
    .build()?;

let node = Node::new(config).await?;
```
```rust
use ipfrs::{Node, Event};

let mut node = Node::new(config).await?;

// Subscribe to events
let mut events = node.subscribe();

tokio::spawn(async move {
    while let Some(event) = events.recv().await {
        match event {
            Event::BlockAdded(cid) => println!("Added: {}", cid),
            Event::PeerConnected(peer) => println!("Peer: {}", peer),
            Event::InferenceComplete(result) => println!("Result: {:?}", result),
            _ => {}
        }
    }
});
```
```rust
use ipfrs::{Node, Plugin, Cid, Block};

struct MyPlugin;

impl Plugin for MyPlugin {
    fn on_block_add(&self, cid: &Cid, block: &Block) {
        // Custom logic on block addition
    }
}

let node = Node::builder()
    .add_plugin(Box::new(MyPlugin))
    .build()
    .await?;
```
Control which components to include:
```toml
[dependencies]
ipfrs = { version = "0.3.0", features = ["full"] }

# Or selectively enable features:
ipfrs = { version = "0.3.0", features = ["storage", "network", "tensorlogic"] }
```
Available features:

- `full` - All features enabled
- `storage` - Storage layer
- `network` - P2P networking
- `transport` - Data exchange protocols
- `semantic` - Vector search
- `tensorlogic` - TensorLogic integration
- `interface` - HTTP/gRPC APIs
- `cli` - Command-line interface

| Metric | Kubo (Go) | IPFRS (Rust) |
|---|---|---|
| Memory (Idle) | 200 MB | 20 MB |
| Memory (Active) | 800 MB | 150 MB |
| Startup Time | 5s | 0.5s |
| Block Add (1MB) | 50ms | 5ms |
| Block Get (1MB) | 30ms | 3ms |
- `ipfrs-core` - Core primitives
- `ipfrs-storage` - Storage layer
- `ipfrs-network` - Networking
- `ipfrs-transport` - Data exchange
- `ipfrs-semantic` - Vector search
- `ipfrs-tensorlogic` - TensorLogic
- `ipfrs-interface` - APIs
- `tokio` - Async runtime