| Crates.io | linch |
| lib.rs | linch |
| version | 0.6.3 |
| created_at | 2025-09-30 01:30:56.874117+00 |
| updated_at | 2025-11-10 15:06:12.244727+00 |
| description | In development |
| homepage | |
| repository | https://github.com/zach-schoenberger/linch |
| max_upload_size | |
| id | 1860460 |
| size | 6,345,729 |
Linch is a high-performance async/sync channel library for Rust that prioritizes simplicity and efficiency. While performance is important, Linch's primary focus is providing a clean, straightforward implementation that's easy to understand and use, with seamless communication between sync and async contexts.
Etymology: The name "linch" is short for "lined channel" - like a concrete-lined channel that provides a smooth, reliable pathway for water flow. Similarly, Linch provides a smooth, reliable pathway for data flow between different parts of your application.
Add this to your Cargo.toml:
```toml
[dependencies]
linch = "0.6"
```
```rust
use linch::bounded;

// Create a bounded channel with capacity 10
let (sender, receiver) = bounded(10);

// Send synchronously
sender.send(42).unwrap();

// Receive synchronously
let value = receiver.recv().unwrap();
assert_eq!(value, 42);
```
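The `.unwrap()` calls above discard the error cases. Per the API reference, linch follows the standard library's channel error conventions: sending fails once all receivers are dropped, and receiving fails once all senders are dropped. A runnable sketch of that contract, written against `std::sync::mpsc` rather than linch itself:

```rust
use std::sync::mpsc::sync_channel;

fn main() {
    // Bounded std channel, analogous to linch's bounded(capacity).
    let (tx, rx) = sync_channel::<i32>(10);
    tx.send(42).unwrap();
    assert_eq!(rx.recv().unwrap(), 42);

    // Dropping the receiver makes subsequent sends fail (SendError).
    drop(rx);
    assert!(tx.send(7).is_err());

    // Dropping all senders makes recv fail (RecvError).
    let (tx2, rx2) = sync_channel::<i32>(10);
    drop(tx2);
    assert!(rx2.recv().is_err());

    println!("error contract holds");
}
```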
```rust
use linch::bounded;

#[tokio::main]
async fn main() {
    let (sender, receiver) = bounded(10);

    // Send asynchronously
    sender.send_async(42).await.unwrap();

    // Receive asynchronously
    let value = receiver.recv_async().await.unwrap();
    assert_eq!(value, 42);
}
```
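A bounded channel applies backpressure once its buffer is full: a synchronous `send` blocks, and an async `send_async` waits, until a receiver drains a slot. The same full-buffer behavior can be observed without blocking via `std::sync::mpsc::sync_channel`'s `try_send` (std shown here as an analog, not linch's API):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Capacity 2: the third non-blocking send reports Full.
    let (tx, rx) = sync_channel::<i32>(2);
    tx.try_send(1).unwrap();
    tx.try_send(2).unwrap();
    assert!(matches!(tx.try_send(3), Err(TrySendError::Full(3))));

    // Draining one slot makes room again.
    assert_eq!(rx.recv().unwrap(), 1);
    tx.try_send(3).unwrap();
    println!("backpressure demonstrated");
}
```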
Linch provides two channel implementations to suit different use cases:
Main Channel (`linch::channel`)

The primary implementation focuses on simplicity and correctness with efficient async handling:
```rust
use linch::bounded;

let (tx, rx) = bounded(100);
```
When to use:
Characteristics:
SChannel (`linch::schannel`)

A high-throughput implementation optimized for maximum performance:
```rust
use linch::schannel;

let (tx, rx) = schannel::with_capacity(100);
```
When to use:
Characteristics:
One of Linch's key strengths is enabling seamless communication between sync and async contexts:
```rust
use linch::bounded;
use std::thread;

#[tokio::main]
async fn main() {
    let (tx, rx) = bounded(10);

    // Spawn a synchronous thread (e.g., CPU-intensive work)
    let tx_clone = tx.clone();
    thread::spawn(move || {
        for i in 0..5 {
            // Synchronous send from thread
            tx_clone.send(format!("Processed item {}", i)).unwrap();
            // Simulate work
            thread::sleep(std::time::Duration::from_millis(100));
        }
    });

    // Receive asynchronously in async context without blocking
    for _ in 0..5 {
        let value = rx.recv_async().await.unwrap();
        println!("Async task received: {}", value);
        // Can continue with other async work here
    }
}
```
You can also send from async contexts to sync contexts:
```rust
use linch::bounded;
use std::thread;

#[tokio::main]
async fn main() {
    let (tx, rx) = bounded(10);

    // Spawn a sync thread that receives
    let handle = thread::spawn(move || {
        while let Ok(value) = rx.recv() {
            println!("Sync thread received: {}", value);
        }
    });

    // Send from async context
    for i in 0..5 {
        tx.send_async(format!("Async message {}", i)).await.unwrap();
        // Can do other async work between sends
        tokio::time::sleep(tokio::time::Duration::from_millis(50)).await;
    }

    drop(tx); // Close channel
    handle.join().unwrap();
}
```
Linch also supports selecting over multiple channels:

```rust
use linch::{bounded, Select};

#[tokio::main]
async fn main() {
    let (tx1, rx1) = bounded(1);
    let (tx2, rx2) = bounded(1);

    // Send to both channels
    tx1.send("Hello").unwrap();
    tx2.send("World").unwrap();

    // Select from multiple receivers
    let mut sel = Select::new();
    let idx1 = sel.recv(&rx1);
    let idx2 = sel.recv(&rx2);

    let op = sel.select();
    match op.index() {
        i if i == idx1 => {
            let msg = op.recv(&rx1).unwrap();
            println!("Received from channel 1: {}", msg);
        },
        i if i == idx2 => {
            let msg = op.recv(&rx2).unwrap();
            println!("Received from channel 2: {}", msg);
        },
        _ => unreachable!(),
    }
}
```
The schannel implementation supports conversion to async streams:
```rust
use linch::schannel;
use futures::StreamExt;

#[tokio::main]
async fn main() {
    let (tx, rx) = schannel::with_capacity(10);

    // Send some values
    for i in 0..5 {
        tx.send(i).unwrap();
    }
    drop(tx); // Close the channel

    // Convert to stream and collect
    let values: Vec<_> = rx.into_stream().collect().await;
    println!("Collected: {:?}", values);
}
```
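`into_stream` is the async counterpart of draining a closed channel into a collection. The synchronous version of the same close-then-collect pattern, shown with `std::sync::mpsc`'s `IntoIterator` impl rather than linch:

```rust
use std::sync::mpsc::sync_channel;

fn main() {
    let (tx, rx) = sync_channel::<i32>(10);
    for i in 0..5 {
        tx.send(i).unwrap();
    }
    drop(tx); // Close the channel so iteration terminates.

    // The receiver's iterator yields values until the channel is closed.
    let values: Vec<_> = rx.into_iter().collect();
    assert_eq!(values, vec![0, 1, 2, 3, 4]);
    println!("Collected: {:?}", values);
}
```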
Linch is designed with performance in mind, but simplicity comes first. The implementations are:
The crate includes comprehensive benchmarks comparing against other channel implementations:
```sh
# Run all benchmarks
cargo bench

# Run specific benchmark categories
cargo bench congestion
cargo bench realistic_workload
```
See the benchmark guide for detailed performance analysis.
Linch prioritizes:
The goal is to provide a channel implementation that's both easy to use and fast enough for most applications, with particular focus on making sync/async interoperability effortless.
Main channel:

- `bounded(capacity)` - Create a bounded channel
- `Sender::send(item)` - Send synchronously
- `Sender::send_async(item)` - Send asynchronously
- `Receiver::recv()` - Receive synchronously
- `Receiver::recv_async()` - Receive asynchronously

SChannel:

- `schannel::bounded(capacity)` - Create a high-throughput channel
- `Sender::send_async(item)` - Send asynchronously with active polling
- `Receiver::recv_async()` - Receive asynchronously with active polling
- `Receiver::into_stream()` - Convert to async stream

Both the main channel and schannel support select operations:
Main Channel Select:
- `Select::new()` - Create a new select operation
- `Select::recv(receiver)` - Add receiver to select
- `Select::select()` - Wait for any operation to complete

SChannel Select:
- `schannel::Select::new()` - Create a new select operation
- `schannel::Select::recv(receiver)` - Add receiver to select
- `schannel::Select::send(sender)` - Add sender to select
- `schannel::Select::select()` - Wait for any operation to complete
- `schannel::Select::try_select()` - Non-blocking select
- `schannel::Select::select_timeout(timeout)` - Select with timeout

Both implementations use standard Rust error types:
- `SendError<T>` - Returned when all receivers are dropped
- `RecvError` - Returned when all senders are dropped
- `SendTimeoutError<T>` - Returned on send timeout
- `RecvTimeoutError` - Returned on receive timeout

Contributions are welcome! Please feel free to submit a Pull Request. The project values:
This project is licensed under the MIT License - see the LICENSE file for details.