| Field | Value |
|---|---|
| Crates.io | ferrompi |
| lib.rs | ferrompi |
| version | 0.1.0 |
| created_at | 2026-01-13 17:14:50.687248+00 |
| updated_at | 2026-01-13 17:14:50.687248+00 |
| description | A thin wrapper for Rust to access the C API of MPICH / Open MPI |
| homepage | |
| repository | https://github.com/rjmalves/ferrompi |
| max_upload_size | |
| id | 2040695 |
| size | 146,342 |
Lightweight Rust bindings for MPI 4.x with persistent collectives support.
FerroMPI provides Rust bindings to MPI through a thin C wrapper layer, enabling access to MPI 4.0+ features like persistent collectives that are not available in other Rust MPI bindings.
| Feature | FerroMPI | rsmpi |
|---|---|---|
| MPI Version | 4.1 | 3.1 |
| Persistent Collectives | ✅ | ❌ |
| Large Count (>2³¹ elements) | ✅ | ❌ |
| API Style | Minimal, focused | Comprehensive |
| C Wrapper | ~700 lines | None (direct bindings) |
FerroMPI is ideal for applications that need MPI 4.0+ features such as persistent collectives and large-count operations behind a small, focused API.
Add to your `Cargo.toml`:

```toml
[dependencies]
ferrompi = "0.1"
```
Ubuntu/Debian:

```bash
sudo apt install mpich libmpich-dev
```

macOS:

```bash
brew install mpich
```
```rust
use ferrompi::{Mpi, ReduceOp};

fn main() -> ferrompi::Result<()> {
    let mpi = Mpi::init()?;
    let world = mpi.world();

    let rank = world.rank();
    let size = world.size();
    println!("Hello from rank {} of {}", rank, size);

    // Sum across all ranks
    let sum = world.allreduce_scalar(rank as f64, ReduceOp::Sum)?;
    println!("Rank {}: sum = {}", rank, sum);

    Ok(())
}
```
```bash
cargo build --release
mpiexec -n 4 ./target/release/my_program
```
```rust
use ferrompi::{Mpi, ReduceOp};

let mpi = Mpi::init()?;
let world = mpi.world();

// Broadcast
let mut data = vec![0.0; 100];
if world.rank() == 0 {
    data.fill(42.0);
}
world.broadcast_f64(&mut data, 0)?;

// All-reduce
let send = vec![1.0; 100];
let mut recv = vec![0.0; 100];
world.allreduce_f64(&send, &mut recv, ReduceOp::Sum)?;

// Gather
let my_data = vec![world.rank() as f64];
let mut gathered = vec![0.0; world.size() as usize];
world.gather_f64(&my_data, &mut gathered, 0)?;
```
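
The remaining blocking collectives follow the same pattern. Below is a minimal sketch of `scatter_f64` and `allgather_f64`, assuming their buffer/root argument order mirrors `gather_f64` (check the generated docs for the exact signatures):

```rust
use ferrompi::Mpi;

let mpi = Mpi::init()?;
let world = mpi.world();
let chunk = 100;

// Scatter: rank 0 splits one large buffer into equal chunks, one per rank
// (argument order assumed: send buffer, recv buffer, root rank).
let send = if world.rank() == 0 {
    vec![1.0; chunk * world.size() as usize]
} else {
    Vec::new()
};
let mut my_chunk = vec![0.0; chunk];
world.scatter_f64(&send, &mut my_chunk, 0)?;

// Allgather: every rank contributes its chunk and receives all chunks
// (argument order assumed: send buffer, recv buffer).
let mut all_chunks = vec![0.0; chunk * world.size() as usize];
world.allgather_f64(&my_chunk, &mut all_chunks)?;
```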
```rust
use ferrompi::{Mpi, ReduceOp, Request};

let mpi = Mpi::init()?;
let world = mpi.world();

let send = vec![1.0; 1000];
let mut recv = vec![0.0; 1000];

// Start nonblocking operation
let request = world.iallreduce_f64(&send, &mut recv, ReduceOp::Sum)?;

// Do other work while communication proceeds...
expensive_computation();

// Wait for completion
request.wait()?;
// recv now contains the result
```
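
Several nonblocking operations can be in flight at once, each tracked by its own `Request`. The sketch below only uses the calls shown above; `expensive_computation` again stands in for local work:

```rust
use ferrompi::{Mpi, ReduceOp};

let mpi = Mpi::init()?;
let world = mpi.world();

let a = vec![1.0; 1000];
let b = vec![2.0; 1000];
let mut sum_a = vec![0.0; 1000];
let mut max_b = vec![0.0; 1000];

// Start two independent reductions, overlap them with computation,
// then wait on each request separately.
let req_a = world.iallreduce_f64(&a, &mut sum_a, ReduceOp::Sum)?;
let req_b = world.iallreduce_f64(&b, &mut max_b, ReduceOp::Max)?;

expensive_computation();

req_a.wait()?;
req_b.wait()?;
// sum_a holds the element-wise sums, max_b the element-wise maxima
```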
```rust
use ferrompi::{Mpi, ReduceOp};

let mpi = Mpi::init()?;
let world = mpi.world();

// Buffer used for all iterations
let mut data = vec![0.0f64; 1000];

// Initialize ONCE
let mut persistent = world.bcast_init_f64(&mut data, 0)?;

// Use MANY times - amortizes setup cost!
for iter in 0..10000 {
    if world.rank() == 0 {
        data.fill(iter as f64);
    }
    persistent.start()?;
    persistent.wait()?;
    // data contains broadcast result on all ranks
}
// Cleanup on drop
```
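
The same init/start/wait pattern applies to persistent allreduce. A sketch, assuming `allreduce_init_f64` (listed in the collectives table below) takes the same send/recv/op arguments as `allreduce_f64` and returns a `PersistentRequest`:

```rust
use ferrompi::{Mpi, ReduceOp};

let mpi = Mpi::init()?;
let world = mpi.world();

let send = vec![1.0f64; 1000];
let mut recv = vec![0.0f64; 1000];

// Initialize ONCE (signature assumed to mirror allreduce_f64)
let mut persistent = world.allreduce_init_f64(&send, &mut recv, ReduceOp::Sum)?;

// Reuse MANY times
for _ in 0..10_000 {
    persistent.start()?;
    persistent.wait()?;
    // recv holds the element-wise global sum on every rank
}
```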
| Type | Description |
|---|---|
| `Mpi` | MPI environment handle (init/finalize) |
| `Communicator` | MPI communicator wrapper |
| `Request` | Nonblocking operation handle |
| `PersistentRequest` | Persistent operation handle (MPI 4.0+) |
| Operation | Blocking | Nonblocking | Persistent |
|---|---|---|---|
| Broadcast | `broadcast_f64` | `ibroadcast_f64` | `bcast_init_f64` |
| Reduce | `reduce_f64` | - | - |
| Allreduce | `allreduce_f64` | `iallreduce_f64` | `allreduce_init_f64` |
| Gather | `gather_f64` | - | - |
| Allgather | `allgather_f64` | - | - |
| Scatter | `scatter_f64` | - | - |
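
`reduce_f64` differs from `allreduce_f64` only in that the result lands on a single root rank. A sketch, assuming the argument order `(send, recv, op, root)`:

```rust
use ferrompi::{Mpi, ReduceOp};

let mpi = Mpi::init()?;
let world = mpi.world();

// Each rank contributes its rank number, repeated 10 times.
let send = vec![world.rank() as f64; 10];
let mut recv = vec![0.0; 10];

// Only rank 0 receives the combined result (argument order assumed).
world.reduce_f64(&send, &mut recv, ReduceOp::Sum, 0)?;
if world.rank() == 0 {
    println!("element-wise sums: {:?}", recv);
}
```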
The available reduction operations map directly to their MPI counterparts:

```rust
pub enum ReduceOp {
    Sum,  // MPI_SUM
    Max,  // MPI_MAX
    Min,  // MPI_MIN
    Prod, // MPI_PROD
}
```
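
The enum plugs into any of the reduction calls above; for instance, the `allreduce_scalar` call from the quick start computes global extrema just by switching the operation:

```rust
use ferrompi::{Mpi, ReduceOp};

let mpi = Mpi::init()?;
let world = mpi.world();

// Global maximum and minimum of a per-rank value.
let local = world.rank() as f64;
let global_max = world.allreduce_scalar(local, ReduceOp::Max)?;
let global_min = world.allreduce_scalar(local, ReduceOp::Min)?;
assert_eq!(global_max, (world.size() - 1) as f64);
assert_eq!(global_min, 0.0);
```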
```bash
# Build examples
cargo build --release --examples

# Run hello world
mpiexec -n 4 ./target/release/examples/hello_world

# Run all examples
mpiexec -n 4 ./target/release/examples/allreduce
mpiexec -n 4 ./target/release/examples/nonblocking
mpiexec -n 4 ./target/release/examples/persistent_bcast
mpiexec -n 4 ./target/release/examples/pi_monte_carlo
```
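
The bundled `pi_monte_carlo` example is not reproduced here, but the overall pattern is simple: every rank samples points locally and a single `allreduce_scalar` combines the hit counts. A self-contained sketch of that idea, using a small hand-rolled RNG rather than an extra dependency (the shipped example may differ):

```rust
use ferrompi::{Mpi, ReduceOp};

fn main() -> ferrompi::Result<()> {
    let mpi = Mpi::init()?;
    let world = mpi.world();

    // Tiny LCG, seeded differently on each rank so the samples differ.
    let mut state = 0x9E37_79B9_7F4A_7C15u64.wrapping_mul(world.rank() as u64 + 1);
    let mut next = || {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (state >> 11) as f64 / (1u64 << 53) as f64
    };

    // Count samples that fall inside the unit quarter-circle.
    let samples: u64 = 1_000_000;
    let mut hits = 0u64;
    for _ in 0..samples {
        let (x, y) = (next(), next());
        if x * x + y * y <= 1.0 {
            hits += 1;
        }
    }

    // Sum hit counts across all ranks; every rank receives the total.
    let total_hits = world.allreduce_scalar(hits as f64, ReduceOp::Sum)?;
    let total_samples = (samples * world.size() as u64) as f64;
    if world.rank() == 0 {
        println!("pi ~= {}", 4.0 * total_hits / total_samples);
    }
    Ok(())
}
```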
| Variable | Description | Example |
|---|---|---|
| `MPI_PKG_CONFIG` | pkg-config name | `mpich`, `ompi` |
| `MPICC` | MPI compiler wrapper | `/opt/mpich/bin/mpicc` |
| `CRAY_MPICH_DIR` | Cray MPI installation | `/opt/cray/pe/mpich/8.1.25` |
FerroMPI automatically detects MPI installations via:

1. The `MPI_PKG_CONFIG` environment variable
2. Common pkg-config names (`mpich`, `ompi`, `mpi`)
3. `mpicc -show` output
4. `CRAY_MPICH_DIR` (for Cray systems)

```bash
# Check if MPI is installed
which mpiexec
mpiexec --version

# Set pkg-config name explicitly
export MPI_PKG_CONFIG=mpich
cargo build
```
Persistent collectives require MPI 4.0+. Check your MPI version:

```bash
mpiexec --version
# MPICH Version: 4.2.0   ✅
# Open MPI 5.0.0         ✅
# MPICH Version: 3.4.2   ❌ (too old)
```
If the MPICH shared library cannot be found at runtime on macOS, point `DYLD_LIBRARY_PATH` at the Homebrew installation:

```bash
export DYLD_LIBRARY_PATH=$(brew --prefix mpich)/lib:$DYLD_LIBRARY_PATH
```
```text
┌─────────────────────────┐
│ Rust Application        │
├─────────────────────────┤
│ ferrompi (Safe Rust)    │
├─────────────────────────┤
│ ffi.rs (bindings)       │
├─────────────────────────┤
│ ferrompi.c (C layer)    │  ← ~700 lines
├─────────────────────────┤
│ MPICH / Open MPI        │
└─────────────────────────┘
```
The C layer (`ferrompi.c`, roughly 700 lines) wraps the MPI C API in a small, stable set of functions that the Rust side binds against.
Licensed under:
Contributions welcome! Please ensure:
- Tests pass under `mpiexec -n 4`
- Code is formatted and lint-clean (`cargo fmt`, `cargo clippy`)

FerroMPI was inspired by: