rad

Crates.io: rad
lib.rs: rad
version: 0.5.0
source: src
created_at: 2017-06-11 22:11:44.107065
updated_at: 2017-09-25 21:54:06.244765
description: A type-safe, high-level interface to librados using the low-level C bindings from ceph-rust.
homepage: https://github.com/sdleffler/rad-rs/
repository: https://github.com/sdleffler/rad-rs/
max_upload_size:
id: 18615
size: 67,260
Shea Leffler (sdleffler)

documentation: https://docs.rs/rad/

README


rad: High-level Rust library for interfacing with RADOS

This library provides a type-safe and extremely high-level Rust interface to RADOS, the Reliable Autonomic Distributed Object Store. It uses the raw C bindings from ceph-rust.

Installation

To build and use this library, a working installation of the Ceph librados development files is required. On systems with apt-get, they can be installed like so:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
sudo apt-add-repository "deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main"
sudo apt-get update
sudo apt-get install librados-dev

N.B. Luminous is the current Ceph release. This library will not work correctly or as expected with earlier releases of Ceph/librados (Jewel or earlier; Kraken is fine).

For more information on installing Ceph packages, see the Ceph documentation.

Examples

Connecting to a cluster

The following shows how to connect to a RADOS cluster by providing a path to a ceph.conf file, a path to the client.admin keyring, and requesting to connect as the admin user. This API bears little resemblance to the bare-metal librados API, but it is easy to trace what's happening under the hood: ConnectionBuilder::with_user or ConnectionBuilder::new allocates a new rados_t; read_conf_file calls rados_conf_read_file; conf_set calls rados_conf_set; and connect calls rados_connect.

use rad::ConnectionBuilder;

let cluster = ConnectionBuilder::with_user("admin")?
    .read_conf_file("/etc/ceph.conf")?
    .conf_set("keyring", "/etc/ceph.client.admin.keyring")?
    .connect()?;

The type returned from .connect() is a Cluster handle, a wrapper around a rados_t that guarantees rados_shutdown is called on the connection when the handle is dropped.
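
Because shutdown is tied to the handle being dropped, the connection's lifetime can be controlled with ordinary scoping. A minimal sketch, using only the calls shown above (the configuration paths are the same placeholders as in the example):

use rad::ConnectionBuilder;

{
    let cluster = ConnectionBuilder::with_user("admin")?
        .read_conf_file("/etc/ceph.conf")?
        .conf_set("keyring", "/etc/ceph.client.admin.keyring")?
        .connect()?;

    // Use `cluster` here: open pool contexts, read and write objects, etc.

} // `cluster` is dropped at the end of this block, which calls rados_shutdown
  // on the wrapped rados_t.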

Writing a file to a cluster with synchronous I/O

use std::fs::File;
use std::io::Read;

use rad::ConnectionBuilder;

let cluster = ConnectionBuilder::with_user("admin")?
    .read_conf_file("/etc/ceph.conf")?
    .conf_set("keyring", "/etc/ceph.client.admin.keyring")?
    .connect()?;

// Read in bytes from some file to send to the cluster.
let mut file = File::open("/path/to/file")?;
let mut bytes = Vec::new();
file.read_to_end(&mut bytes)?;

let pool = cluster.get_pool_context("rbd")?;

pool.write_full("object-name", &bytes)?;

// Our file is now in the cluster! We can check for its existence:
assert!(pool.exists("object-name")?);

// And we can also check that it contains the bytes we wrote to it.
let mut bytes_from_cluster = vec![0u8; bytes.len()];
let bytes_read = pool.read("object-name", &mut bytes_from_cluster, 0)?;
assert_eq!(bytes_read, bytes_from_cluster.len());
assert!(bytes_from_cluster == bytes);
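
Since read takes a byte offset as its last argument, partial reads work the same way. A minimal sketch that reads back only the second half of the object (treating the offset as a u64 is an assumption here):

// Read only the second half of the object by passing a non-zero offset.
let offset = bytes.len() as u64 / 2;
let mut second_half = vec![0u8; bytes.len() - offset as usize];
let bytes_read = pool.read("object-name", &mut second_half, offset)?;
assert_eq!(bytes_read, second_half.len());
assert_eq!(&second_half[..], &bytes[offset as usize..]);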

Writing multiple objects to a cluster with asynchronous I/O and futures-rs

rad-rs also supports the librados AIO interface, using the futures crate. This example will start NUM_OBJECTS writes concurrently and then wait for them all to finish.

use std::fs::File;
use std::io::Read;

use futures::{stream, Future, Stream};
use rand::{Rng, SeedableRng, XorShiftRng};

use rad::ConnectionBuilder;

const NUM_OBJECTS: usize = 8;

let cluster = ConnectionBuilder::with_user("admin")?
    .read_conf_file("/etc/ceph.conf")?
    .conf_set("keyring", "/etc/ceph.client.admin.keyring")?
    .connect()?;

let pool = cluster.get_pool_context("rbd")?;

stream::iter_ok((0..NUM_OBJECTS)
    .map(|i| {
        let bytes: Vec<u8> = XorShiftRng::from_seed([i as u32 + 1, 2, 3, 4])
            .gen_iter::<u8>()
            .take(1 << 16)
            .collect();

        let name = format!("object-{}", i);

        pool.write_full_async(name, &bytes)
    }))
    .buffer_unordered(NUM_OBJECTS)
    .collect()
    .wait()?;

Running tests

Integration tests against a demo cluster are provided. The test suite (which is admittedly a little bare at the moment) uses Docker and a container derived from the Ceph ceph/demo container to bring a small Ceph cluster online locally. A script is provided for launching the test suite:

./tests/run-all-tests.sh

Launching the test suite requires Docker to be installed.

License

This project is licensed under the Mozilla Public License, version 2.0.
