Crates.io | efflux |
lib.rs | efflux |
version | 2.0.1 |
source | src |
created_at | 2018-09-07 04:46:22.272892 |
updated_at | 2019-01-15 23:31:46.164785 |
description | Easy MapReduce and Hadoop Streaming interfaces in Rust |
repository | https://github.com/whitfin/efflux |
id | 83372 |
size | 34,806 |
Efflux is a set of Rust interfaces for MapReduce and Hadoop Streaming. It enables Rust developers to run batch jobs on Hadoop infrastructure while retaining the efficiency and safety they're used to.
Initially written to scratch a personal itch, this crate offers simple traits that mask the internals of working with Hadoop Streaming and lend themselves well to writing jobs quickly. Work is handed off to macros where possible to provide compile-time guarantees, and everything else is kept deliberately simple to minimize overhead.
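For context, the Hadoop Streaming contract that these traits hide is just line-oriented standard input and output, with keys and values separated by a tab character. A hand-rolled mapper written directly against that contract looks roughly like the sketch below, which uses only the standard library and is not the efflux API:

use std::io::{self, BufRead, Write};

// A hand-rolled Hadoop Streaming mapper: read lines from stdin and
// emit tab-separated key/value pairs on stdout. This is the plumbing
// that efflux's traits and macros handle for you.
fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let stdout = io::stdout();
    let mut out = stdout.lock();

    for line in stdin.lock().lines() {
        let line = line?;
        // wordcount-style: emit each word as a key with a count of 1
        for word in line.split_whitespace() {
            writeln!(out, "{}\t1", word)?;
        }
    }
    Ok(())
}

With efflux, that plumbing disappears and a job only has to implement its own mapping and reducing logic.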
Efflux is available on crates.io as a library crate, so you only need to add it as a dependency:
[dependencies]
efflux = "2.0"
You can then gain access to everything relevant using the prelude module of Efflux:
use efflux::prelude::*;
Efflux comes with a handy template to help generate new projects, using the kickstart tool. Simply run the commands below and follow the prompts to generate a new project skeleton:
# install kickstart
$ cargo install kickstart
# create a project from the template
$ kickstart -s examples/template https://github.com/whitfin/efflux
If you'd rather not use the templating tool, you can always work from the examples found in this repository. A good place to start is the traditional wordcount example.
Testing your binaries is actually fairly simple, as you can simulate the Hadoop phases using a basic UNIX pipeline. The following example replicates the Hadoop job flow and generates output that matches a job executed with Hadoop itself:
# example Hadoop task invocation
$ hadoop jar hadoop-streaming-2.8.2.jar \
-input <INPUT> \
-output <OUTPUT> \
-mapper <MAPPER> \
-reducer <REDUCER>
# example simulation run via UNIX utilities
$ cat <INPUT> | <MAPPER> | sort -k1,1 | <REDUCER> > <OUTPUT>
You can check this with the wordcount example to confirm that the outputs are indeed the same. Output may differ in some cases, but the simulation should be good enough for most purposes.
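The sort -k1,1 stage stands in for Hadoop's shuffle and sort phase, which guarantees that the reducer sees all values for a given key on consecutive lines. A reducer written directly against the streaming contract relies on that ordering; again as a standard-library-only sketch rather than the efflux API:

use std::io::{self, BufRead, Write};

// A hand-rolled Hadoop Streaming reducer: input arrives as sorted
// "key\tvalue" lines, so all values for a key sit on adjacent lines
// and can be folded into a running total.
fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let stdout = io::stdout();
    let mut out = stdout.lock();

    let mut current = String::new();
    let mut total: u64 = 0;
    let mut seen_any = false;

    for line in stdin.lock().lines() {
        let line = line?;
        let mut parts = line.splitn(2, '\t');
        let key = parts.next().unwrap_or("");
        let count: u64 = parts.next().unwrap_or("0").trim().parse().unwrap_or(0);

        if seen_any && key != current {
            // key changed: emit the total for the previous key
            writeln!(out, "{}\t{}", current, total)?;
            total = 0;
        }

        current = key.to_string();
        total += count;
        seen_any = true;
    }

    if seen_any {
        // emit the total for the final key
        writeln!(out, "{}\t{}", current, total)?;
    }
    Ok(())
}

Piping sample text through this mapper and reducer pair with the simulation command above yields the familiar word/count output.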