arroyo-udf-plugin

Crates.io: arroyo-udf-plugin
lib.rs: arroyo-udf-plugin
version: 0.2.0
source: src
created_at: 2024-04-20 03:10:18.188681
updated_at: 2024-07-02 18:30:53.202497
description: Plugin interface for Arroyo UDFs
homepage: https://arroyo.dev
repository: https://github.com/ArroyoSystems/arroyo
max_upload_size:
id: 1214303
size: 12,533
owner: Micah Wylde (mwylde)
documentation:

README

Arroyo

Arroyo Cloud | Getting started | Docs | Discord | Website

Arroyo is dual-licensed under the Apache 2.0 and MIT licenses. PRs welcome!

Arroyo is a distributed stream processing engine written in Rust, designed to efficiently perform stateful computations on streams of data. Unlike traditional batch processors, streaming engines can operate on both bounded and unbounded sources, emitting results as soon as they are available.

In short: Arroyo lets you ask complex questions of high-volume real-time data with subsecond results.

[Animation: a running job]

Features

🦀 SQL and Rust pipelines

🚀 Scales up to millions of events per second

🪟 Stateful operations like windows and joins (see the SQL sketch after this list)

🔥 State checkpointing for fault-tolerance and recovery of pipelines

🕒 Timely stream processing via the Dataflow model
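
To make the windows-and-joins bullet concrete, here is a minimal sketch of a stateful, windowed aggregation in Arroyo SQL. The events table and its user_id column are hypothetical, and tumble() is used as a windowing function in the spirit of Arroyo's SQL dialect; treat this as an illustration rather than a copy-paste pipeline.

-- Count events per user over one-minute tumbling windows.
-- 'events' is a hypothetical source table registered via a connector.
SELECT
    user_id,
    count(*) AS events_per_minute
FROM events
GROUP BY
    user_id,
    tumble(interval '1 minute');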

Use cases

Some example use cases include:

  • Detecting fraud and security incidents
  • Real-time product and business analytics
  • Real-time ingestion into your data warehouse or data lake
  • Real-time ML feature generation

Why Arroyo

There are already a number of streaming engines out there, including Apache Flink, Spark Streaming, and Kafka Streams. Why create a new one?

  • Serverless operations: Arroyo pipelines are designed to run in modern cloud environments, supporting seamless scaling, recovery, and rescheduling.
  • High-performance SQL: SQL is a first-class concern, with consistently excellent performance.
  • Designed for non-experts: Arroyo cleanly separates the pipeline APIs from its internal implementation. You don't need to be a streaming expert to build real-time data pipelines.

Getting Started

You can get started with a single-node Arroyo cluster by running the following Docker command:

$ docker run -p 8000:8000 ghcr.io/arroyosystems/arroyo-single:latest

Or, if you have Cargo installed, you can use the arroyo CLI:

$ cargo install arroyo
$ arroyo start

Then, load the Web UI at http://localhost:8000.

For a more in-depth guide, see the getting started guide.

Once you have Arroyo running, follow the tutorial to create your first real-time pipeline.
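
As a purely illustrative example of the kind of pipeline the tutorial builds, the sketch below reads from a Nexmark-style demo source and counts events in ten-second windows. The nexmark connector name, its event_rate option, and the tumble() windowing function are assumptions based on Arroyo's documentation; follow the tutorial for the exact syntax supported by your version.

-- Create a demo source backed by a data generator
-- (connector name and options are assumptions; see the tutorial).
CREATE TABLE nexmark WITH (
    connector = 'nexmark',
    event_rate = '10'
);

-- Count generated events in ten-second tumbling windows.
SELECT count(*) AS events
FROM nexmark
GROUP BY tumble(interval '10 seconds');

Pipelines are created by submitting SQL like this through the Web UI or the API; the tutorial walks through previewing the results and running the pipeline.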

Developing Arroyo

We love contributions from the community! See the developer setup guide to get started, and reach out to the team on Discord or create an issue.

Community

Arroyo Cloud

Don't want to self-host? Arroyo Systems provides fully-managed cloud hosting for Arroyo. Sign up here.
