iceoryx2-ffi-macros

Crate:       iceoryx2-ffi-macros
Version:     0.4.1
Description: iceoryx2: [internal] helper proc-macros for ffi
Homepage:    https://iceoryx.io
Repository:  https://github.com/eclipse-iceoryx/iceoryx2
Published:   2024-09-28
Owner:       Christian Eltzschig (elfenpiff)

README

iceoryx2 - Zero-Copy Lock-Free IPC Purely Written In Rust

  1. Introduction
  2. Documentation
  3. Performance
  4. Getting Started
    1. Publish Subscribe
    2. Events
    3. Custom Configuration
  5. Supported Platforms
  6. Language Bindings
  7. Commercial Support
  8. Thanks To All Contributors

Introduction

Welcome to iceoryx2, an efficient, ultra-low-latency inter-process communication middleware. This library provides fast and reliable zero-copy and lock-free inter-process communication mechanisms.

If you want to communicate efficiently between multiple processes or applications, iceoryx2 is for you. With iceoryx2, you can:

  • Send huge amounts of data using a publish/subscribe, request/response (planned), pipeline (planned) or blackboard pattern (planned), making it ideal for scenarios where large datasets need to be shared.
  • Exchange signals through events, enabling quick and reliable signaling between processes.

iceoryx2 is based on a service-oriented architecture (SOA) and facilitates seamless inter-process communication (IPC).

It offers versatile messaging patterns: whether you need publish-subscribe, events, or upcoming features such as request-response, pipelines, and blackboard, iceoryx2 has you covered.

One of the features of iceoryx2 is its consistently low transmission latency regardless of payload size, ensuring a predictable and reliable communication experience.
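
To illustrate what this means in practice, here is a minimal sketch that publishes a larger fixed-size payload with the same API used in the Getting Started examples below. The BigChunk type and the service name "My/Big/Payload" are illustrative, assuming a plain #[repr(C)] struct without heap pointers is a valid payload type, as in the repository's more advanced publish-subscribe examples.

use iceoryx2::prelude::*;

// illustrative fixed-size payload; zero-copy transfer assumes a
// self-contained type without heap pointers
#[derive(Debug)]
#[repr(C)]
struct BigChunk {
    len: u64,
    data: [u8; 16384],
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;

    let service = node.service_builder(&"My/Big/Payload".try_into()?)
        .publish_subscribe::<BigChunk>()
        .open_or_create()?;

    let publisher = service.publisher_builder().create()?;

    // the sample lives in shared memory; `send` hands it to subscribers
    // without copying the payload
    let sample = publisher.loan_uninit()?;
    let sample = sample.write_payload(BigChunk { len: 3, data: [0; 16384] });
    sample.send()?;

    Ok(())
}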

iceoryx2's origins can be traced back to iceoryx. By overcoming past technical debts and refining the architecture, iceoryx2 enables the modularity we've always desired.

In the near future, iceoryx2 is poised to support at least the same feature set and platforms as iceoryx, ensuring a seamless transition and offering enhanced capabilities for your inter-process communication needs. So, if you're looking for lightning-fast, cross-platform communication that doesn't compromise on performance or modularity, iceoryx2 is your answer.

Documentation

The documentation can be found at:

Language   Documentation Link
C          https://iceoryx2.readthedocs.io
C++        https://iceoryx2.readthedocs.io
Rust       https://docs.rs/iceoryx2/latest/iceoryx2/

Performance

Comparison Of Mechanisms

[Figure: latency benchmark of different IPC mechanisms]

Benchmark System

  • CPU: Intel i7 13700h
  • OS: Linux 6.10.10-arch1-1 #1 SMP PREEMPT_DYNAMIC
  • Compiler:
    • rustc 1.81.0
    • gcc 14.2.1 20240910

Comparison Of Architectures

[Figure: latency benchmark on different systems]

Getting Started

Publish Subscribe

This minimal example showcases a publisher sending the number 1234 every second, while a subscriber efficiently receives and prints the data.

publisher.rs

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_secs(1);

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // create a node, the entry point to iceoryx2; `ipc::Service` selects
    // inter-process communication
    let node = NodeBuilder::new().create::<ipc::Service>()?;

    // open the service if it already exists, otherwise create it,
    // using `usize` as the payload type
    let service = node.service_builder(&"My/Funk/ServiceName".try_into()?)
        .publish_subscribe::<usize>()
        .open_or_create()?;

    // create a publisher port for this service
    let publisher = service.publisher_builder().create()?;

    // run one cycle per second until the node is asked to terminate
    while let NodeEvent::Tick = node.wait(CYCLE_TIME) {
        // loan uninitialized shared memory, write the payload into it,
        // and deliver it to all subscribers without copying
        let sample = publisher.loan_uninit()?;
        let sample = sample.write_payload(1234);
        sample.send()?;
    }

    Ok(())
}

subscriber.rs

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_secs(1);

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;

    // open or create the same service the publisher uses
    let service = node.service_builder(&"My/Funk/ServiceName".try_into()?)
        .publish_subscribe::<usize>()
        .open_or_create()?;

    // create a subscriber port for this service
    let subscriber = service.subscriber_builder().create()?;

    while let NodeEvent::Tick = node.wait(CYCLE_TIME) {
        // drain all samples that arrived since the last cycle
        while let Some(sample) = subscriber.receive()? {
            println!("received: {:?}", *sample);
        }
    }

    Ok(())
}

This example is a simplified version of the publish-subscribe example. You can execute it by opening two terminals and calling:

Terminal 1:

cargo run --example publish_subscribe_publisher

Terminal 2:

cargo run --example publish_subscribe_subscriber

Events

This minimal example shows how push notifications can be realized between two processes using a service with the event messaging pattern. The listener.rs waits for a notification from the notifier.rs.

notifier.rs

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_secs(1);

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;

    let event = node.service_builder(&"MyEventName".try_into()?)
        .event()
        .open_or_create()?;

    // create a notifier port for the event service
    let notifier = event.notifier_builder().create()?;

    let id = EventId::new(12);
    while let NodeEvent::Tick = node.wait(CYCLE_TIME) {
        // wake up all listeners of this service with the given event id
        notifier.notify_with_custom_event_id(id)?;

        println!("Trigger event with id {:?} ...", id);
    }

    Ok(())
}

listener.rs

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_secs(1);

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;

    let event = node.service_builder(&"MyEventName".try_into()?)
        .event()
        .open_or_create()?;

    // create a listener port for the event service
    let listener = event.listener_builder().create()?;

    // `Duration::ZERO` makes `node.wait` return immediately; the blocking
    // happens in `timed_wait_one` below
    while let NodeEvent::Tick = node.wait(Duration::ZERO) {
        // block for at most CYCLE_TIME and return one pending event id, if any
        if let Ok(Some(event_id)) = listener.timed_wait_one(CYCLE_TIME) {
            println!("event was triggered with id: {:?}", event_id);
        }
    }

    Ok(())
}

listener.rs (grabbing all events at once)

use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_secs(1);

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;

    let event = node.service_builder(&"MyEventName".try_into()?)
        .event()
        .open_or_create()?;

    let listener = event.listener_builder().create()?;

    while let NodeEvent::Tick = node.wait(Duration::ZERO) {
        // block for at most CYCLE_TIME and invoke the callback for every
        // pending event id
        listener.timed_wait_all(
            |event_id| {
                println!("event was triggered with id: {:?}", event_id);
            },
            CYCLE_TIME,
        )?;
    }

    Ok(())
}

This example is a simplified version of the event example. You can execute it by opening two terminals and calling:

Terminal 1:

cargo run --example event_notifier

Terminal 2:

cargo run --example event_listener

Custom Configuration

It is possible to configure default quality-of-service settings, paths, and file suffixes in a custom configuration file. For more details, see the configuration directory in the repository.

Supported Platforms

The support levels can be adjusted when required.

Operating System   State     Current Support Level   Target Support Level
Android            planned   -                       tier 1
FreeBSD            done      tier 2                  tier 1
FreeRTOS           planned   -                       tier 2
iOS                planned   -                       tier 2
Linux (x86_64)     done      tier 2                  tier 1
Linux (aarch64)    done      tier 2                  tier 1
Linux (32-bit)     done      tier 2                  tier 1
Mac OS             done      tier 2                  tier 2
QNX                planned   -                       tier 1
VxWorks            planned   -                       tier 1
WatchOS            planned   -                       tier 2
Windows            done      tier 2                  tier 2

  • tier 1 - All safety and security features are working.
  • tier 2 - Works with a restricted security and safety feature set.
  • tier 3 - Work in progress; might or might not compile and run yet.

Language Bindings

Language   State
C / C++    beta
C#         planned
Go         planned
Java       planned
Kotlin     planned
Lua        planned
Python     planned
Swift      planned
Zig        planned

Commercial Support

ekxide IO GmbH
info@ekxide.io
  • commercial extensions and tooling
  • custom feature development
  • training and consulting
  • integration support
  • engineering services around the iceoryx ecosystem

Thanks To All Contributors

Christian »elfenpiff« Eltzschig
Mathias »elBoberido« Kraus
»orecham«