| Crates.io | ipc-channel-mux |
| lib.rs | ipc-channel-mux |
| version | 0.0.4 |
| created_at | 2025-11-05 11:51:22.169693+00 |
| updated_at | 2026-01-05 17:55:09.594828+00 |
| description | IPC channel multiplexer |
| homepage | |
| repository | https://github.com/glyn/ipc-channel-mux |
| max_upload_size | |
| id | 1917895 |
| size | 197,576 |
ipc-channel-mux[1] is a multiplexing, inter-process implementation of Rust channels (which were inspired by CSP[2]).
A Rust channel is a unidirectional FIFO queue used to pass messages between threads in a single operating system process. For an excellent introduction to Rust channels, see Using Message Passing to Transfer Data Between Threads in the Rust book.
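For a quick recap, here is a standard Rust channel in use between two threads:

```rust
use std::sync::mpsc::channel;
use std::thread;

// A standard Rust channel: a unidirectional FIFO used to pass messages
// between threads within one process.
fn main() {
    let (tx, rx) = channel();
    thread::spawn(move || {
        tx.send("hello from another thread").unwrap();
    });
    assert_eq!(rx.recv().unwrap(), "hello from another thread");
}
```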
ipc-channel-mux extends Rust channels to support inter-process communication (IPC) in a single operating system instance.
ipc-channel-mux multiplexes subchannels over IPC primitives to reduce the consumption of such primitives.
The serde library is used to serialize and deserialize messages sent over ipc-channel-mux.
As much as possible, ipc-channel-mux has been designed to be a drop-in replacement for Rust channels. The mapping from the Rust channel APIs to subchannel APIs is as follows:
- channel() → mux::Channel::new().unwrap().sub_channel()
- Sender<T> → mux::SubSender<T> (requires T: Serialize)
- Receiver<T> → mux::SubReceiver<T> (requires T: Deserialize)

Note that SubSender<T> implements Serialize and Deserialize, so you can send subsenders over subchannels freely, just as you can with Rust channels.
However, you cannot send or receive subreceivers - the reason is explained below.
The easiest way to make your types implement Serialize and Deserialize is to use the serde_macros crate from crates.io as a plugin and then annotate the types you want to send with #[derive(Deserialize, Serialize)]. In many cases, that's all you need to do — the compiler generates all the tedious boilerplate code needed to serialize and deserialize instances of your types.
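As a rough sketch of what that looks like with a subchannel (shown here with serde's derive feature rather than the serde_macros plugin; the use path for the mux module and the exact send/recv signatures are assumptions based on the mapping above):

```rust
use serde::{Deserialize, Serialize};

use ipc_channel_mux::mux; // assumed use path; adjust to how the crate exposes `mux`

// A hypothetical message type: deriving Serialize and Deserialize is all that
// is needed for it to be sendable over a subchannel.
#[derive(Serialize, Deserialize, Debug)]
struct Job {
    id: u64,
    payload: String,
}

fn main() {
    // As in the API mapping above: obtain a subchannel from a multiplexing
    // channel and use it like a standard Rust channel.
    let channel = mux::Channel::new().unwrap();
    let (tx, rx) = channel.sub_channel();
    tx.send(Job { id: 1, payload: "hello".into() }).unwrap();
    let job: Job = rx.recv().unwrap();
    println!("received {:?}", job);
}
```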
ipc-channel-mux provides a one-shot server to help establish a subchannel between two processes. When a one-shot server is created, a server name is generated and returned along with the server.
The client process calls connect(), passing the server name, and this returns the sender end of a subchannel from the client to the server. Note that there is a restriction: connect() may be called at most once per one-shot server.
The server process calls accept() on the server to accept a connect request from a client. accept() blocks until a client has connected to the server and sent a message. It then returns a pair consisting of the receiver end of the subchannel from client to server and the first message received from the client.
So, in order to bootstrap a subchannel between processes, you create an instance of the SubOneShotServer type, pass the resultant server name into the client process (perhaps via an environment variable or command line flag), and connect to the server in the client. See spawn_sub_one_shot_server_client() in multiplex_integration_test.rs for an example of how to do this using a command to spawn the client process.
Let's look at the two ways of creating a channel: directly constructing a channel and using a one-shot server.
Creating a subchannel requires a multiplexing IPC channel to be created first:
```rust
let channel = mux::Channel::new().unwrap();
// ...
let (tx, rx) = channel.sub_channel();
```
Multiplexing one-shot servers are used like this:
```rust
let (server, server_name) = mux::SubOneShotServer::new().unwrap();
// ...
let tx = mux::SubSender::connect(server_name).unwrap(); // Typically in another process
let (rx, data) = server.accept().unwrap();
```
An advantage of creating a subchannel, rather than an IPC channel, using a one-shot server is that the subchannel can then be used to transmit subsenders.[3]
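As a sketch of the bootstrapping described above (the client binary name, the environment variable name, and the use path for the mux module are placeholders, and the message type is assumed to be String):

```rust
use std::process::Command;

use ipc_channel_mux::mux; // assumed use path; adjust to how the crate exposes `mux`

// Server process: create the one-shot server and pass its name to the client
// via an environment variable (a command line flag would work too).
// "client-executable" and "MUX_SERVER_NAME" are placeholders.
fn server() {
    let (server, server_name) = mux::SubOneShotServer::new().unwrap();

    let mut child = Command::new("client-executable")
        .env("MUX_SERVER_NAME", &server_name)
        .spawn()
        .unwrap();

    // Blocks until the client has connected and sent its first message.
    let (_rx, first): (mux::SubReceiver<String>, String) = server.accept().unwrap();
    println!("first message from client: {}", first);

    child.wait().unwrap();
}

// Client process: connect to the server using the name passed in the environment.
fn client() {
    let server_name = std::env::var("MUX_SERVER_NAME").unwrap();
    let tx = mux::SubSender::connect(server_name).unwrap();
    tx.send("hello from the client".to_string()).unwrap();
}
```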
The router routes messages from subreceivers to Crossbeam channels. This allows receiving code to utilise Crossbeam features.
The router is in the mux::subchannel_router module.
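The router's API is not reproduced here. As an illustration of why routing to Crossbeam channels is useful, the sketch below bridges a subreceiver to a crossbeam-channel receiver by hand (the mux::subchannel_router module does this bridging for you) and then waits on two such receivers with select!:

```rust
use crossbeam_channel::{select, unbounded, Receiver};
use std::thread;

use ipc_channel_mux::mux; // assumed use path

// Hand-rolled illustration of the routing idea: a thread drains a subreceiver
// and forwards each message into a Crossbeam channel. This assumes the
// subreceiver can be moved to another thread and that recv() mirrors the
// standard channel API, returning an error once the sending side hangs up.
fn route_to_crossbeam(sub_rx: mux::SubReceiver<String>) -> Receiver<String> {
    let (cb_tx, cb_rx) = unbounded();
    thread::spawn(move || {
        while let Ok(msg) = sub_rx.recv() {
            if cb_tx.send(msg).is_err() {
                break;
            }
        }
    });
    cb_rx
}

fn wait_for_either(rx_a: Receiver<String>, rx_b: Receiver<String>) {
    // Once messages are on Crossbeam channels, features such as select! are available.
    select! {
        recv(rx_a) -> msg => println!("a: {:?}", msg),
        recv(rx_b) -> msg => println!("b: {:?}", msg),
    }
}
```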
send() never blocks.

IPC channels are provided by Servo's ipc-channel crate, which ipc-channel-mux uses for IPC communication.
Readers familiar with ipc-channel may be experiencing some déjà vu at this point since ipc-channel-mux is built on top of ipc-channel and has a similar API.
The main difference is that ipc-channel-mux multiplexes subchannels over the IPC channels provided by ipc-channel.
We'll now explore when it's worth using ipc-channel-mux instead of ipc-channel.
First, it's important to note some other differences between the two kinds of channel:
To replace an IPC channel with a subchannel and get some benefit, it is necessary to either:
Using a one-shot server to create a subchannel means that only that one subchannel can be multiplexed over the underlying IPC channel. So, to replace an IPC one-shot server with a multiplexed one-shot server and get some benefit, it is necessary to either:
connect()) and the receiving process (the one which called accept()), or

ipc-channel-mux is packaged in its own repository and crate, separate from ipc-channel.
This has the following advantages:
- ipc-channel-mux are kept separate from those of IPC channel.
- ipc-channel-mux using the public API of IPC channel makes the projects easier to understand than if they were combined.
- ipc-channel-mux and keep enhancing it and experimenting with applying it to other Servo use cases without giving it the (possibly misleading) status of being part of the IPC channel API. In particular, the multiplexing API can be changed as necessary without impacting backwards compatibility of IPC channel.

One possible disadvantage is that ipc-channel-mux cannot use IPC channel internals, which would have been possible if they were in the same repository.
Another disadvantage is that Servo will require an additional dependency.
However, it would be feasible to merge ipc-channel-mux into the IPC channel repository later.
To run the tests, issue:
cargo test
Linux is the reference platform for ipc-channel-mux, meaning that bugs encountered on other platforms should be reproduced on Linux so that a complete set of regression tests is available on Linux.
ipc-channel-mux uses the log crate to produce log messages when logging is enabled for one or more processes.
You can emit these log messages from an executable by setting the environment variable RUST_LOG to debug or, for more detail, trace. For example:
RUST_LOG=debug someexecutable
If you want to see the log messages from a test, pass the --nocapture flag to the test executable, e.g.
RUST_LOG=trace cargo test mux_test::multiplex_simple -- --nocapture
Note: RUST_LOG is not automatically propagated between processes, so you have to ensure this is done if you want to enable logging for launched processes.
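For example, one way to enable logging in a launched process is to set RUST_LOG explicitly when spawning it (the child executable name and the chosen level below are placeholders):

```rust
use std::process::Command;

// Set RUST_LOG explicitly on the child command so the launched process
// produces log messages regardless of the parent's environment.
fn spawn_with_logging() -> std::io::Result<std::process::Child> {
    Command::new("child-executable")
        .env("RUST_LOG", "debug")
        .spawn()
}
```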
For more information, see Configure Logging in The Rust Cookbook.
ipc-channel-mux multiplexes its subchannels over IPC channels provided by ipc-channel which is implemented in terms of native IPC primitives: file descriptor passing over Unix sockets on Unix variants, Mach ports on macOS, and named pipes on Windows.
Multiplexed one-shot servers are implemented using IPC channel one-shot servers. One-shot server names are implemented as file system paths on Unix variants (with the path bound to the socket) or as other kinds of generated names on macOS and Windows.
The following sections describe the principles of multiplexing subchannels over IPC channels and some of the design considerations.
Each subchannel needs a separate identifier. This identifier is used to tag messages for that subchannel before they are sent over the IPC channel underlying the subchannel. On message receipt, the subchannel id is used to route the message to the appropriate subchannel.
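The crate's actual wire format is not shown here, but the tagging principle can be pictured as an envelope along these lines (the type and field names are illustrative assumptions):

```rust
use serde::{Deserialize, Serialize};

// Illustrative only: a tagged envelope that a multiplexer could send over the
// underlying IPC channel. The subchannel id lets the receiving side route the
// payload to the right subreceiver.
type SubchannelId = u64;

#[derive(Serialize, Deserialize)]
enum MuxMessage {
    // A serialized application message destined for one subchannel.
    Data { sub_id: SubchannelId, payload: Vec<u8> },
    // Sent when a subsender and all its clones in one process have been dropped.
    Disconnect { sub_id: SubchannelId },
}
```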
Generally, sends are non-blocking (but see below) so the main blocking consideration is for receives. A receive on a subchannel may have to receive from the underlying IPC channel, unless the message has already been received (and placed on a standard Rust channel corresponding to the subchannel receiver).
On subchannel receive, we first of all issue a non-blocking receive (try_recv) on the corresponding standard channel. If this returns a message, we can return the message as the result of subchannel receive.
If the corresponding standard channel is empty, we can safely issue a blocking receive on the IPC channel underlying the multi-receiver. (This wouldn't be true if the code supported multi-threading.)
Once a message is received, we can re-try the non-blocking receive on the standard channel to see if a message has been received for the subreceiver. If not, we can block again on the IPC channel.
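A minimal sketch of this receive algorithm, using a standard channel to stand for the one backing a subreceiver and a closure to stand for the blocking receive-and-route step on the underlying IPC channel (both names and the error type are assumptions for the sketch):

```rust
use std::sync::mpsc;

// Assumed error type: what a subchannel receive returns once the sending side
// has hung up.
enum RecvError {
    Disconnected,
}

// `local` is the standard channel backing one subreceiver;
// `recv_and_route_from_ipc` blocks on the underlying IPC channel and routes
// each tagged message onto the standard channel of the matching subreceiver.
fn sub_recv<T>(
    local: &mpsc::Receiver<T>,
    mut recv_and_route_from_ipc: impl FnMut() -> Result<(), RecvError>,
) -> Result<T, RecvError> {
    loop {
        match local.try_recv() {
            // A message was already routed to this subreceiver.
            Ok(msg) => return Ok(msg),
            // The sender half of the standard channel was dropped (hang-up).
            Err(mpsc::TryRecvError::Disconnected) => return Err(RecvError::Disconnected),
            // Nothing buffered: block on the IPC channel, route whatever
            // arrives, then re-check the standard channel.
            Err(mpsc::TryRecvError::Empty) => recv_and_route_from_ipc()?,
        }
    }
}
```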
In the last section, we mentioned issuing a blocking receive on the IPC channel underlying a multi-receiver. It's actually a little more complicated than that, because we need to poll to detect when in-flight subsenders have been destroyed. We do this by probing the IPC channel used to transmit the subsender: a small message is sent on that IPC channel.
Polling is implemented by issuing a try_recv_timeout on the IPC channel, with a timeout of one second. When the timeout occurs, polling can be initiated and we can then drop the sender half of the standard channel for a subreceiver whose "other half" (meaning the senders for all clients) has hung up. This will cause the non-blocking receive on such standard channels to return with an error and we can then return Disconnected from the corresponding subchannel receives.
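A sketch of one iteration of this polling loop, again with stand-ins (a closure for the timed IPC receive, a predicate for the hang-up probe, and a map from subchannel ids to the sender halves of the backing standard channels; all names are assumptions):

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::time::Duration;

// One iteration of the polling loop described above.
fn poll_once(
    mut try_recv_timeout: impl FnMut(Duration) -> Option<(u64, Vec<u8>)>,
    sender_hung_up: impl Fn(u64) -> bool,
    backing: &mut HashMap<u64, mpsc::Sender<Vec<u8>>>,
) {
    match try_recv_timeout(Duration::from_secs(1)) {
        // A tagged message arrived: route it to the matching subreceiver.
        Some((sub_id, payload)) => {
            if let Some(tx) = backing.get(&sub_id) {
                let _ = tx.send(payload);
            }
        }
        // Timeout: drop the sender halves for subreceivers whose senders have
        // all hung up, so their pending receives return Disconnected.
        None => backing.retain(|id, _| !sender_hung_up(*id)),
    }
}
```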
The receive on the multi-receiver's IPC channel also serves the purpose of detecting Disconnect messages generated when a subsender and all its clones on a particular client (approximately equivalent to an IPC sender) have been dropped. That's another way that the sending side of a subchannel can "hang up", after which a receive from the subchannel should fail with Disconnected.
It turns out that a send to an IPC channel can block when the buffer fills up.
So we have to be careful to take every opportunity to receive messages from IPC channels when we can, for example before generating Disconnect messages when a subsender and all its clones on a particular client have been dropped.
Failure to do this can result in deadlocks. For example, if a process creates a large number of subchannels and then drops them, messages are sent to notify the "other side" that one side has hung up. If these messages are not received, drop of a subsender or subreceiver can block.
This risk of deadlock was present for non-multiplexed IPC channels, but the risk was lower because fewer messages were sent on each IPC channel. With multiplexing, a potentially large number of messages can be sent. Fortunately, a multireceiver will tend to drain messages when receiving on behalf of a subreceiver. Providing that the application code issues receives fairly frequently, the underlying IPC channels shouldn't fill up.
[1] The term mux is an abbreviation for multiplexer.

[2] Tony Hoare conceived Communicating Sequential Processes (CSP) as a concurrent programming language. Stephen Brookes and A.W. Roscoe developed a sound mathematical basis for CSP as a process algebra. CSP can now be used to reason about concurrency and to verify concurrency properties using model checkers such as FDR4. Go channels were also inspired by CSP.

[3] ipc-channel-mux and ipc-channel do not currently interoperate: an IPC channel cannot be used to transmit a subsender, and a subchannel cannot be used to transmit an IPC sender or receiver.

[4] Since subreceivers cannot be transmitted between processes, we expect a subsender created using a mux::Channel instance to be moved or transmitted to another process.

[5] Creating a subchannel could exhaust the memory of a process, but memory allocation is treated as infallible in Rust, as Handling memory exhaustion – State of the art? explores. Essentially, if memory allocation fails, the program will panic or, more likely (at least on Linux), be killed by the Out of Memory killer.

[6] On Unix variants, each time an IPC sender is received from an IPC channel, a file descriptor is consumed, even when the same IPC sender is received multiple times. The file descriptor is reclaimed when the received IPC sender is dropped, so file descriptor exhaustion occurs when too many received IPC senders are retained.

[7] An alternative would be to have the relevant Servo branch use a git dependency on ipc-channel-mux.

[8] cargo test of ipc-channel-mux currently takes just over 2 seconds whereas it used to take over 8 seconds before the multiplexing code was split out of the ipc-channel repo.