Crates.io | vampirc-io |
lib.rs | vampirc-io |
version | 0.3.0 |
source | src |
created_at | 2019-09-17 23:01:33.857734 |
updated_at | 2020-04-12 18:02:43.45248 |
description | A library for asynchronous, non-blocking, UCI protocol–based communication between chess UIs and chess engines. |
homepage | https://vampirc.kejzar.si |
repository | https://github.com/vampirc/vampirc-io.git |
max_upload_size | |
id | 165560 |
size | 22,553 |
vampirc-io is a companion crate to vampirc-uci. While vampirc-uci handles parsing and serialization of UCI messages, vampirc-io handles communication of said messages between the chess client and the chess engine – usually over standard input and standard output.
It does so using the asynchronous facilities of the Rust language, most notably the async-std crate. The loop that reads data from stdin and parses it into a stream of UciMessages, and the loop that writes messages to stdout, both run in an asynchronous, non-blocking way.
Info: Since 0.3.0, this crate no longer requires a nightly Rust build, but it does require Rust 1.39+ for async/await support.
The UCI protocol is a way for a chess engine to communicate with a chessboard GUI, such as Scid vs. PC.
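As a rough, illustrative sketch (using the companion vampirc-uci crate, and assuming the parse function and UciMessage variants shown here behave as in its documentation), the opening exchange of that protocol is the GUI sending "uci" and the engine answering "uciok":
use vampirc_uci::{parse, UciMessage};

// GUI -> engine: the "uci" command asks the engine to switch to UCI mode.
let messages = parse("uci\n");
assert_eq!(messages[0], UciMessage::Uci);

// engine -> GUI: acknowledge UCI mode; UciMessage serializes back to wire text.
println!("{}", UciMessage::UciOk); // prints "uciok"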
The Vampirc Project is a chess engine and chess library suite, written in Rust. It is named for the Slovenian grandmaster Vasja Pirc, and, I guess, vampires? I dunno.
To use the crate, declare a dependency on it in your Cargo.toml file:
[dependencies]
vampirc-io = "0.3"
Then reference the vampirc_io crate in your crate root:
extern crate vampirc_io;
Imports (note that UciMessage comes from the companion vampirc-uci crate, which you will also need to declare as a dependency):
use vampirc_io as vio;
use vampirc_uci::UciMessage;
Create an inbound futures channel - a channel for incoming UCI messages (those coming in from stdin):
let (itx, irx) = vio::new_try_channel();
Create an outbound futures channel - a channel for outgoing UCI messages (those output to stdout):
let (otx, orx) = vio::new_channel();
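The concrete channel types are not spelled out in this README; assuming the outbound sender behaves like a futures unbounded sender (an assumption on my part), queuing an outgoing message could look like the snippet below. The run_std_loops function shown later presumably drains the receiving end (orx) and writes the messages to stdout.
// Hypothetical usage: push a reply onto the outbound channel for the stdout loop to write.
otx.unbounded_send(UciMessage::UciOk).expect("outbound channel closed");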
Write an async function that handles the incoming messages, something like:
// Imports assumed for this example; Engine, MsgHandler, log_message and
// log_error are application-defined (a sketch follows below).
use std::io;
use std::pin::Pin;
use std::sync::Arc;
use futures::stream::{Stream, StreamExt};

async fn process_message(engine: Arc<Engine>, mut msg_stream: Pin<Box<impl Stream<Item = io::Result<UciMessage>>>>, msg_handler: &dyn MsgHandler, msg_sender: &vio::UciSender) {
    while let Some(msg_r) = msg_stream.next().await {
        if let Ok(msg) = msg_r {
            log_message(&msg);
            msg_handler.handle_msg(engine.as_ref(), &msg, msg_sender);
        } else {
            log_error(msg_r.err().unwrap());
        }
    }
}
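Engine, MsgHandler, log_message and log_error in the function above are not part of vampirc-io; they stand in for your own engine code. A minimal, purely illustrative sketch of such items might be:
use std::io;
use vampirc_io as vio;
use vampirc_uci::UciMessage;

// Hypothetical application-side engine state.
pub struct Engine {
    // position, search state, ...
}

// Hypothetical handler trait: react to one parsed UCI message and reply
// through the outbound sender.
pub trait MsgHandler {
    fn handle_msg(&self, engine: &Engine, msg: &UciMessage, sender: &vio::UciSender);
}

fn log_message(msg: &UciMessage) {
    eprintln!("<- {}", msg);
}

fn log_error(err: io::Error) {
    eprintln!("io error: {}", err);
}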
The msg_stream parameter is your stream of incoming messages - the receiving end of the inbound channel, i.e. the irx variable from the inbound channel declaration. msg_sender is the sending end of the outbound channel where you will send your UCI messages, i.e. otx from the outbound channel declaration.
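Since process_message above takes the stream pinned and boxed, you will likely need to adapt irx first. Assuming the receiver returned by new_try_channel implements Stream (an assumption, not something this README states), that could be as simple as:
// Hypothetical adapter: pin and box the raw receiver so its type matches
// the msg_stream parameter of process_message.
let msg_stream = Box::pin(irx);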
And, lastly and most importantly, run the stdin/stdout reading/writing loops (the vio::run_std_loops asynchronous function) and your process_message handler asynchronously in your main function, using the join! macro:
vio::run_future(async {
    // engine and msg_handler are your application's objects from the sketches above.
    vio::join!(vio::run_std_loops(itx, orx), process_message(engine, msg_stream, &msg_handler, &otx));
});
The backing infrastructure is more generic and allows for passing of messages using other Streams and Sinks (for example, a TCP socket), not just stdin and stdout. However, due to all this async stuff being new and at the moment super unstable in Rust, this crate is not currently ready or stable enough to expose the underlying API. Feel free to browse the source, though.
Dependency updates in this release: async-std to 1.5, futures to 0.3, and vampirc-uci to 0.9.