| Field | Value |
|---|---|
| Crates.io | sarchive |
| lib.rs | sarchive |
| version | 0.14.0 |
| source | src |
| created_at | 2019-06-02 22:20:10.813307 |
| updated_at | 2024-12-03 22:11:11.897547 |
| description | Archival tool for slurm job scripts |
| homepage | https://github.com/itkovian/sarchive |
| repository | https://github.com/itkovian/sarchive |
| max_upload_size | |
| id | 138585 |
| size | 190,109 |
Archival tool for scheduler job scripts and accompanying files.
Note that the master branch here may be running ahead of the latest release on crates.io. During development, we sometimes rely on dependencies that have not yet released a version with the features we use.
Minimum supported rustc: 1.70.0

CI tests run against several Rust versions; see the repository's CI configuration for the current list.
If you do not have Rust, please see Rustup for installation instructions.
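For convenience, the usual Rustup installation one-liner and a crates.io install look like this (the cargo install line assumes you want the latest released version rather than a checkout of master):

```sh
# Install Rust via rustup (official installer).
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Install the latest sarchive release from crates.io.
cargo install sarchive
```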
sarchive requires that the path to the scheduler's main spool directory is specified. It also requires a cluster (name) to be set.

sarchive supports multiple schedulers; the one to use must be specified on the command line. Right now, there is support for Slurm and Torque. For Slurm, the directory to watch is the StateSaveLocation defined in the Slurm config.
Furthermore, sarchive offers various backends. The basic file backend writes a copy of the job scripts and associated files to a directory on a mounted filesystem. We also have limited support for sending job information to Elasticsearch or producing to a Kafka topic. We briefly discuss these backends below.
The file backend is activated using the file subcommand. Note that we do not support using multiple subcommands (i.e., backends) at this moment.
For file archival, sarchive
requires the path to the archive's top
directory, i.e., where you want to store the backup scripts and accompanying
files.
The archive can be further divided into subdirectories per period:

- --period=yearly
- --period=monthly
- --period=daily
Each of these directories is also created upon file archival if it does not exist. This allows for easily tarring old(er) directories you still wish to keep around, but probably no longer immediately need for user support. For example,
sarchive --cluster huppel -s /var/spool/slurm file --archive=/var/backups/slurm/job-archive
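To illustrate archiving older period directories, here is a hedged sketch that assumes yearly periods and the archive path from the example above; the subdirectory name (2023) is hypothetical, since the exact naming depends on the chosen period:

```sh
# Tar up last year's job scripts and remove the originals.
# Paths and the 2023 directory name are illustrative assumptions.
tar czf /var/backups/slurm/job-archive-2023.tar.gz \
    -C /var/backups/slurm/job-archive 2023 \
  && rm -rf /var/backups/slurm/job-archive/2023
```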
The Elasticsearch backend will be revamped, as the elastic crate is subject to a vulnerability through its hyper dependency (https://rustsec.org/advisories/RUSTSEC-2021-0078). It will be added again once we can move to the official Elastic.co crate.
You can ship the job scripts as messages to Kafka.
For example,
./sarchive --cluster huppel -l /var/log/sarchive.log -s /var/spool/slurm/ kafka --brokers mykafka.mydomain:9092 --topic slurm-job-archival
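To check that messages actually arrive on the topic, you could consume it with any generic Kafka client; this sketch assumes the kcat CLI is installed and reuses the broker and topic names from the example above:

```sh
# Consume the archival topic from the beginning to inspect messages.
kcat -C -b mykafka.mydomain:9092 -t slurm-job-archival -o beginning
```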
Support for SSL and SASL is available through the --ssl and --sasl options. Both expect a comma-separated list of options to pass to the underlying Kafka library.
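As an illustration, and assuming the options take librdkafka-style key=value pairs (an assumption; check the sarchive documentation for the exact format), an SSL-enabled invocation might look like:

```sh
# Hedged sketch: the key=value option format and certificate paths
# are assumptions, not confirmed sarchive semantics.
./sarchive --cluster huppel -s /var/spool/slurm/ \
  kafka --brokers mykafka.mydomain:9092 --topic slurm-job-archival \
  --ssl ssl.ca.location=/etc/ssl/certs/ca.pem,ssl.certificate.location=/etc/ssl/certs/client.pem,ssl.key.location=/etc/ssl/private/client.key
```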
We provide a build script to generate an RPM using the cargo-rpm tool. You may tailor the spec
file (listed under the .rpm
directory) to fit your needs. The RPM includes a unit file so
sarchive
can be started as a service by systemd. This file should also be changed to fit your
requirements and local configuration.
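For orientation, a minimal sketch of what such a unit file might look like, reusing the example invocation from above; the unit shipped in the RPM is the authoritative version, and the paths and flags here are illustrative:

```ini
# /etc/systemd/system/sarchive.service -- illustrative sketch only;
# adapt ExecStart to your cluster, spool path, and backend.
[Unit]
Description=sarchive - scheduler job script archival
After=network.target

[Service]
ExecStart=/usr/bin/sarchive --cluster huppel -s /var/spool/slurm file --archive=/var/backups/slurm/job-archive
Restart=on-failure

[Install]
WantedBy=multi-user.target
```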