s3m

Crates.io: s3m
lib.rs: s3m
Description: CLI for streams of data in S3 buckets
Homepage: https://s3m.stream
Repository: https://github.com/s3m/s3m
Author: nbari

Documentation: https://github.com/s3m/s3m/blob/master/README.md

README

s3m


s3m is a command-line tool for storing streams of data in S3 buckets.

The problem it tries to solve

Some streams of data cannot be lost, and producing them degrades the performance of the running system. For example, if the stream is a database backup, every run may lock the entire cluster (depending on the options and tools used, e.g. mysqldump or xtrabackup). The effort to create the stream is proportional to the size of the database: in most cases, the bigger the database, the more time, CPU, and memory are required.

In the case of backups, the stream is normally piped to a compression tool and then written to disk. Sometimes writing to the disk where the database lives, or to a remote mount point, is not possible due to size constraints, so the compressed backup must be streamed directly to an S3 bucket (whatever the provider). If the connection is lost while streaming, perhaps just before finishing, the whole backup could be corrupted, and in the worst case everything has to start over again.

The aim of s3m, besides trying to consume fewer resources, is to make the storage of the incoming stream as fault-tolerant as possible, so that even if the server loses network connectivity, the upload can be resumed and continue where it left off without starting all over again (except when the input comes from STDIN/a pipe).
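To illustrate the general mechanism that makes this kind of resuming possible, the sketch below uses S3 multipart uploads via the AWS SDK for Rust: each uploaded part is recorded in a local checkpoint, so a later run can skip the parts that already reached the bucket. This is only a sketch of the underlying S3 API, not s3m's actual implementation; the function name, the pre-split chunks, and the checkpoint vector are assumptions made for the example.

    // Illustrative sketch only: resuming an S3 multipart upload from a checkpoint.
    // Not s3m's implementation; uses the AWS SDK for Rust (aws-sdk-s3) as an example.
    use aws_sdk_s3::primitives::ByteStream;
    use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};
    use aws_sdk_s3::Client;

    async fn upload_resumable(
        client: &Client,
        bucket: &str,
        key: &str,
        chunks: Vec<Vec<u8>>,          // pre-split parts (>= 5 MiB each, except the last)
        done: &mut Vec<CompletedPart>, // checkpoint of parts already uploaded, persisted between runs
        upload_id: &str,               // id returned by create_multipart_upload, saved on the first run
    ) -> Result<(), aws_sdk_s3::Error> {
        for (i, chunk) in chunks.into_iter().enumerate() {
            let part_number = (i + 1) as i32;
            // Skip parts already recorded in the checkpoint: this is what allows
            // resuming after a lost connection instead of starting from zero.
            if done.iter().any(|p| p.part_number() == Some(part_number)) {
                continue;
            }
            let part = client
                .upload_part()
                .bucket(bucket)
                .key(key)
                .upload_id(upload_id)
                .part_number(part_number)
                .body(ByteStream::from(chunk))
                .send()
                .await?;
            // Persist the part number and ETag so a later run can skip this part.
            done.push(
                CompletedPart::builder()
                    .part_number(part_number)
                    .e_tag(part.e_tag().unwrap_or_default())
                    .build(),
            );
        }
        // Once every part is uploaded, complete the multipart upload.
        client
            .complete_multipart_upload()
            .bucket(bucket)
            .key(key)
            .upload_id(upload_id)
            .multipart_upload(
                CompletedMultipartUpload::builder()
                    .set_parts(Some(done.clone()))
                    .build(),
            )
            .send()
            .await?;
        Ok(())
    }

The key point is that S3 tracks the parts server-side by upload id and part number, so only a small local checkpoint (part numbers and ETags) is needed to pick up an interrupted upload where it left off.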

https://s3m.stream
