| | |
|---|---|
| Crates.io | tera-client-packer |
| lib.rs | tera-client-packer |
| version | 0.1.2 |
| source | src |
| created_at | 2023-10-20 22:30:50.589868 |
| updated_at | 2023-10-20 23:51:51.211773 |
| description | A CLI Utility to pack, compress and fragment TERA Online client files |
| homepage | |
| repository | https://github.com/Saegusae/tera-client-packer |
| max_upload_size | |
| id | 1009485 |
| size | 57,153 |
A CLI utility to compress and fragment game client files for TERA Online. It allows users to download and unpack the game faster, saving bandwidth and time. The tool also exposes a Rust library to perform the same actions programmatically.
```
tera-client-packer pack [OPTIONS] <INPUT_DIR>

Arguments:
  <INPUT_DIR>  Parent directory where the client files are located

Options:
  -w, --workers <usize>             Worker count [default: 8]
  -n, --package-name <string>       Output package name [default: client]
  -e, --package-extension <string>  Output package extension [default: cabx]
  -s, --package-size <u64>          Output fragment size in MB [default: 500]
  -o, --output-dir <Path>           Path where package files will be dumped [default: ./packed]
  -c, --compress                    Flag for compression (unused)
```
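For example, packing a client directory into 500 MB fragments with 16 workers (the `./Client` path here is just a placeholder):

```sh
tera-client-packer pack -w 16 -s 500 -o ./packed ./Client
```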
```
tera-client-packer unpack [OPTIONS] <OUTPUT_DIR>

Arguments:
  <OUTPUT_DIR>  Top-level directory where the client files will be unpacked

Options:
  -i, --input-dir <Path>  Input directory where package files are contained [default: ./packed]
  -m, --manifest <Path>   Define a custom manifest file for unpacking [default: ./_manifest.json]
  -w, --workers <usize>   Thread count for multithreaded use [default: 8]
```
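And to restore the client from those fragments (again, paths are illustrative):

```sh
tera-client-packer unpack -i ./packed -w 16 ./Client
```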
The program reads around `package-size * 2` bytes of data for every thread, so if the package size is set to 500 MB, the total memory usage for 8 workers will be around 8-10 GB (2 × 500 MB × 8 workers = 8 GB, plus working overhead).
This is a tool that packs game client files for distribution through a launcher or installer. It was created for TERA Online, but it can be used for pretty much anything.
I've been messing with how Menma's TERA manages installation and patching through their client while learning the Rust language, and noticed a few caveats in their current implementation that puzzled me.
I currently have a working prototype for multi-threaded IO and compression, but it could be better optimized. The program reads files sequentially and queues the buffer for processing every time `package-size` bytes are reached. For memory optimization purposes, the program manages a thread pool of `worker_count + 1` threads and a buffer queue of length `worker_count`, so the amount of memory used at all times stays consistent at around `worker_count * package_size * 2`, which includes both the queued task buffers and the byte streams being processed concurrently. It could probably be more memory-efficient with some other approach, but this is heaps better than running write operations sequentially.
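This bounded-queue shape can be sketched with nothing but the standard library. The following is illustrative rather than the crate's actual code: `process_chunk` is a hypothetical placeholder for the gzip-and-write step, and the buffers are kept tiny so the demo actually runs.

```rust
use std::sync::{mpsc::sync_channel, Arc, Mutex};
use std::thread;

/// Hypothetical stand-in for the real compress-and-write step.
fn process_chunk(id: usize, chunk: Vec<u8>) {
    println!("worker finished fragment {id}: {} bytes", chunk.len());
}

fn main() {
    let worker_count = 8;
    let package_size: usize = 500 * 1024 * 1024; // 500 MB, the default fragment size

    // Bounded queue of `worker_count` buffers: send() blocks once the queue
    // is full, which is what caps memory at roughly
    // worker_count * package_size * 2 (queued buffers + in-flight buffers).
    let (tx, rx) = sync_channel::<(usize, Vec<u8>)>(worker_count);
    let rx = Arc::new(Mutex::new(rx));

    // worker_count processing threads; the reading thread below makes the
    // pool worker_count + 1.
    let workers: Vec<_> = (0..worker_count)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // Take the next queued fragment, or exit when the queue closes.
                let msg = rx.lock().unwrap().recv();
                match msg {
                    Ok((id, chunk)) => process_chunk(id, chunk),
                    Err(_) => break,
                }
            })
        })
        .collect();

    // Sequential reader: in the real tool this would walk the client dir
    // and cut the stream every `package_size` bytes.
    for id in 0..4 {
        let chunk = vec![0u8; package_size.min(4096)]; // tiny demo buffers
        tx.send((id, chunk)).unwrap();
    }
    drop(tx); // close the queue so the workers shut down

    for w in workers {
        w.join().unwrap();
    }
}
```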
All tests were run on client files for patch 100.02, a clean Gameforge release with `ReleaseRevision.txt` MD5 hash `0396410868EDE6E05F8DEDC5142E93EB`, and the `package-size` option set to 500 MB.
| Runtime | Compression | Duration | Result |
|---|---|---|---|
| Single-Threaded | No Compression | 1m37s | 66.9 GB (100.00%) |
| Single-Threaded | Deflate | 2h32m44s | 59.5 GB (88.94%) |
| Multi-Threaded (16 Threads) | Gzip (Defaults) | 37m18s | 58.6 GB (87.29%) |
| Multi-Threaded (16 Threads)\* | Gzip (Defaults) | 3m33s | 58.6 GB (87.29%) |

\* Optimised release target build
| Runtime | Compression | Duration |
|---|---|---|
| Multi-Threaded (16 Threads) | Gzip (Lv. 6) | 5m50s |
The program reads around `package-size * workers * 2.10` MB of memory, so use these options with caution.

… default to the `cwd` and `_manifest.json` respectively.