bert_create_pretraining

[ API doc | crates.io ]

This crate is a Rust port of the original create_pretraining_data.py script from the Google BERT repository.
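Like the original Python script, the port should expect plain-text input in the upstream BERT convention: one sentence per line, with a blank line separating documents (this is an assumption based on the script being ported; check the source or --help for the exact format). A hypothetical input file:

This is the first sentence of the first document.
This is its second sentence.

A new document begins after the blank line.
It also has one sentence per line.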

Installation

Cargo

$ cargo install bert_create_pretraining

Usage

You can use the bert_create_pretraining binary to create the pretraining data for BERT for many input files in parallel. The binary takes the arguments shown below, here driven by find and xargs with one process per text file:

$ find "${DATA_DIR}" -name "*.txt" | xargs -I% -P "${NUM_PROC}" -n 1 \
  basename % | xargs -I% -P "${NUM_PROC}" -n 1 \
  "${TARGET_DIR}/bert_create_pretraining" \
  --input-file="${DATA_DIR}/%" \
  --output-file="${OUTPUT_DIR}/%.tfrecord" \
  --vocab-file="${VOCAB_DIR}/vocab.txt" \
  --max-seq-length=512 \
  --max-predictions-per-seq=75 \
  --masked-lm-prob=0.15 \
  --random-seed=12345 \
  --dupe-factor=5
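
For a single input file, the binary can also be invoked directly. A minimal sketch using the same flags as above, with placeholder paths:

$ bert_create_pretraining \
  --input-file=./data/corpus.txt \
  --output-file=./output/corpus.tfrecord \
  --vocab-file=./vocab/vocab.txt \
  --max-seq-length=512 \
  --max-predictions-per-seq=75 \
  --masked-lm-prob=0.15 \
  --random-seed=12345 \
  --dupe-factor=5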

You can check the full list of options with the following command:

$ bert_create_pretraining --help

License

MIT license. See the LICENSE file for the full license text.
