| | |
|---|---|
| Crates.io | fastq |
| lib.rs | fastq |
| version | 0.6.0 |
| source | src |
| created_at | 2016-11-29 20:15:06.298247 |
| updated_at | 2018-08-10 09:51:07.372856 |
| description | A parser for fastq |
| homepage | |
| repository | https://github.com/aseyboldt/fastq-rs |
| max_upload_size | |
| id | 7408 |
| size | 62,144 |

This library can process fastq files at about the speed of coreutils' `wc -l` (about 2GB/s on my laptop; `seqan` runs at about 150MB/s). It also makes it easy to distribute the processing of fastq records to many cores, without losing much of the performance.
See the documentation for details and examples.
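
For a quick impression of the API, here is a minimal record-counting sketch using `parse_path` and `parallel_each` (see the documentation for the exact signatures; the input path and the thread count below are just placeholders):

```rust
extern crate fastq;

use fastq::{parse_path, Record};

fn main() {
    // "input.fastq" is a placeholder path; `None` would read from stdin instead.
    parse_path(Some("input.fastq"), |parser| {
        // Hand record sets to 4 worker threads and collect one count per worker.
        let counts: Vec<u64> = parser
            .parallel_each(4, |record_sets| {
                let mut count = 0u64;
                for record_set in record_sets {
                    for record in record_set.iter() {
                        // `Record` exposes head(), seq() and qual() as byte slices.
                        if !record.seq().is_empty() {
                            count += 1;
                        }
                    }
                }
                count
            })
            .expect("Invalid fastq file");
        println!("{} records", counts.iter().sum::<u64>());
    })
    .expect("Could not parse file");
}
```

The workers receive whole record sets rather than single records, which is the same batching idea discussed in the notes further down.
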
We compare this library with the fastq parser in rust-bio, the C++ library seqan 2.2.0, with kseq.h, and with `wc -l`. We test four scenarios; the results are summarized in the table below.
All measurements are taken with a 2GB test file (TODO describe!) on a Haswell i7-4510U @ 2GHz. Each program is executed three times (clearing the OS page cache where appropriate) and the best time is used. Libraries without native support for a compression algorithm get the input via a pipe from `zcat` or `lz4 -d`. The C and C++ programs are compiled with gcc 6.2.1 with the flags `-O3 -march=native`. All programs can be found in the `examples` directory of this repository.

| | ramdisk | lz4 | lz4 + thread | gzip | gzip + thread |
|---|---|---|---|---|---|
| `wc -l` | 2.3GB/s | 1.2GB/s | NA | 300MB/s | NA |
| `fastq` | 1.9GB/s | 1.9GB/s | 1.6GB/s | 650MB/s | 620MB/s |
| `rust-bio` | 730MB/s | NA | 250MB/s | NA | NA |
| `seqan` | 150MB/s | NA | NA | NA | NA |
| `kseq.h` | 980MB/s | 680MB/s | NA | NA | NA |

Some notes from checking `perf record`:
* `wc -l` and `fastq` spend most of their time in `memchr()`, but in contrast to `wc`, `fastq` has to check that header lines begin with `@` and separator lines with `+`, and do some more bookkeeping (a sketch of this check follows after this list).
* `lz4 -d` uses a large buffer size (4MB by default), which seems to prevent the operating system from running `lz4` and `wc` concurrently when they are connected by a pipe. `fastq` avoids this problem with an internal queue.
* `rust-bio` loses some time copying data and validating UTF-8. The large slowdown in the threaded version stems from the fact that it sends each record to the other thread individually. Each send (I use a `sync_channel` from the Rust stdlib) requires synchronisation primitives, and three allocations for header, sequence and quality. Collecting records in a `Vec` and only sending once a large number of them is available speeds this up from 150MB/s to 250MB/s (see the batching sketch after this list).
* `seqan` is busy allocating, and uses (I think) a naive implementation of `memchr()` to find line breaks.
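
To make the first note more concrete, here is a simplified sketch of that kind of check (not the actual implementation in this crate): scan for newlines with `memchr` and verify the `@` and `+` markers of each four-line record.

```rust
use memchr::memchr;

/// Count the FASTQ records in `buf`, checking the `@`/`+` markers.
/// Simplified sketch: assumes complete records terminated by `\n`.
fn count_records(mut buf: &[u8]) -> Result<usize, &'static str> {
    let mut count = 0;
    while !buf.is_empty() {
        // Split off the next four lines (header, sequence, separator, quality).
        let mut lines = [&buf[..0]; 4];
        for slot in lines.iter_mut() {
            let pos = memchr(b'\n', buf).ok_or("truncated record")?;
            *slot = &buf[..pos];
            buf = &buf[pos + 1..];
        }
        if lines[0].first() != Some(&b'@') {
            return Err("header line must start with '@'");
        }
        if lines[2].first() != Some(&b'+') {
            return Err("separator line must start with '+'");
        }
        count += 1;
    }
    Ok(count)
}
```

The batching idea from the `rust-bio` note looks roughly like this: instead of pushing every record through a `sync_channel` individually, collect a chunk in a `Vec` and send the whole chunk, so the synchronisation cost is paid once per chunk. The record type and chunk size here are made up for the example.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Placeholder owned record type, just for this example.
#[allow(dead_code)]
struct OwnedRecord {
    head: Vec<u8>,
    seq: Vec<u8>,
    qual: Vec<u8>,
}

fn main() {
    const CHUNK: usize = 10_000;
    // Bounded channel that carries whole chunks instead of single records.
    let (tx, rx) = sync_channel::<Vec<OwnedRecord>>(8);

    let consumer = thread::spawn(move || {
        let mut n = 0usize;
        for chunk in rx {
            n += chunk.len(); // process the whole chunk without further locking
        }
        n
    });

    let mut buf = Vec::with_capacity(CHUNK);
    for i in 0..1_000_000u32 {
        buf.push(OwnedRecord {
            head: format!("@read{}", i).into_bytes(),
            seq: b"ACGT".to_vec(),
            qual: b"IIII".to_vec(),
        });
        if buf.len() == CHUNK {
            // One synchronised send per CHUNK records instead of one per record.
            tx.send(std::mem::replace(&mut buf, Vec::with_capacity(CHUNK))).unwrap();
        }
    }
    if !buf.is_empty() {
        tx.send(buf).unwrap();
    }
    drop(tx); // close the channel so the consumer's loop ends
    println!("consumed {} records", consumer.join().unwrap());
}
```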