| field | value |
|---|---|
| Crates.io | fast_log |
| lib.rs | fast_log |
| version | 1.6.16 |
| source | src |
| created_at | 2020-02-29 13:08:57.416153 |
| updated_at | 2024-03-09 16:48:44.140022 |
| description | Rust async log High-performance asynchronous logging |
| homepage | |
| repository | https://github.com/rbatis/fast_log |
| max_upload_size | |
| id | 213779 |
| size | 85,548 |
fast_log. This crate uses `#![forbid(unsafe_code)]` to ensure everything is implemented in 100% safe Rust.

A log implementation built for extreme speed: a crossbeam channel, batched writes, fast log-date formatting, and an appender-per-thread architecture.

- Low overhead: log writes happen on dedicated threads; tokio/Future is also supported
- High performance: a lock-free message queue holds log records, which are then flushed to disk without blocking the caller
- Full APPEND-mode file writing, efficient on both solid-state and mechanical disks (sequential writes outperform random writes on both)
- When channel pressure increases, logs are written in batches
- Built-in ZIP compression; compressed files are named with the date plus a serial number, so there is no need to worry about log files growing too large
- Built-in log splitting: when the file reaches a custom size limit, it is split immediately
- Built-in filter configuration: logs printed by other libraries can be filtered out
- Support for custom compression algorithms, such as ZIP and LZ4
- Support for `log::logger().flush()` to wait until logs are flushed to disk
- Simple and efficient appender architecture; both configuration and customization are straightforward
- Support for custom file types, such as mmap or plain files
```text
            ----------------------------
log data -> | main channel (crossbeam) | ->
            ----------------------------
                ------------------                          -------------
             -> | thread channel | -> background thread -> | appender1  |
                ------------------                          -------------
                ------------------                          -------------
             -> | thread channel | -> background thread -> | appender2  |
                ------------------                          -------------
                ------------------                          -------------
             -> | thread channel | -> background thread -> | appender3  |
                ------------------                          -------------
                ------------------                          -------------
             -> | thread channel | -> background thread -> | appender4  |
                ------------------                          -------------
```
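The pipeline above can be sketched with the standard library alone. This demo uses `std::sync::mpsc` in place of the crossbeam channel fast_log actually uses, and `run_pipeline` is a name made up for the sketch: callers push lines into a channel and return immediately, while a background thread drains whatever has queued up and hands it to an "appender" in one batch.

```rust
use std::sync::mpsc;
use std::thread;

// Std-only sketch of the channel -> background thread -> appender pipeline.
// Returns how many lines the background thread "appended".
fn run_pipeline(lines: usize) -> usize {
    let (tx, rx) = mpsc::channel::<String>();

    // Background thread: batch everything already queued, then append it.
    let worker = thread::spawn(move || {
        let mut written = 0usize;
        while let Ok(first) = rx.recv() {
            let mut batch = vec![first];
            while let Ok(next) = rx.try_recv() {
                batch.push(next); // batched writes amortize per-line cost
            }
            written += batch.len(); // a real appender would write the batch to disk here
        }
        written // channel closed: all senders dropped
    });

    for i in 0..lines {
        tx.send(format!("log line {i}")).unwrap(); // does not block the caller
    }
    drop(tx); // close the channel so the worker exits
    worker.join().unwrap()
}

fn main() {
    assert_eq!(run_pipeline(1000), 1000);
    println!("all lines delivered in batches");
}
```

Batching is what makes the caller cheap: the hot path is just a channel send, and disk I/O happens off-thread at whatever granularity has accumulated.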
How fast is it?

no flush (`chan_len=1000000`), `benches/log.rs`:

```
//MACOS(Apple M1MAX-32GB)
test bench_log ... bench:          85 ns/iter (+/- 1,800)
```

wait flush (`chan_len=None`), `benches/log.rs`:

```
//MACOS(Apple M1MAX-32GB)
test bench_log ... bench:         323 ns/iter (+/- 0)
```
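As a rough illustration of what the no-flush bench measures (the caller's cost of handing a line to a channel), here is a std-only sketch. It uses `std::sync::mpsc` rather than crossbeam, runs far fewer iterations than a real benchmark harness, and `bench_channel` is a name made up for this demo, so treat the number it prints as unscientific.

```rust
use std::sync::mpsc;
use std::time::Instant;

// Sends `iters` lines into a channel and returns
// (lines delivered, rough average ns per send).
fn bench_channel(iters: usize) -> (usize, u128) {
    let (tx, rx) = mpsc::channel::<String>();
    let start = Instant::now();
    for i in 0..iters {
        tx.send(format!("bench line {i}")).unwrap(); // the only work on the caller's path
    }
    let per_iter = start.elapsed().as_nanos() / iters as u128;
    drop(tx); // close the channel
    (rx.into_iter().count(), per_iter)
}

fn main() {
    let (delivered, ns) = bench_channel(100_000);
    assert_eq!(delivered, 100_000);
    println!("~{ns} ns per send (unscientific, std mpsc, not crossbeam)");
}
```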
Add the dependencies to `Cargo.toml`:

```toml
log = "0.4"
fast_log = { version = "1.5" }
```

Or enable the zip/lz4/gzip compression features:

```toml
log = "0.4"
# optional compression features: "lz4", "zip", "gzip"
fast_log = { version = "1.5", features = ["lz4", "zip", "gzip"] }
```
`chan_len(Some(100000))`

Preallocating channel memory reduces the overhead of memory allocation, for example:

```rust
use fast_log::config::Config;

fn main() {
    fast_log::init(Config::new().file("target/test.log").chan_len(Some(100000))).unwrap();
    log::info!("Commencing yak shaving{}", 0);
}
```
Use the console appender:

```rust
use fast_log::config::Config;

fn main() {
    fast_log::init(Config::new().console().chan_len(Some(100000))).unwrap();
    log::info!("Commencing yak shaving{}", 0);
}
```
Write raw output with `fast_log::print`:

```rust
use fast_log::config::Config;

fn main() {
    fast_log::init(Config::new().console().chan_len(Some(100000))).unwrap();
    fast_log::print("Commencing print\n".into());
}
```
Use the file appender:

```rust
use fast_log::config::Config;
use log::info;

fn main() {
    fast_log::init(Config::new().file("target/test.log").chan_len(Some(100000))).unwrap();
    log::info!("Commencing yak shaving{}", 0);
    info!("Commencing yak shaving");
}
```
Split and compress log files:

```rust
use fast_log::config::Config;
use fast_log::consts::LogSize;
use fast_log::plugin::file_split::RollingType;
use fast_log::plugin::packer::LogPacker;
use log::info;

#[test]
pub fn test_file_compaction() {
    fast_log::init(
        Config::new()
            .console()
            .chan_len(Some(100000))
            .file_split(
                "target/logs/",
                LogSize::MB(1),
                RollingType::All,
                LogPacker {},
            ),
    )
    .unwrap();
    for _ in 0..200000 {
        info!("Commencing yak shaving");
    }
    log::logger().flush();
}
```
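Conceptually, `file_split` appends to the current file and, once it exceeds the size limit, renames it with a serial number and starts a fresh file; fast_log then hands each rolled file to the packer, which is where ZIP/LZ4 compression happens. Below is a std-only sketch of that rotation step. `SplitWriter` is a hypothetical name for the demo, not a fast_log type, and the compression step is left out.

```rust
use std::fs::{self, File, OpenOptions};
use std::io::Write;
use std::path::PathBuf;

// Size-based rotation sketch: append until `limit` bytes, then roll.
struct SplitWriter {
    dir: PathBuf,
    limit: u64,
    serial: u32, // rolled files get a serial number in the name
    file: File,
}

impl SplitWriter {
    fn open_current(dir: &PathBuf) -> std::io::Result<File> {
        OpenOptions::new().create(true).append(true).open(dir.join("temp.log"))
    }

    fn new(dir: PathBuf, limit: u64) -> std::io::Result<Self> {
        fs::create_dir_all(&dir)?;
        let file = Self::open_current(&dir)?;
        Ok(SplitWriter { dir, limit, serial: 0, file })
    }

    fn write_line(&mut self, line: &str) -> std::io::Result<()> {
        writeln!(self.file, "{line}")?;
        if self.file.metadata()?.len() >= self.limit {
            self.serial += 1;
            // Roll the full file aside; a packer would now compress it.
            fs::rename(
                self.dir.join("temp.log"),
                self.dir.join(format!("temp.{}.log", self.serial)),
            )?;
            self.file = Self::open_current(&self.dir)?;
        }
        Ok(())
    }
}

fn main() {
    let dir = std::env::temp_dir().join(format!("fast_log_split_demo_{}", std::process::id()));
    let _ = fs::remove_dir_all(&dir); // start from a clean directory
    let mut w = SplitWriter::new(dir, 256).unwrap();
    for i in 0..100 {
        w.write_line(&format!("log line {i}")).unwrap();
    }
    assert!(w.serial >= 1); // at least one roll happened
    println!("rolled {} times", w.serial);
}
```

Because writing is append-only and rolling is just a rename, the hot path stays sequential, which is what the feature list means by full APPEND-mode writing being fast on both SSDs and mechanical disks.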
Use a custom file type (here mmap) with `split`:

```rust
use fast_log::config::Config;
use fast_log::consts::LogSize;
use fast_log::plugin::file_mmap::MmapFile;
use fast_log::plugin::file_split::RollingType;
use fast_log::plugin::packer::LogPacker;

fn main() {
    fast_log::init(
        Config::new()
            .chan_len(Some(100000))
            .console()
            .split::<MmapFile, LogPacker>(
                "target/logs/temp.log",
                LogSize::MB(1),
                RollingType::All,
                LogPacker {},
            ),
    )
    .unwrap();
    for _ in 0..40000 {
        log::info!("Commencing yak shaving");
    }
    log::logger().flush();
    println!("you can see log files in path: {}", "target/logs/");
}
```
Write a custom appender:

```rust
use fast_log::appender::{FastLogRecord, LogAppender};
use fast_log::config::Config;
use log::info;

pub struct CustomLog {}

impl LogAppender for CustomLog {
    fn do_log(&mut self, record: &FastLogRecord) {
        print!("{}", record);
    }
}

fn main() {
    fast_log::init(Config::new().custom(CustomLog {}).chan_len(Some(100000))).unwrap();
    info!("Commencing yak shaving");
    log::logger().flush();
}
```