x-log4rs-sqlite

Experimental log4rs appender that writes to an SQLite database.

Not production ready, but might be useful if you want to process logs programmatically and can live with the performance cost.

Usage

To use a custom appender through a config file (e.g. log4rs.yaml), you have to register the appender's deserializer in the log4rs Deserializers registry and then reference the appender in the config file.

Currently, log entries are buffered in memory inside the appender and written to the DB in batches. The batch write happens synchronously on the thread that emits the log call, which means that logging this way is quite costly.

Outstanding buffered entries need to be flushed explicitly with log::logger().flush(); otherwise they are lost.

The buffer size is currently not configurable and is hardcoded to 1024 entries.

Minimal example program:

use log::info;

fn main() -> anyhow::Result<()> {
    // Register the custom "sqlite" appender kind so it can be referenced
    // from the log4rs config file.
    let deserializers = {
        let mut d = log4rs::config::Deserializers::new();
        d.insert("sqlite", x_log4rs_sqlite::SqliteLogAppenderDeserializer {});
        d
    };
    log4rs::init_file("log4rs.yaml", deserializers)?;
    info!("hello, world");
    // Flush explicitly so buffered entries are written to the database.
    log::logger().flush();
    Ok(())
}

Sample log4rs.yaml config file:

appenders:
  sqlite:
    kind: sqlite
    path: log.sqlite
root:
  level: debug
  appenders:
    - sqlite

Schema

If the DB file does not exist, it will be created with a default schema, which is currently:

create table if not exists entry (
    id varchar(128) not null primary key,
    ts varchar(128) not null,
    level varchar(128) not null,
    message varchar(8192) not null
);

create index if not exists entry_ts_i on entry (ts);

The fields are:

  • id: UUID of the entry, generated by the appender at log time,
  • ts: timestamp of the log entry as a string, computed by the appender, in the format 2023-09-23 19:20:30.401272, i.e. UTC with microsecond precision but without any timezone indicator (see the parsing sketch after this list),
  • level: the log4rs log level as a string,
  • message: the log message as a string.
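
Since ts stores a naive UTC timestamp, a consumer has to attach the timezone itself when parsing. A minimal sketch, assuming the chrono crate (not a dependency of this crate):

use chrono::{DateTime, NaiveDateTime, TimeZone, Utc};

// Parse a ts value as stored by the appender: UTC wall-clock time,
// microsecond precision, no timezone indicator.
fn parse_ts(ts: &str) -> chrono::ParseResult<DateTime<Utc>> {
    let naive = NaiveDateTime::parse_from_str(ts, "%Y-%m-%d %H:%M:%S%.6f")?;
    Ok(Utc.from_utc_datetime(&naive))
}

fn main() {
    let ts = parse_ts("2023-09-23 19:20:30.401272").unwrap();
    println!("{ts}");
}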

Sample data:

sqlite> select id, level, ts, message from entry order by ts desc, id desc limit 5;
78a00529-625f-4d3c-af74-806b7c74b955|INFO|2023-09-26 18:47:26.413391|test log message 999999
09a402b5-ae3f-43ef-91ce-fe6e6c9e6fdb|INFO|2023-09-26 18:47:26.413386|test log message 999998
a906fc6c-44b1-4147-a268-8ea88aac119b|INFO|2023-09-26 18:47:26.413381|test log message 999997
151240f6-75c2-4163-85ba-ff316bffc028|INFO|2023-09-26 18:47:26.413375|test log message 999996
41f3456a-ea9e-488c-a6d8-7b171ee909a3|INFO|2023-09-26 18:47:26.413370|test log message 999995
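
For processing the logs programmatically, the table can be read back with any SQLite client. A minimal sketch, assuming the rusqlite crate (not a dependency of this crate), that prints the five most recent entries:

use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    let conn = Connection::open("log.sqlite")?;
    let mut stmt = conn.prepare(
        "select id, level, ts, message from entry order by ts desc, id desc limit 5",
    )?;
    // Every column is stored as text, so read everything back as String.
    let rows = stmt.query_map([], |row| {
        Ok((
            row.get::<_, String>(0)?,
            row.get::<_, String>(1)?,
            row.get::<_, String>(2)?,
            row.get::<_, String>(3)?,
        ))
    })?;
    for row in rows {
        let (id, level, ts, message) = row?;
        println!("{id}|{level}|{ts}|{message}");
    }
    Ok(())
}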

If the DB file exists but its schema is not compatible with the above, logs will be lost and log4rs will print an error message to standard output at write time.

The DB schema, like the whole crate, should be considered experimental and unstable; it can change between versions in incompatible ways without warning.
