| Crates.io | eventdbx |
| lib.rs | eventdbx |
| version | 3.19.4 |
| created_at | 2025-10-17 20:42:18.16812+00 |
| updated_at | 2025-12-07 02:36:26.130989+00 |
| description | Immutable, event-sourced, nosql, write-side database system. |
| homepage | https://github.com/eventdbx/eventdbx |
| repository | https://github.com/eventdbx/eventdbx |
| max_upload_size | |
| id | 1888388 |
| size | 1,647,613 |
You’ll appreciate this database system. EventDBX is extremely fast, and it lets you spend less time designing schemas and more time writing the code that drives your application.
EventDBX is an event-sourced, NoSQL write-side database designed to provide immutable, append-only storage for events across various domains. It is ideal for applications that need detailed audit trails for compliance, complex stateful business processes, and strong data-integrity guarantees. The core engine focuses on the write side of CQRS—capturing and validating events, persisting aggregate state, and ensuring integrity with Merkle trees.
The companion plugin framework turns those events into durable jobs and delivers them to other services, letting external systems specialise in the read side. Whether you need to hydrate a search index, stream to analytics, feed caches, or trigger workflows, plugins let you extend EventDBX without altering the write path. Each plugin chooses the payload shape it needs (event-only, state-only, schema-only, or combinations) while the queue guarantees delivery and backoff.
Follow the steps below to spin up EventDBX locally. You can run every command with npx (no global install required); if you're contributing to the project, clone the repository and work from the repo root instead.
The CLI installs as dbx. Older releases exposed an eventdbx alias, but the primary command is now dbx.
Start the server
npm install eventdbx -g
dbx start --foreground
- `--foreground` keeps the server attached to the terminal; omit it to daemonise the process.
- `--data-dir <path>` overrides the default `$HOME/.eventdbx` directory.
- Restriction defaults to `default`; switch to `--restrict=off` for permissive prototyping or `--restrict=strict` to require declared schemas on every write.

Switch domains (optional)
dbx checkout -d herds
- EventDBX starts in the `default` domain, which stores data directly under the configured `data_dir`.
- `dbx checkout -d <domain>` (or `dbx checkout <domain>`) changes the active domain and creates isolated storage in `<data_dir>/domains/<domain>` so you can group aggregates and plugins per bounded context.

Define a schema (recommended when running in restricted mode)
dbx schema create person \
--events person_created,person_updated \
--snapshot-threshold 100
Omit --snapshot-threshold to inherit the default configured in config.toml (if any).
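If you prefer to set that server-wide default from the CLI instead of editing config.toml, the `dbx config` command documented in the command reference below exposes a `--snapshot-threshold` flag; whether it backs the same config.toml default is an assumption here, and the value is illustrative:
dbx config --snapshot-threshold 200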
EventDBX stores schemas in ~/.eventdbx/data/schemas.json (or the data_dir you configured). Inspect the full definition at any time with dbx schema <aggregate>; the command prints the JSON the server enforces. For example, a person aggregate that validates email format and enforces name lengths can be declared as:
dbx schema create person --events person_created,person_updated
dbx schema person
{
"person": {
"aggregate": "person",
"snapshot_threshold": null,
"locked": false,
"field_locks": [],
"hidden": false,
"hidden_fields": [],
"column_types": {
"email": { "type": "text", "required": true, "format": "email" },
"name": {
"type": "text",
"rules": { "length": { "min": "1", "max": "64" } }
}
},
"events": {
"person_created": {
"fields": ["first_name", "last_name"]
},
"person_updated": {
"fields": []
}
},
"created_at": "2025-10-26T22:25:24.028129Z",
"updated_at": "2025-10-26T22:27:25.967615Z"
}
}
- `column_types` declare data types, validation formats, and rule blocks per field. Rules can enforce length, numeric ranges, regexes, or cross-field matches.
- `events.<event>.fields` restricts the properties an event may set; leaving the list empty keeps the event permissive.
- `field_locks` and `hidden_fields` control which fields can be updated or returned in aggregate detail calls.
- `locked: true` freezes the schema to prevent further event writes until it is unlocked.

When you need hot reloads or historical context, snapshot schema changes per tenant:
- `dbx tenant schema publish <tenant> [--activate] [--reason <text>]` captures the current schemas.json, writes it to `schemas/versions/<id>.json`, and records metadata in `schemas/schema_manifest.json` under the tenant's data directory. (Omit `<tenant>` to target whichever domain is currently active.) Prefer `dbx schema publish …` if you're already in the schema workflow—the commands are identical and simply default the tenant from the active domain.
- `dbx tenant schema history <tenant> [--json] [--audit]` prints every recorded version plus the audit trail of publish/activate/rollback events.
- `dbx tenant schema diff <tenant> --from <version> --to <version> [--json] [--style patch|unified|split] [--color auto|always|never]` emits either a JSON Patch, a GitHub-style unified diff, or a side-by-side split view. Use `--color always` to force green additions / red removals (the default `auto` enables color only when stdout is a TTY).
- `dbx tenant schema activate|rollback <tenant> --version <id>` advances or rewinds the active pointer. Include `--no-reload` if the daemon is offline; otherwise the CLI tells the server to evict and reload that tenant's schema cache immediately.
- `dbx tenant schema reload <tenant>` forces the running daemon to drop its cached schema/context for that tenant—useful after manual edits or when you disable automatic reloads.

Issue a token for CLI access
dbx token generate --group admin --user jane --expiration 3600
- Add `--tenant <id>` (repeat the flag for multiple tenants) to bind the token to specific tenant ids. The server rejects any request where the supplied `tenantId` does not appear in the token's claims.
- Run `dbx token bootstrap` to ensure `~/.eventdbx/cli.token` exists, or append `--stdout` to print a one-off token without persisting it (pass `--persist` alongside `--stdout` if you still want the file updated). Bootstrap tokens expire after two hours by default; supply `--ttl <seconds>` (for example `--ttl 0` to disable expiry) if you must override the expiration.

Append an event
When the server is running the CLI proxies writes through the control socket automatically. Pass a token with --token or set EVENTDBX_TOKEN to reuse your own credentials; otherwise the CLI mints a short-lived token for the call.
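For example, you can mint a one-off token once and export it for the commands that follow; this sketch assumes `dbx token bootstrap --stdout` prints just the token value:
# reuse a short-lived token for subsequent CLI calls
export EVENTDBX_TOKEN="$(dbx token bootstrap --stdout)"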
dbx aggregate apply person p-002 person_updated \
--field name="Jane Doe" \
--field status=active
If the server is stopped, the same CLI command writes to the local RocksDB store directly.
Inspect recent history at any point with dbx events --filter 'payload.status = "active"' --sort version:desc --take 5 or drill into the payload of a specific event via dbx events <snowflake_id> --json.
Replicate a domain (optional)
dbx checkout -d remote1 --remote <host>:6363 --token "<remote_token>"
dbx checkout stores remote settings per domain (remote.json under the domain data directory). You can re-run the command with just --remote or --token to rotate either value.
When the remote hosts multiple tenants, pass --remote-tenant <id> so subsequent dbx push/dbx pull calls know which tenant to target.
Push schemas first so the destination validates incoming events with the same rules:
dbx push schema remote1
dbx push schema remote1 --publish --publish-reason "rollout #42"
Add --publish to snapshot and activate the schemas on the remote immediately (with optional --publish-label, --publish-force, or --publish-no-reload flags).
Mirror domain data to the remote. Limit the sync to a specific aggregate type or identifier when you need a targeted replication:
dbx push remote1
dbx push remote1 --aggregate invoice
dbx push remote1 --aggregate ledger --id ledger-001
Pull data back down when the remote has captured new events. The pull command performs the same integrity checks (version counts, Merkle root parity) before importing:
dbx pull remote1
dbx pull remote1 --aggregate invoice
Pushing or pulling aborts if either side has diverging history for an aggregate. Use the --concurrency <threads> flag to tune throughput when replicating large domains.
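For instance, a large domain can be replicated with extra worker threads (the thread count here is illustrative):
dbx push remote1 --concurrency 8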
Automate replication with the built-in scheduler:
dbx watch remote1 --mode push --interval 120 --background
dbx watch remote1 --mode bidirectional --aggregate ledger --run-once
dbx watch remote1 --skip-if-active
dbx watch status remote1
watch loops forever (or until --run-once), triggering a push, pull, or bidirectional cycle every --interval seconds. Pass --background to daemonize, --skip-if-active to avoid overlapping runs when another watcher is working on the same domain, and inspect persisted state at any time with dbx watch status <domain> (use --all for a summary of every watcher).
Enable multi-tenant mode in config.toml (set [tenants] multi_tenant = true) to hash tenants across shard directories. EventDBX automatically hashes tenants when no manual assignment exists, and you can override placements with the dbx tenant commands:
dbx tenant assign people --shard shard-0003
dbx tenant unassign sandbox
dbx tenant list --json
dbx tenant stats
Assignments live in a dedicated RocksDB directory (tenant_meta/ under your data root). Unassigned tenants continue to hash across the configured shard count.
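For reference, the multi-tenant switch described above amounts to a small config.toml excerpt; only the key named in this section is shown:
[tenants]
multi_tenant = true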
You now have a working EventDBX instance with an initial aggregate. Explore the Command-Line Reference for the full set of supported operations.
# Pull the latest published release
docker pull thachp/eventdbx:tagname
# Start the daemon with persistent storage and required ports
docker run \
--name eventdbx \
--detach \
--publish 7070:7070 \
--publish 6363:6363 \
--volume "$PWD/data:/var/lib/eventdbx" \
thachp/eventdbx:tagname
// Use the JavaScript/TypeScript client (install via `pnpm add eventdbxjs`).
// Other languages can interact through plugins or HTTP bridges.
import { createClient } from 'eventdbxjs';
const client = createClient({
ip: process.env.EVENTDBX_HOST,
port: Number(process.env.EVENTDBX_PORT) || 6363,
token: process.env.EVENTDBX_TOKEN,
  verbose: true, // should match the verbose_responses setting in the server's config file
});
await client.connect();
// create an aggregate
const snapshot = await client.create('person', 'p-110', 'person_registered', {
payload: { name: 'Jane Doe', status: 'active' },
metadata: { '@source': 'seed-script' },
note: 'seed aggregate',
});
console.log('created aggregate version', snapshot.version);
// apply an event
await client.apply('person', 'p-110', 'person_contact_added', {
payload: { name: 'Jane Doe', status: 'active' },
metadata: { note: 'seed data' },
});
// list the events recorded for aggregate p-110
const history = await client.events('person', 'p-110');
console.log('event count:', history.length);
- Scheduled replication via `dbx watch` (looping scheduler with `--interval`, `--run-once`, and optional `--background`).
- The `/metrics` endpoint reports HTTP traffic and plugin queue health so you can wire EventDBX into Grafana, Datadog, or any other monitoring stack out of the box.
- Token keys live under `[auth]` in `config.toml`; keep `private_key` secret and distribute `public_key` to services that need to validate them. This approach allows for precise control over who can access and modify data, protecting against unauthorized changes.
- Stored event payloads and `tokens.json` are encrypted transparently when a DEK is configured. Metadata such as aggregate identifiers, versions, and Merkle roots remain readable so plugins and integrity checks keep working without additional configuration.
- Schema validation offers built-in formats (`email`, `url`, `wgs_84`, and more), rule blocks for length, range, regex, required fields, and nested properties, plus strict/relaxed enforcement modes.

EventDBX can run in three validation modes, tuned for different phases of development:
- Off (`--restrict=off` or `--restrict=false`): ideal for prototyping and rapid application development. Event payloads bypass schema validation entirely, letting you iterate quickly without pre-registering aggregates, tables, or column types.
- Default (`--restrict=default` or `--restrict=true`): validates events whenever a schema exists but allows aggregates without a declared schema. This matches prior behaviour and suits teams rolling out schema enforcement incrementally.
- Strict (`--restrict=strict`): requires every aggregate to have a schema before events can be appended. Missing schemas fail fast with a clear error so production environments stay aligned with their contracts.

EventDBX ships a single `dbx` binary. Every command accepts an optional `--config <path>` to point at an alternate configuration file.
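For example, a production host might start in strict mode while pointing at a dedicated configuration file (paths are illustrative):
dbx start --restrict=strict --config /etc/eventdbx/config.toml
dbx status --config /etc/eventdbx/config.toml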
- `dbx start [--port <u16>] [--data-dir <path>] [--foreground] [--restrict[=<off|default|strict>]]`: pass `--restrict=off` to bypass validation or `--restrict=strict` to require schemas up front.
- `dbx stop`
- `dbx status`
- `dbx restart [start options…]`
- `dbx destroy [--yes]`: deletes the instance's data (pass `--yes` to skip the confirmation prompt).
- `dbx checkout -d <domain>`: targets the `default` domain by default and creates per-domain data roots under `<data_dir>/domains/<domain>`.
- `dbx merge --from <domain> [--into <domain>] [--overwrite-schemas]`: schemas are only replaced when you pass `--overwrite-schemas`; existing aggregates in the target abort the merge to prevent data loss.
- `dbx config [--port <u16>] [--data-dir <path>] [--cache-threshold <usize>] [--dek <base64>] [--list-page-size <usize>] [--page-limit <usize>] [--plugin-max-attempts <u32>] [--snapshot-threshold <u64>] [--clear-snapshot-threshold]`: the encryption key is supplied via `--dek` (32 bytes of base64). `--list-page-size` sets the default page size for aggregate listings (default 10), `--page-limit` caps any requested page size across list and event endpoints (default 1000), and `--plugin-max-attempts` controls how many retries are attempted before an event is marked dead (default 10).
- `dbx token generate --group <name> --user <name> [--expiration <secs>] [--limit <writes>] [--keep-alive]`
- `dbx token list`
- `dbx token revoke --token <value>`
- `dbx token refresh --token <value> [--expiration <secs>] [--limit <writes>]`
- `dbx schema create <name> --events <event1,event2,...> [--snapshot-threshold <u64>]`
- `dbx schema add <name> --events <event1,event2,...>`
- `dbx schema remove <name> <event>`
- `dbx schema annotate <name> <event> [--note <text>] [--clear]`
- `dbx schema list`

Schemas are stored on disk; when restriction is `default` or `strict`, incoming events must satisfy the recorded schema (and strict additionally requires every aggregate to declare one).
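A short illustrative sequence with the schema commands above (the person_archived event name is hypothetical):
dbx schema add person --events person_archived
dbx schema annotate person person_archived --note "marks a person as archived"
dbx schema list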
- `dbx aggregate create --aggregate <type> --aggregate-id <id> --event <name> [--field KEY=VALUE...] [--payload <json>] [--metadata <json>] [--note <text>] [--token <value>] [--json]`
- `dbx aggregate apply --aggregate <type> --aggregate-id <id> --event <name> --field KEY=VALUE... [--payload <json>] [--stage] [--token <value>] [--note <text>]`: pass `--stage` to queue the event for a later commit.
- `dbx aggregate patch --aggregate <type> --aggregate-id <id> --event <name> --patch <json> [--stage] [--token <value>] [--metadata <json>] [--note <text>]`
- `dbx aggregate list [--cursor <token>] [--take <n>] [--stage]`: pass `--stage` to display queued events instead.
- `dbx aggregate get --aggregate <type> --aggregate-id <id> [--version <u64>] [--include-events]`
- `dbx aggregate verify --aggregate <type> --aggregate-id <id>`
- `dbx aggregate snapshot --aggregate <type> --aggregate-id <id> [--comment <text>]`
- `dbx aggregate archive --aggregate <type> --aggregate-id <id> [--comment <text>]`
- `dbx aggregate restore --aggregate <type> --aggregate-id <id> [--comment <text>]`
- `dbx aggregate remove --aggregate <type> --aggregate-id <id>`: removes an aggregate that has no events (version still 0).
- `dbx aggregate commit`

Aggregate sorting currently accepts `aggregate_type`, `aggregate_id`, `archived`, `created_at`, and `updated_at` fields (with optional `:asc`/`:desc` suffixes). Sorting by version has been removed to keep the CLI aligned with the indexed columns.
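As an illustration of the write and read commands above (the aggregate id and field values are made up):
dbx aggregate create --aggregate person --aggregate-id p-003 --event person_created \
  --field first_name=Ada --field last_name=Lovelace
dbx aggregate get --aggregate person --aggregate-id p-003 --include-events
dbx aggregate verify --aggregate person --aggregate-id p-003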
Cursor pagination tokens are human-readable: active aggregates encode as a:<aggregate_type>:<aggregate_id> (r: for archived), and event cursors append the event version (a:<aggregate_type>:<aggregate_id>:<version>). Grab the last row from a page, form its token, and pass it via --cursor to resume listing.
Timestamp-sorted aggregate listings use ts:<field>:<order>:<scope>:<timestamp_ms>:<aggregate_type>:<aggregate_id> tokens (field is created_at or updated_at, order is asc|desc, and scope is a or r). Take the timestamp from the last row (in milliseconds) to build the cursor for the next page; you can also pass the shorthand ts:<aggregate_type>:<aggregate_id> and the CLI/control clients (including eventdbxjs) will derive the rest when combined with --sort created_at|updated_at.
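Putting the token formats together: a sketch that assumes the last row of the previous page was person/p-050 and that aggregate listings accept the `--sort` flag mentioned above.
# resume an active-scope listing after person/p-050
dbx aggregate list --take 10 --cursor a:person:p-050
# resume a timestamp-sorted listing using the shorthand cursor
dbx aggregate list --take 10 --sort updated_at:desc --cursor ts:person:p-050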
- `dbx events [--aggregate <type>] [--aggregate-id <id>] [--cursor <token>] [--take <n>] [--filter <expr>] [--sort <field[:order],...>] [--json] [--include-archived|--archived-only]`: supports filter expressions (for example `payload.status = "open" AND metadata.note LIKE "retry%"`) and multi-key sorting. Prefix fields with `payload.`, `metadata.`, or `extensions.` to target nested JSON; `created_at`, `event_id`, `version`, and other top-level keys are also available.
- `dbx events --event <snowflake_id> [--json]`: the CLI infers `--event` when the first positional argument is a valid Snowflake id.
- `dbx aggregate export [<type>] [--all] --output <path> [--format csv|json] [--zip] [--pretty]`: add `--zip` to bundle the output into an archive.

`--payload` lets you provide the full event document explicitly; use `dbx aggregate patch` when you need to apply an RFC 6902 JSON Patch server-side.
Every aggregate command ultimately turns into a small set of RocksDB reads or writes. Ballpark complexities (assume hot data in cache):
- `aggregate list`: iterates over the requested page of aggregates. Runtime is O(k) in the page size (default `list_page_size`) when you follow the natural key order (type → id) or stick to single-field timestamp sorts; arbitrary filters or multi-field sorts fall back to an O(N) scan because predicates are evaluated in memory after loading each aggregate.
- `aggregate get`: one state lookup (O(log N)) plus optional event scanning when you request `--include-events` or `--version` (adds O(Eₐ) for that aggregate's events) and lightweight JSON parsing of the returned map.
- `aggregate select`: uses the same state lookup as `get` (O(log N)) and walks each requested dot path in-memory; no additional RocksDB reads are taken, so cost is dominated by the selected payload size.
- `aggregate apply`: validates the payload, merges it into the materialized state, and appends the event in a single batch write. Time is proportional to the payload size being processed.
- `aggregate patch`: reads the current state (same cost as `get`), applies the JSON Patch document, then appends the result—effectively O(payload + patch_ops).
- `aggregate verify`: recomputes the Merkle root for the aggregate's events (O(Eₐ)).

In practice those costs are dominated by payload size and the number of events you ask the CLI to stream; hot aggregates tend to stay in the RocksDB block cache, keeping per-operation latency close to constant.
| Operation | Time complexity | Notes |
|---|---|---|
| `aggregate list` | O(k) / O(N) | O(k) for cursor-order or timestamp-indexed sorts; O(N) when filters/sorts can't use an index. |
| `aggregate get` | O(log N + Eₐ + P) | Single state read plus optional event scan and JSON parsing. |
| `aggregate select` | O(log N + P_selected) | Same state read as get; dot-path traversal happens in memory. |
| `aggregate apply` | O(P) | Payload validation + merge + append in one RocksDB batch. |
| `aggregate patch` | O(log N + P + patch_ops) | Reads state, applies JSON Patch, then appends the patch payload. |
| `aggregate verify` | O(Eₐ) | Recomputes the Merkle root across the aggregate's events. |
Staged events are stored in .eventdbx/staged_events.json. Use aggregate apply --stage to add entries to this queue, inspect them with aggregate list --stage, and persist the entire batch with aggregate commit. Events are validated against the active schema whenever restriction is default or strict; the strict mode also insists that a schema exists before anything can be staged. The commit operation writes every pending event in one RocksDB batch, guaranteeing all-or-nothing persistence.
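A minimal staging round-trip using the flags described above (the aggregate id and field value are illustrative):
dbx aggregate apply --aggregate person --aggregate-id p-002 --event person_updated \
  --field status=pending --stage
dbx aggregate list --stage
dbx aggregate commit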
- `dbx plugin install <plugin> <version> --source <path|url> [--bin <file>] [--checksum <sha256>] [--force]`
- `dbx plugin config tcp --name <label> --host <hostname> --port <u16> [--payload <all|event-only|state-only|schema-only|event-and-schema>] [--disable]`
- `dbx plugin config http --name <label> --endpoint <host|url> [--https] [--header KEY=VALUE]... [--payload <all|event-only|state-only|schema-only|event-and-schema>] [--disable]`
- `dbx plugin config log --name <label> --level <trace|debug|info|warn|error> [--template "text with {aggregate} {event} {id}"] [--payload <all|event-only|state-only|schema-only|event-and-schema>] [--disable]`
- `dbx plugin config process --name <instance> --plugin <id> --version <semver> [--arg <value>]... [--env KEY=VALUE]... [--working-dir <path>] [--payload <all|event-only|state-only|schema-only|event-and-schema>] [--disable]`
- `dbx plugin config capnp --name <label> --host <hostname> --port <u16> [--payload <all|event-only|state-only|schema-only|event-and-schema>] [--disable]`
- `dbx plugin enable <label>`
- `dbx plugin disable <label>`
- `dbx plugin remove <label>`
- `dbx plugin test`
- `dbx plugin list`
- `dbx queue`
- `dbx queue clear`
- `dbx queue retry [--event-id <job-id>]`
- `dbx plugin replay <plugin-name> <aggregate> [<aggregate_id>] [--payload-mode <all|event-only|state-only|schema-only|event-and-schema|extensions-only>]`

Plugins consume jobs from a durable RocksDB-backed queue. EventDBX enqueues a job for every aggregate mutation, and each plugin can opt into the data it needs—event payloads, materialized state, schemas, or combinations thereof. Clearing dead entries prompts for confirmation to avoid accidental removal. Manual retries run the failed jobs immediately; use `--event-id` to target a specific entry.
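For example, an HTTP emitter can be registered and enabled like this (the instance name, endpoint, and header values are placeholders):
dbx plugin config http --name search-sync --endpoint search.internal:8080 \
  --header X-Api-Key=changeme --payload event-and-schema
dbx plugin enable search-sync
dbx plugin list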
- `dbx upgrade [<version>|latest] [--no-switch] [--print-only]`: downloads the release into `~/.eventdbx/versions/<target>/<tag>/` and switches the active `dbx` binary unless `--no-switch` is supplied. The CLI looks up releases from GitHub, so you can use `latest` or supply a tag like `v1.13.2` (omitting the leading `v` also works). Versions lower than v1.13.2 are rejected because upgrades are unsupported before that release. Use `--print-only` to preview the download and activation steps without performing them. Shortcut syntax `dbx upgrade@<version>` resolves through the same lookup.
- `dbx upgrade use <version> [--print-only]`: switches between already-downloaded versions, similar to `nvm`. The command references the locally cached binary; on Windows the switch must still be completed manually by replacing the executable.
- `dbx upgrade installed [--json]`: lists the locally cached versions; use `--json` for machine-readable output that includes the detected target triple.
- `dbx upgrade --suppress <version>`: silences upgrade reminders for that release; pass `latest` to ignore the current release until a newer one ships. Use `dbx upgrade --clear-suppress` to re-enable reminders for all releases.
- `dbx upgrade list [--limit <n>] [--json]`: lists releases, marking cached versions with `[installed]` and the active release with `[active]`. The default limit is 20; use `--json` for a machine-readable list, and call `dbx upgrade@list` for the same shortcut.

The CLI checks for new releases on startup and prints a reminder when a newer version is available. Set `DBX_NO_UPGRADE_CHECK=1` to bypass the check for automation scenarios.
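A typical upgrade session might look like this (the versions shown depend on the releases available at the time):
dbx upgrade list --limit 5
dbx upgrade latest
# skip the startup release check in CI or other automation
DBX_NO_UPGRADE_CHECK=1 dbx status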
- `dbx backup --output <path> [--force]`
- `dbx restore --input <path> [--data-dir <path>] [--force]`: pass `--data-dir` to override the stored location, and `--force` to overwrite non-empty destinations. The server must be stopped before restoring.

Plugins fire after every committed event to keep external systems in sync. Each plugin type delivers events through a different channel:
- TCP: sends the `EventRecord` to the configured socket.
- HTTP: posts the `EventRecord` JSON to an endpoint with optional headers; add `--https` during configuration to force HTTPS when the endpoint lacks a scheme.
- Log: writes through `tracing` at the configured level. By default: `aggregate=<type> id=<id> event=<event>`.
- Process: runs binaries from the `dbx-plugins` workspace or your own extensions.

Process plugins are distributed as zip/tar bundles. Install them with `dbx plugin install <plugin> <version> --source <path-or-url>`—the bundle is unpacked to `~/.eventdbx/plugins/<plugin>/<version>/<target>/`, where `<target>` matches the current OS/architecture (for example, `x86_64-apple-darwin`). Official bundles live in the dbx-plugins releases; pass the asset URL to `--source` or point it at a local build while developing. After installation, bind the binary to an instance:
dbx plugin config process \
--name search \
--plugin dbx_search \
--version 1.0.0 \
--arg "--listen=0.0.0.0:8081" \
--env SEARCH_CLUSTER=https://example.com
dbx plugin enable search
EventDBX supervises the subprocess, restarts it on failure, and delivers every EventRecord/aggregate snapshot over the stream. Plugins that run as independent services can continue to use the TCP/HTTP emitters instead.
Failed deliveries are automatically queued and retried with exponential backoff. The server keeps attempting until the plugin succeeds or the aggregate is removed, ensuring transient outages do not drop notifications. Use dbx queue to inspect pending/dead event IDs.
Plugin configurations are stored in .eventdbx/plugins.json. Each plugin instance requires a unique --name so you can update, enable, disable, remove, or replay it later. plugin enable validates connectivity (creating directories, touching files, or checking network access) before marking the plugin active. Remove a plugin only after disabling it with plugin disable <name>. plugin replay resends stored events for a single aggregate instance—or every instance of a type—through the selected plugin; include --payload-mode to temporarily override the configured payload shape for the replay.
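For example, to inspect the queue, retry a failed delivery, and replay a single aggregate through the `search` instance configured above (the job id is illustrative):
dbx queue
dbx queue retry --event-id 1234567890123
dbx plugin replay search person p-110 --payload-mode event-only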
Need point-in-time snapshots instead of streaming plugins? Use dbx aggregate export to capture aggregate state as CSV or JSON on demand.
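For instance (output paths are illustrative):
dbx aggregate export person --output ./exports/person.json --format json --pretty
dbx aggregate export --all --output ./exports/aggregates --format csv --zip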
Example HTTP/TCP payload (EventRecord):
{
"aggregate_type": "patient",
"aggregate_id": "p-001",
"event_type": "patient-updated",
"payload": {
"status": "inactive",
"comment": "Archived via API"
},
"extensions": {
"@analytics": {
"correlation_id": "rest-1234"
}
},
"metadata": {
"event_id": "1234567890123",
"created_at": "2024-12-01T17:22:43.512345Z",
"issued_by": {
"group": "admin",
"user": "jane"
}
},
"version": 5,
"hash": "cafe…",
"merkle_root": "deadbeef…"
}
Heads-up for plugin authors
The dbx-plugins surfaces now receive `event_id` values as Snowflake strings and may optionally see an `extensions` object alongside `payload`. Update custom handlers to treat `metadata.event_id` as a stringified Snowflake and to ignore or consume the new `extensions` envelope as needed.
Each entry in the column_types map declares both a storage type and the rules EventDBX enforces. Supported types:
| Type | Accepted input | Notes |
|---|---|---|
| `integer` | JSON numbers or strings that parse to a signed 64-bit integer | Rejects values outside the i64 range. |
| `float` | JSON numbers or numeric strings | Stored as f64; scientific notation is accepted. |
| `decimal(precision,scale)` | JSON numbers or strings | Enforces total digits ≤ precision and fractional digits ≤ scale. |
| `boolean` | JSON booleans, 0 / 1, or "true", "false", "1", "0" | Values are normalised to true / false. |
| `text` | UTF-8 strings | Use length, contains, or regex rules for additional constraints. |
| `timestamp` | RFC 3339 timestamps as strings | Normalised to UTC. |
| `date` | YYYY-MM-DD strings | Parsed as a calendar date without a timezone. |
| `json` | Any JSON value | No per-field validation is applied; use when you want to store free-form payloads. |
| `binary` | Base64-encoded strings | Decoded to raw bytes before validation; length counts bytes after decoding. |
| `object` | JSON objects | Enable nested validation via the properties rule (see below). Extra keys not listed remain untouched. |
Rules are optional and can be combined when the target type supports them:
- `required`: the field must be present in every event payload.
- `contains` / `does_not_contain`: case-sensitive substring checks for text fields.
- `regex`: one or more regular expressions that text fields must satisfy.
- `format`: built-in string validators; choose `email`, `url`, `credit_card`, `country_code` (ISO 3166-1 alpha-2), `iso_8601` (RFC 3339 timestamp), `wgs_84` (latitude/longitude in decimal degrees), `camel_case`, `snake_case`, `kebab_case`, `pascal_case`, or `upper_case_snake_case`.
- `length`: `{ "min": <usize>, "max": <usize> }` bounds the length of text (characters) or binary (decoded bytes).
- `range`: `{ "min": <value>, "max": <value> }` for numeric and temporal types (integer, float, decimal, timestamp, date). Boundary values must parse to the column's type.
- `properties`: nested `column_types` definitions for object columns, enabling recursion with the same rule set as top-level fields.

Use `dbx schema field <aggregate> <field> …` to manage types and rules without editing schemas.json. It can set `--type <text|integer|…>`, toggle `--required`/`--not-required`, enforce `--format <email|url|…>`, swap `--regex` / `--contains` lists, adjust `--length-min` / `--length-max`, feed JSON rule blocks via `--rules @rules.json` or `--properties @object_rules.json`, and clear definitions (`--clear-type`, `--clear-rules`, `--clear-format`, etc.). Pair it with `dbx schema alter <aggregate> <event>` to append/remove event field allow-lists or replace them entirely via `--set`/`--clear`.
dbx schema field person email --type text --format email --required
dbx schema field person status --regex '^(pending|active|blocked)$'
dbx schema field person profile --type object --properties @profile_rules.json
dbx schema alter person person_created --add first_name,last_name
dbx schema alter person person_updated --set status,comment
dbx schema alter person person_updated --clear
Performance benchmarks and workload scenarios live in the eventdbx-perf repository. Clone that project to review the current load profiles, adjust parameters, or submit new variations as you evaluate throughput and latency trade-offs.
All tests executed on the same host using Docker-based databases, single-thread client, datasets up to 10 M records. Latency ≈ mean operation time; throughput = operations per second (converted from ns → µs).
| Engine | Typical Throughput (ops/s) | Typical Latency (µs) | Scaling Trend | Summary & Observation |
|---|---|---|---|---|
| EventDBX | 1 400 – 2 000 | 0.5 – 0.8 | Flat (1 K → 10 M) | RocksDB-based append-only core with hot-aggregate caching keeps performance nearly constant. Excels at write-heavy, event-sourced workloads with verifiable integrity. |
| PostgreSQL 15 | 1 000 – 1 900 | 0.6 – 1.0 | Stable / slightly variable | Strong transactional baseline; planner + WAL sync add moderate overhead. Excellent for mixed OLTP queries but heavier per-event cost. |
| MongoDB 7 | 400 – 1 000 | 1.5 – 2.5 | Gradual decline with size | Flexible JSON-document store with moderate efficiency. Serialization and journaling add overhead; roughly half of EventDBX throughput. |
| SQL Server 2022 | 50 – 180 | 5 – 20 | Drops quickly > 100 K | High latency and lowest throughput; B-tree structure and lock coordination dominate under load. Best suited for transactional consistency, not high-velocity writes. |
EventDBX delivers the highest sustained throughput and lowest latency, outperforming PostgreSQL by ~2×, MongoDB by ~3×, and SQL Server by >10×—while maintaining sub-microsecond response times even at multi-million-record scale.
- Branch from `develop`.
- Use conventional commit messages (for example `feat: add new feature` or `fix: correct a bug`).
- Open pull requests against the `develop` branch of the original repo. Describe your changes and link to any related issues.
- Formatting is configured in `.prettierrc.json`; use it for Markdown/JSON/YAML changes.
- Commit messages are linted against `.commitlintrc.json`.

© 2025 Patrick Thach and contributors
EventDBX is open source software licensed under the MIT License.
See the LICENSE file in the repository for details.