| Crates.io | datafusion-ducklake |
| lib.rs | datafusion-ducklake |
| version | 0.0.4 |
| created_at | 2025-10-24 15:47:37.815358+00 |
| updated_at | 2026-01-14 05:59:03.65951+00 |
| description | DuckLake query engine for Rust, built with DataFusion. |
| homepage | https://github.com/hotdata-dev/datafusion-ducklake |
| repository | https://github.com/hotdata-dev/datafusion-ducklake |
| max_upload_size | |
| id | 1898620 |
| size | 1,156,370 |
This is an early pre-release and very much a work in progress.
A DataFusion extension for querying DuckLake. DuckLake is an integrated data lake and catalog format that stores metadata in SQL databases and data as Parquet files on disk or object storage.
The goal of this project is to make DuckLake a first-class, Arrow-native lakehouse format inside DataFusion.
Current features include:

- Relative path resolution for data files (data_path, schema, table, file)
- `information_schema` for catalog metadata (snapshots, schemas, tables, columns, files)
- Table functions: `ducklake_snapshots()`, `ducklake_table_info()`, `ducklake_list_files()`, `ducklake_table_changes()`

This project is under active development. The roadmap below reflects major areas of work currently underway or planned next. For the most up-to-date view, see the open issues and pull requests in this repository.
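Once a DuckLake catalog is registered with a `SessionContext` (see the usage example further down), this metadata is reachable with plain SQL. A minimal sketch, assuming a catalog registered under the name `ducklake`; the exact view and column names the crate exposes may differ:

```rust
// Minimal sketch: inspect catalog metadata through information_schema.
// Assumes `ctx` is a SessionContext with the DuckLake catalog registered
// as "ducklake" (see the usage example below); view/column names are
// assumptions based on the feature list above.
let tables = ctx
    .sql("SELECT table_schema, table_name FROM ducklake.information_schema.tables")
    .await?;
tables.show().await?;
```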
Catalog backends and optional functionality are selected with Cargo feature flags:

| Feature | Description | Default |
|---|---|---|
| `metadata-duckdb` | DuckDB catalog backend | ✅ |
| `metadata-postgres` | PostgreSQL catalog backend | |
| `metadata-mysql` | MySQL catalog backend | |
| `metadata-sqlite` | SQLite catalog backend | |
| `encryption` | Parquet Modular Encryption (PME) support | |
```sh
# DuckDB only (default)
cargo build

# PostgreSQL only
cargo build --no-default-features --features metadata-postgres

# MySQL only
cargo build --no-default-features --features metadata-mysql

# SQLite only
cargo build --no-default-features --features metadata-sqlite

# All backends
cargo build --features metadata-postgres,metadata-mysql,metadata-sqlite
```
```sh
# DuckDB catalog
cargo run --example basic_query -- catalog.db "SELECT * FROM main.users"

# PostgreSQL catalog
cargo run --example basic_query --features metadata-postgres -- \
    "postgresql://user:password@localhost:5432/database" "SELECT * FROM main.users"

# MySQL catalog
cargo run --example basic_query --features metadata-mysql -- \
    "mysql://user:password@localhost:3306/database" "SELECT * FROM main.users"

# SQLite catalog
cargo run --example basic_query --features metadata-sqlite -- \
    "sqlite:///path/to/catalog.db" "SELECT * FROM main.users"
```
```rust
use std::sync::Arc;

use datafusion::execution::runtime_env::RuntimeEnv;
use datafusion::prelude::*;
use datafusion_ducklake::{DuckLakeCatalog, DuckdbMetadataProvider};
use object_store::aws::AmazonS3Builder;
use object_store::ObjectStore;
use url::Url;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create metadata provider
    let provider = DuckdbMetadataProvider::new("catalog.db")?;

    // Create runtime (register object stores if using S3/MinIO)
    let runtime = Arc::new(RuntimeEnv::default());

    // Example: Register S3/MinIO object store
    let s3: Arc<dyn ObjectStore> = Arc::new(
        AmazonS3Builder::new()
            .with_endpoint("http://localhost:9000") // Your MinIO endpoint
            .with_bucket_name("ducklake-data")      // Your bucket name
            .with_access_key_id("minioadmin")       // Your credentials
            .with_secret_access_key("minioadmin")   // Your credentials
            .with_region("us-west-2")               // Any region works for MinIO
            .with_allow_http(true)                  // Required for http:// endpoints
            .build()?,
    );
    runtime.register_object_store(&Url::parse("s3://ducklake-data/")?, s3);

    // Create DuckLake catalog
    let catalog = DuckLakeCatalog::new(provider)?;

    // Create session and register catalog
    let ctx = SessionContext::new_with_config_rt(
        SessionConfig::new().with_default_catalog_and_schema("ducklake", "main"),
        runtime,
    );
    ctx.register_catalog("ducklake", Arc::new(catalog));

    // Query
    let df = ctx.sql("SELECT * FROM ducklake.main.my_table").await?;
    df.show().await?;

    Ok(())
}
```
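The table functions listed above can be queried through the same session. A hedged sketch, assuming the crate registers them with the `SessionContext` when the catalog is created; the call signature here is an assumption:

```rust
// Sketch: list DuckLake snapshots via the crate's table function.
// Assumes `ctx` is the SessionContext from the example above and that
// ducklake_snapshots() takes no arguments (an assumption).
let snapshots = ctx.sql("SELECT * FROM ducklake_snapshots()").await?;
snapshots.show().await?;
```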
This project is evolving alongside DataFusion and DuckLake. APIs may change as core abstractions are refined.
Feedback, issues, and contributions are welcome.