| Crates.io | brichka |
| lib.rs | brichka |
| version | 0.2.0 |
| created_at | 2026-01-25 17:21:33.147575+00 |
| updated_at | 2026-01-25 20:48:31.8174+00 |
| description | CLI tools for Databricks |
| homepage | https://github.com/nikolaiser/brichka |
| repository | https://github.com/nikolaiser/brichka |
| max_upload_size | |
| id | 2069117 |
| size | 120,181 |
A lightweight CLI for running code on Databricks clusters with notebook-like execution contexts and Unity Catalog autocomplete.
Works standalone or with the Neovim plugin.
Prerequisites:
Try it out:
nix shell github:nikolaiser/brichka
Add to your flake inputs:
{
  inputs = {
    brichka = {
      url = "github:nikolaiser/brichka";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { self, nixpkgs, brichka, ... }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      modules = [
        ({ pkgs, ... }: {
          # ...
          environment.systemPackages = [ brichka.packages.${pkgs.system}.brichka ];
          # ...
        })
      ];
    };
  };
}
Or install with Cargo:
cargo install brichka
Or with Homebrew:
brew install nikolaiser/tap/brichka
Usage:
brichka [OPTIONS] <COMMAND>
Commands:
cluster Cluster commands
config Config commands
init Initialize a new execution context in the current working directory
status Status commands
run Run code on the interactive cluster
lsp Start LSP server for Unity Catalog completion
help Print this message or the help of the given subcommand(s)
Options:
--cwd <CWD> Override the current working directory
--debug Print debug logs
-h, --help Print help
Set the cluster to use:
# For current directory only
brichka config cluster
# Or globally
brichka config --global cluster
Run code on the configured cluster:
# Inline
brichka run --language "sql" "select * from foo.bar.bazz"
# From file/stdin
cat script.sc | brichka run --language "scala" -
Results are returned as JSONL in a temporary file:
{"type":"table","path":"/tmp/b909c39f1a934c1eb76595601a413bcc.jsonl"}
View results with any tool that reads JSONL, such as visidata or jq.
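For instance, with jq (using the temporary path from the example envelope above; your actual path will differ):
jq . /tmp/b909c39f1a934c1eb76595601a413bcc.jsonl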
Create a shared context where commands can reference each other's output, like notebook cells:
brichka init
Now all commands in this directory share state:
-- First command
create or replace temporary view _data as select * from catalog.schema.table
// Second scala command can access _data
display(spark.table("_data"))
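For example, the two cells above can be run as separate invocations of the run command shown earlier; because the directory was initialized, the temporary view created by the first is visible to the second:
brichka run --language "sql" "create or replace temporary view _data as select * from catalog.schema.table"
brichka run --language "scala" 'display(spark.table("_data"))'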
Get autocomplete for catalog/schema/table names in any editor.
Add ~/.config/nvim/lsp/brichka.lua:
---@type vim.lsp.Config
return {
cmd = { "brichka", "lsp" },
filetypes = { "sql", "scala" },
}
Then enable in your config:
vim.lsp.enable("brichka")
For Metals (Scala LSP) support, use .sc files with this template:
// Adjust to your target scala version
//> using scala 2.13
// Adjust to your target spark version
//> using dep org.apache.spark::spark-sql:3.5.7
// brichka: exclude
import org.apache.spark.sql.functions._
import org.apache.spark.sql.{DataFrame, SparkSession}
val spark: SparkSession = ???
def display(df: DataFrame): Unit = ()
// brichka: include
// Your code here
The // brichka: exclude and // brichka: include comments delimit regions that stay local: inside them you can add dummy values for Databricks-provided objects (like spark) that Metals needs for typechecking but that shouldn't be sent to the cluster.
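For instance (a hypothetical snippet, assuming the markers strip excluded regions before submission as described above): Metals typechecks everything locally against the stub, while only the lines after the include marker reach the cluster, where Databricks provides the real session.
// brichka: exclude
import org.apache.spark.sql.SparkSession
val spark: SparkSession = ???           // local stub for Metals, never sent
// brichka: include
val df = spark.range(100).toDF("n")     // sent to the cluster as-is
display(df)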
For multiple files in notebook mode, subsequent files should reference the first:
// brichka: include
//> using file fst.sc
// brichka: exclude
import fst._
import org.apache.spark.sql.functions._
// brichka: include
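For context, here is a hypothetical fst.sc that the second file above could import; the name base and the table are illustrative, and the file follows the same template as the previous section:
// fst.sc
// brichka: exclude
import org.apache.spark.sql.{DataFrame, SparkSession}
val spark: SparkSession = ???
// brichka: include
val base: DataFrame = spark.table("catalog.schema.table")
With scala-cli, top-level definitions in fst.sc are wrapped in an object named fst, which is why import fst._ brings base into scope for Metals.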