| Crates.io | lamina |
| lib.rs | lamina |
| version | 0.0.8 |
| created_at | 2025-09-04 10:39:26.958886+00 |
| updated_at | 2025-12-31 07:13:24.064525+00 |
| description | High-performance compiler backend for Lamina Intermediate Representation |
| homepage | |
| repository | https://github.com/SkuldNorniern/lamina |
| max_upload_size | |
| id | 1824020 |
| size | 3,936,411 |
High-Performance Compiler Backend
Lamina is a compiler backend for the Lamina Intermediate Representation (IR): a small, statically-typed SSA IR.
I built Lamina for a few reasons; chief among them, Lamina IR is meant to serve as a mid-level intermediate representation between language frontends and native code generation.
Design Principles:
Lamina uses a two-stage compilation pipeline:
Lamina IR → MIR (Machine IR) → Optimizations → Native Assembly
Lamina includes an optimization pipeline with configurable optimization levels. The pipeline includes a number of transform passes (see src/mir/transform/ for the full list).
Lamina directly generates machine code for multiple target architectures without relying on external backends.
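To make that flow concrete, here is a minimal sketch that drives the whole pipeline from textual Lamina IR to assembly through the crate's compile_lamina_ir_to_target_assembly entry point (the MIR lowering and optimization stages run inside that single call). The main wrapper and the .expect error handling are illustrative scaffolding, not part of the library.

```rust
// Minimal end-to-end sketch: textual Lamina IR in, target assembly out.
// MIR lowering and the optimization passes run inside this one call.
fn main() {
    let ir_text = r#"
fn @add(i32 %a, i32 %b) -> i32 {
entry:
  %sum = add.i32 %a, %b
  ret.i32 %sum
}
"#;

    let mut assembly = Vec::new();
    lamina::compile_lamina_ir_to_target_assembly(ir_text, &mut assembly, "x86_64_linux")
        .expect("compilation failed");

    println!("{}", String::from_utf8(assembly).expect("assembly should be UTF-8 text"));
}
```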
Supported features:
- Basic Arithmetic: All arithmetic operations (add, sub, mul, div, rem, bitwise ops)
- Control Flow: Conditionals, loops, branching, and phi nodes for SSA
- Function Calls: Recursive and non-recursive function calls with proper ABI
- Memory Operations: Stack and heap allocations, load/store operations
- Type System: Primitives, arrays, structs, tuples, and pointers
- I/O Operations: Print statements with printf integration for all supported architectures
- Performance: Competitive with systems languages in benchmarks
- Optimization Pipeline: Configurable optimization levels with multiple transform passes
The IRBuilder API makes it straightforward to build compilers for other languages: a typical frontend uses the builder to construct functions, parameters, and instructions, then hands the finished module to the backend for code generation.
Here's a simple example of creating a basic arithmetic function using the Lamina IRBuilder:
```rust
use lamina::ir::{IRBuilder, Type, PrimitiveType, BinaryOp};
use lamina::ir::builder::{var, i32};

// Create function: fn @add(i32 %a, i32 %b) -> i32
let mut builder = IRBuilder::new();
builder
    .function_with_params("add", vec![
        lamina::FunctionParameter {
            name: "a",
            ty: Type::Primitive(PrimitiveType::I32),
            annotations: vec![],
        },
        lamina::FunctionParameter {
            name: "b",
            ty: Type::Primitive(PrimitiveType::I32),
            annotations: vec![],
        },
    ], Type::Primitive(PrimitiveType::I32))
    // Add the two parameters
    .binary(BinaryOp::Add, "sum", PrimitiveType::I32, var("a"), var("b"))
    // Return the result
    .ret(Type::Primitive(PrimitiveType::I32), var("sum"));

let module = builder.build();

// Compile to assembly
use std::io::Write;
let mut assembly = Vec::new();
lamina::compile_lamina_ir_to_target_assembly(
    &format!("{}", module), // Convert module to IR text
    &mut assembly,
    "x86_64_linux",
)?;

println!("Generated assembly:\n{}", String::from_utf8(assembly)?);
```
This generates the equivalent Lamina IR:
```
fn @add(i32 %a, i32 %b) -> i32 {
entry:
  %sum = add.i32 %a, %b
  ret.i32 %sum
}
```
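From here, one way to get a runnable binary is to write the assembly to a file and let a system C compiler assemble and link it (Clang or GCC, as listed under the requirements below). This is only a sketch: the file names are made up, and it assumes the compiled module provides the program's entry point.

```rust
use std::process::Command;

// `assembly` holds the bytes produced by compile_lamina_ir_to_target_assembly above.
std::fs::write("add.s", &assembly)?;

// Let the system C compiler assemble and link the generated assembly.
// Hypothetical file names; the module must supply the program's entry point.
let status = Command::new("cc").args(["add.s", "-o", "add"]).status()?;
assert!(status.success(), "cc failed to assemble/link add.s");
```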
The following results are from our 256×256 2D matrix multiplication benchmark (500 runs). Ratios are relative to Lamina (1.00x baseline); values below 1.00x mean less time or less memory than Lamina.
| Language | Time (s) | Time Ratio | Memory (MB) | Memory Ratio |
|---|---|---|---|---|
| Lamina | 0.0372 | 1.00x (baseline) | 1.38 | 1.00x |
| Zig | 0.0021 | 0.06x | 0.50 | 0.36x |
| C | 0.0098 | 0.26x | 1.50 | 1.09x |
| C++ | 0.0101 | 0.27x | 3.49 | 2.54x |
| Go | 0.0134 | 0.36x | 1.60 | 1.16x |
| Nim | 0.0134 | 0.36x | 1.50 | 1.09x |
| Rust | 0.0176 | 0.47x | 1.91 | 1.39x |
| C# | 0.0333 | 0.90x | 30.39 | 22.10x |
| Java | 0.0431 | 1.16x | 42.93 | 31.22x |
| PHP | 0.5720 | 15.37x | 20.50 | 14.91x |
| Ruby | 1.4744 | 39.63x | 23.25 | 16.91x |
| Python | 2.2585 | 60.70x | 12.38 | 9.00x |
| JavaScript | 2.7995 | 75.24x | 53.20 | 38.69x |
Requirements:
- Rust 1.89+ (2024 edition)
- Clang/GCC for linking the generated assembly
- macOS, Linux, or Windows
x86_64 (Intel/AMD 64-bit)
- x86_64_linux: Linux x86_64
- x86_64_macos: macOS x86_64 (Intel Macs)
- x86_64_windows: Windows x86_64
- x86_64_unknown: Generic x86_64 (ELF conventions)

AArch64 (ARM 64-bit)
- aarch64_macos: macOS ARM64 (Apple Silicon)
- aarch64_linux: Linux ARM64
- aarch64_windows: Windows ARM64
- aarch64_unknown: Generic AArch64 (ELF conventions)

RISC-V
- riscv32_unknown: RISC-V 32-bit
- riscv64_unknown: RISC-V 64-bit
- riscv128_unknown: RISC-V 128-bit (nightly feature)

WebAssembly
- wasm32_unknown: WebAssembly 32-bit
- wasm64_unknown: WebAssembly 64-bit
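Switching targets only changes the target name string passed to the compile entry point. A sketch, assuming ir_text holds textual Lamina IR as in the earlier examples and that the other listed target names are accepted the same way as "x86_64_linux":

```rust
// Emit assembly for several of the targets listed above from the same IR.
for target in ["x86_64_linux", "aarch64_macos", "riscv64_unknown"] {
    let mut asm = Vec::new();
    lamina::compile_lamina_ir_to_target_assembly(ir_text, &mut asm, target)?;
    println!("=== {target} ===\n{}", String::from_utf8(asm)?);
}
```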
Lamina IR files consist of type declarations, global definitions, and function declarations:
```
# Type declaration
type @Vec2 = struct { x: f32, y: f32 }

# Global value
global @message: [5 x i8] = "hello"

# Function with annotations
@export
fn @add(i32 %a, i32 %b) -> i32 {
entry:
  %sum = add.i32 %a, %b
  ret.i32 %sum
}

# Function with control flow
fn @conditional(i32 %x) -> i32 {
entry:
  %is_pos = gt.i32 %x, 0
  br %is_pos, positive, negative
positive:
  ret.i32 1
negative:
  ret.i32 -1
}
```
```
# Stack allocation
%ptr = alloc.ptr.stack i32

# Heap allocation
%hptr = alloc.ptr.heap i32

# Load and store
store.i32 %ptr, %val
%loaded = load.i32 %ptr

# Optional heap deallocation
dealloc.heap %hptr

# Struct field access
%field_ptr = getfield.ptr %struct, 0
%value = load.f32 %field_ptr

# Array element access
%elem_ptr = getelem.ptr %array, %index
%value = load.i32 %elem_ptr
```
- Enhanced Optimizations: Complete optimization pipeline, loop analysis, auto-vectorization
- Language Integration: C and Rust frontends for compiling to Lamina IR
- JIT Compilation: Dynamic code execution engine
- Debugging Tools: Enhanced debugging information, DWARF support, interactive debugger
- GPU Acceleration: CUDA / Vulkan compute shader support
- SIMD Support: Auto-vectorization and explicit SIMD types
Please feel free to submit pull requests or open issues for bugs and feature requests.
Lamina - A Modern Compiler Backend
Built with Rust | Designed for Performance | Open Source