| Crates.io | lamina |
| lib.rs | lamina |
| version | 0.0.5 |
| created_at | 2025-09-04 10:39:26.958886+00 |
| updated_at | 2025-09-24 01:57:28.632568+00 |
| description | High-performance compiler backend for Lamina Intermediate Representation |
| homepage | |
| repository | https://github.com/SkuldNorniern/lamina |
| max_upload_size | |
| id | 1824020 |
| size | 2,862,122 |
High-Performance Compiler Backend
Lamina is a compiler infrastructure for the Lamina Intermediate Representation (IR), a statically-typed, SSA-based language designed to bridge the gap between high-level languages and low-level backends like LLVM and Cranelift.
Lamina IR serves as a mid-level intermediate representation built around a small set of design principles: static typing, SSA form, and straightforward lowering to native code.
Unlike many IR systems that rely on external backends such as LLVM or Cranelift, Lamina generates machine code for multiple target architectures directly. This avoids a dependency on an external code generator and keeps the entire compilation pipeline in one project.
What currently works:
- Basic Arithmetic: All arithmetic operations work correctly
- Control Flow: Conditionals, loops, and branching
- Function Calls: Non-recursive function calls and returns
- Memory Operations: Stack and heap allocations
- Print Statements: Correct printf integration for both x86_64 and AArch64
- Performance: Competitive with systems languages in benchmarks
Lamina provides a powerful IRBuilder API that makes it straightforward to build compilers for other languages. A typical compiler using Lamina would construct a module, define functions and their basic blocks, emit instructions through the builder, and finally compile the module to assembly for the chosen target. The IRBuilder API provides methods for creating modules, functions, and blocks, for emitting instructions, and for returning values, as the example below shows.
Here's a simple example of creating a basic arithmetic function using the Lamina IRBuilder:
```rust
use lamina::ir::*;

fn create_add_function() -> Result<Module, Error> {
    // Create a new module
    let mut builder = IRBuilder::new();
    let module = builder.create_module("math_example");

    // Define function signature: fn add(a: i32, b: i32) -> i32
    let params = vec![
        FunctionParameter { ty: Type::I32, name: "a".to_string() },
        FunctionParameter { ty: Type::I32, name: "b".to_string() },
    ];

    // Create the function
    let function = builder.create_function("add", params, Type::I32);

    // Create the entry block and make it the current insertion point
    let entry_block = builder.create_block("entry");
    builder.set_current_block(entry_block);

    // Get the function parameters
    let param_a = builder.get_parameter(0);
    let param_b = builder.get_parameter(1);

    // Add the two parameters
    let result = builder.add_instruction(Instruction::BinaryOp {
        op: BinaryOperator::Add,
        ty: Type::I32,
        left: param_a,
        right: param_b,
    });

    // Return the result
    builder.add_return(result);

    Ok(module)
}

// Usage example:
fn main() {
    let module = create_add_function().unwrap();
    let compiled = module.compile_to_assembly().unwrap();
    println!("Generated assembly:\n{}", compiled);
}
```
This generates the equivalent Lamina IR:
```
fn @add(i32 %a, i32 %b) -> i32 {
  entry:
    %result = add.i32 %a, %b
    ret.i32 %result
}
```
Lamina demonstrates competitive performance in computational tasks. The following results are from our comprehensive 256×256 2D matrix multiplication benchmark (500 runs with statistical analysis):
| Language | Time (s) | Time Ratio (vs. Lamina) | Memory (MB) | Memory Ratio (vs. Lamina) |
|---|---|---|---|---|
| Lamina | 0.0372 | 1.00x (baseline) | 1.38 | 1.00x |
| Zig | 0.0021 | 0.06x | 0.50 | 0.36x |
| C | 0.0098 | 0.26x | 1.50 | 1.09x |
| C++ | 0.0101 | 0.27x | 3.49 | 2.54x |
| Go | 0.0134 | 0.36x | 1.60 | 1.16x |
| Nim | 0.0134 | 0.36x | 1.50 | 1.09x |
| Rust | 0.0176 | 0.47x | 1.91 | 1.39x |
| C# | 0.0333 | 0.90x | 30.39 | 22.10x |
| Java | 0.0431 | 1.16x | 42.93 | 31.22x |
| PHP | 0.5720 | 15.37x | 20.50 | 14.91x |
| Ruby | 1.4744 | 39.63x | 23.25 | 16.91x |
| Python | 2.2585 | 60.70x | 12.38 | 9.00x |
| JavaScript | 2.7995 | 75.24x | 53.20 | 38.69x |
Notes: ratios are time and memory relative to Lamina (1.00x = baseline); values below 1.00x indicate less time or memory than Lamina. For example, C's 0.0098 s is about 0.26x Lamina's 0.0372 s.
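For reference, the kind of kernel such a matrix-multiplication benchmark typically measures is the naive triple loop sketched below in Rust. This is an illustrative sketch only; the actual benchmark harness, timing methodology, and per-language implementations behind the table above are not reproduced here.

```rust
// Illustrative naive O(n^3) multiplication of two 256x256 matrices.
// Not the benchmark harness used for the table above.
fn matmul(n: usize, a: &[i64], b: &[i64], c: &mut [i64]) {
    for i in 0..n {
        for j in 0..n {
            let mut sum = 0i64;
            for k in 0..n {
                sum += a[i * n + k] * b[k * n + j];
            }
            c[i * n + j] = sum;
        }
    }
}

fn main() {
    let n = 256;
    let a = vec![2i64; n * n];
    let b = vec![3i64; n * n];
    let mut c = vec![0i64; n * n];
    matmul(n, &a, &b, &mut c);
    // Each output cell is 2 * 3 summed over 256 terms = 1536.
    println!("c[0][0] = {}", c[0]);
}
```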
Requirements:
- Rust 1.70+ (2024 edition)
- Clang/GCC for linking the generated assembly
- macOS, Linux, or Windows
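To use Lamina as a library (as in the IRBuilder example above), add it to a project's Cargo.toml; the version shown matches the release published on crates.io at the top of this page:

```toml
[dependencies]
lamina = "0.0.5"
```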
Supported compilation targets:
- `x86_64_linux`: Linux x86_64
- `x86_64_macos`: macOS x86_64
- `aarch64_macos`: macOS ARM64 (Apple Silicon)
- `aarch64_linux`: Linux ARM64
Lamina IR files consist of type declarations, global definitions, and function declarations:
```
# Type declaration
type @Vec2 = struct { x: f32, y: f32 }

# Global value
global @message: [5 x i8] = "hello"

# Function with annotations
@export
fn @add(i32 %a, i32 %b) -> i32 {
  entry:
    %sum = add.i32 %a, %b
    ret.i32 %sum
}

# Function with control flow
fn @conditional(i32 %x) -> i32 {
  entry:
    %is_pos = gt.i32 %x, 0
    br %is_pos, positive, negative
  positive:
    ret.i32 1
  negative:
    ret.i32 -1
}
```
```
# Stack allocation
%ptr = alloc.ptr.stack i32

# Heap allocation
%hptr = alloc.ptr.heap i32

# Load and store
store.i32 %ptr, %val
%loaded = load.i32 %ptr

# Optional heap deallocation
dealloc.heap %hptr

# Struct field access
%field_ptr = getfield.ptr %struct, 0
%value = load.f32 %field_ptr

# Array element access
%elem_ptr = getelem.ptr %array, %index
%value = load.i32 %elem_ptr
```
```
# Matrix multiplication simulation
fn @compute_cell(i64 %row, i64 %col, i64 %size, i64 %matrix_value) -> i64 {
  entry:
    %row_plus_1 = add.i64 %row, 1
    %col_plus_1 = add.i64 %col, 1
    %factor1 = mul.i64 %matrix_value, %size
    %factor2 = mul.i64 %row_plus_1, %col_plus_1
    %result = mul.i64 %factor1, %factor2
    ret.i64 %result
}

@export
fn @main() -> i64 {
  entry:
    %matrix_size = add.i64 262144, 0
    %matrix_value = add.i64 2, 0
    print %matrix_size
    %_ = call @simulate_matmul(%matrix_size, %matrix_value)
    ret.i64 0
}
```
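As a worked check of the formula in `@compute_cell`: with `%row` = 0, `%col` = 0, `%size` = 262144, and `%matrix_value` = 2, the function computes `%factor1` = 2 * 262144 = 524288 and `%factor2` = 1 * 1 = 1, returning 524288.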
Planned future work includes:
- Enhanced Optimizations: Loop analysis, auto-vectorization, and register allocation
- Additional Architectures: RISC-V and other ARM variants
- Language Integration: C, Rust, and custom language frontends
- JIT Compilation: Dynamic code execution support
- Debugging Tools: Enhanced debugging information and tooling
- Performance Improvements: Advanced optimization passes
- GPU Acceleration: Support for generating code targeting GPU architectures (e.g., CUDA, Vulkan compute shaders) to enable parallel execution of compute-intensive workloads
Please feel free to submit pull requests or open issues for bugs and feature requests.
Lamina - A Modern Compiler Backend
Built with Rust | Designed for Performance | Open Source