| Crates.io | regioncam |
| lib.rs | regioncam |
| version | 0.5.2 |
| created_at | 2025-09-19 10:12:52.280148+00 |
| updated_at | 2025-09-19 11:03:48.097976+00 |
| description | Visualize linear regions in neural networks |
| homepage | |
| repository | https://github.com/twanvl/regioncam/ |
| max_upload_size | |
| id | 1846229 |
| size | 1,883,795 |
Regioncam is a Rust library and Python package for visualizing linear regions in a neural network. Regioncam works by tracking the output of every neural network layer, together with the regions of the input space on which these outputs are linear. The inputs live in a 1- or 2-dimensional space.
```rust
use std::fs::File;
use rand::prelude::*;
use regioncam::{NNBuilder, Regioncam, RenderOptions, nn::Linear};

fn main() -> std::io::Result<()> {
    // Create a regioncam object, with the region [-1..1]^2
    let mut rc = Regioncam::square(1.0);
    // Apply a linear layer
    let mut rng = SmallRng::seed_from_u64(42);
    let layer1 = Linear::new_uniform(2, 30, &mut rng);
    rc.add(&layer1);
    // Apply a relu activation function
    rc.relu();
    // Write to an svg file
    let render_options = RenderOptions::default();
    let mut file = File::create("example.svg")?;
    rc.write_svg(&render_options, &mut file)?;
    // Inspect regions
    println!("Created {} regions", rc.num_faces());
    println!("Face with the most edges has {} edges",
        rc.faces().map(|face| rc.vertices_on_face(face).count()).max().unwrap()
    );
    Ok(())
}
```
This produces the output:

```text
Created 169 regions
Face with the most edges has 7 edges
```
It also creates an SVG image of the linear regions. Since this is a randomly initialized neural network, the linear regions are placed randomly.
The Python wrapper is intended to be the main way to use Regioncam; see regioncam-python for the details.
Regioncam is similar to Splinecam, but uses a different algorithm.
Regioncam maintains a half-edge data structure of linear regions, which is updated whenever a piecewise linear activation function is applied. It also stores the activations $x^{(l)}$ of every vertex at every layer. The activations over a face are stored as a matrix $F \in \mathbb{R}^{3 \times D}$, where the activation of an input point $u = (u_1, u_2)$ inside that face is given by $x^{(l)} = f(u) = (u_1, u_2, 1)\,F$.
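Evaluating such a face matrix is just a single vector–matrix product. The sketch below assumes `F` is stored as an `ndarray` matrix; the function name and representation are illustrative, not Regioncam's actual API:

```rust
use ndarray::{array, Array1, Array2};

/// Evaluate the affine map stored for a face: x = (u1, u2, 1) F.
/// `f` is the 3×D face matrix, `u` a point inside the face.
/// (Illustrative sketch; Regioncam's internal types may differ.)
fn face_activation(f: &Array2<f32>, u: (f32, f32)) -> Array1<f32> {
    array![u.0, u.1, 1.0].dot(f)
}

fn main() {
    // A toy 3×2 face matrix: D = 2 output dimensions.
    let f = array![[1.0, 0.0],
                   [0.0, 1.0],
                   [0.5, -0.5]];
    println!("{}", face_activation(&f, (0.25, -0.75))); // [0.75, -1.25]
}
```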
A ReLU activation is applied one dimension at a time: each face is split along the line where that dimension's activation crosses zero, and the activation is set to zero on the side where it is negative.
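Because activations are linear within a face, the split point on an edge can be found by linear interpolation between the endpoint values. A minimal sketch, using a hypothetical helper that is not part of Regioncam's API:

```rust
/// Parameter t ∈ (0, 1) at which a single activation dimension crosses
/// zero along an edge, given its values `a` and `b` at the two
/// endpoints; `None` if the sign does not change. The value along the
/// edge is a + t * (b - a), which is zero at t = a / (a - b).
/// (Hypothetical helper, not Regioncam's API.)
fn zero_crossing(a: f32, b: f32) -> Option<f32> {
    if (a < 0.0) != (b < 0.0) {
        Some(a / (a - b))
    } else {
        None
    }
}
```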
Max pooling activations are handled similarly: a face is split along the lines where the maximum switches from one input dimension to another, i.e. where two competing activations are equal.
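Since the maximum switches exactly where two competing dimensions are equal, and the difference of two linear functions is itself linear, the max-pool case reduces to the same zero-crossing test. A hedged sketch, reusing the hypothetical `zero_crossing` from above:

```rust
/// Split point along an edge where pooled dimension `i` stops winning
/// against dimension `j`: the difference x_i - x_j is linear on the
/// edge, so this is a zero crossing of that difference.
/// (Hypothetical helper; `a` and `b` are endpoint activation vectors.)
fn pool_crossing(a: &[f32], b: &[f32], i: usize, j: usize) -> Option<f32> {
    zero_crossing(a[i] - a[j], b[i] - b[j])
}
```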
When rendering images, the color of each face is based on a hash of its activation pattern, so a region keeps the same color as long as its activation pattern is unchanged. This means that the colors remain stable during training.
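A face's activation pattern identifies a linear region independently of the face's geometry, so hashing it yields a color that survives changes to the region's shape. A minimal sketch of such a scheme (not Regioncam's actual coloring code):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a stable RGB color from a face's activation sign pattern.
/// The same pattern always hashes to the same color, so a region's
/// color is stable across training steps even as its shape changes.
/// (Illustrative sketch, not Regioncam's actual coloring scheme.)
fn pattern_color(pattern: &[bool]) -> (u8, u8, u8) {
    let mut hasher = DefaultHasher::new();
    pattern.hash(&mut hasher);
    let h = hasher.finish();
    ((h >> 16) as u8, (h >> 8) as u8, h as u8)
}
```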