Crates.io | kubernix |
lib.rs | kubernix |
version | 0.1.1 |
source | src |
created_at | 2019-10-03 16:47:47.126722 |
updated_at | 2019-10-03 17:24:03.554446 |
description | Kubernetes development cluster bootstrapping with Nix packages |
homepage | https://github.com/saschagrunert/kubernix |
repository | https://github.com/saschagrunert/kubernix |
max_upload_size | |
id | 169661 |
size | 210,286 |
This project aims to provide you with single-dependency, single-node Kubernetes clusters for local testing, experimenting, and development purposes.
Have you ever heard of Nix, the functional package manager? Don't worry if not, all you need to know is that it provides all the third-party dependencies for this project, pinned to dedicated and reproducible versions.
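As a quick illustration of that idea (not needed for KuberNix itself), Nix can drop you into a temporary shell where a pinned package is available. A minimal sketch using the hello package from nixpkgs:
$ nix-shell -p hello --run hello
Hello, world!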
KuberNix itself is the Rusty helper program that takes care of bootstrapping the Kubernetes cluster, passing the right configuration parameters around, and keeping track of the running processes.
The following technology stack is currently being used:
Application | Purpose | Version |
---|---|---|
Kubernetes | Cluster Orchestration | v1.15.4 |
CRI-O | Container Runtime | v1.15.2 |
runc | Container Runtime | v1.0.0-rc8 |
cri-tools | CRI Manipulation Tool | v1.15.0 |
CNI Plugins | Container Networking | v0.8.2 |
etcd | Database Backend | v3.3.13 |
CoreDNS | Kubernetes DNS Support | v1.6.4 |
Some other tools are not explicitly mentioned here, like CFSSL for certificate generation.
As already mentioned, there is only a single dependency needed to run this project: Nix. To set up Nix, simply run:
$ curl https://nixos.org/nix/install | sh
Please make sure to follow the instructions output by the script.
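To verify that Nix is available afterwards, check its version; the exact output depends on your installation, but it should look something like:
$ nix --version
nix (Nix) 2.3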
To bootstrap your first cluster, download one of the latest release binaries or build the application via:
$ make build-release
The binary should now be available at target/release/kubernix within the project directory.
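You can verify the build by asking the binary for its help text, which should list the available flags (assuming the usual --help flag of the CLI):
$ ./target/release/kubernix --help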
After you have retrieved the binary, start KuberNix by running it as root:
$ sudo kubernix
KuberNix will now make sure that the Nix environment gets set up correctly, download the needed binaries, and start the cluster. By default it will create a directory called kubernix-run in the current path, which contains all necessary data for the cluster.
If everything went well, you should be dropped into a new bash shell session, like this:
[INFO kubernix] Everything is up and running
[INFO kubernix] Spawning interactive shell
[INFO kubernix] Please be aware that the cluster gets destroyed if you exit the shell
>
Now you can access your cluster via tools like kubectl:
> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-85d84dd694-xz997 1/1 Running 0 102s
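Any other kubectl subcommand works as usual, too, for example to verify the overall cluster state:
> kubectl get nodes
> kubectl cluster-info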
The log files for the different running components are now available within the current working directory, too:
> ls -1
crio.log
etcd.log
kube-apiserver.log
kube-controller-manager.log
kubelet.log
kube-proxy.log
kube-scheduler.log
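This makes it easy to follow a single component while testing, for example the API server:
> tail -f kube-apiserver.log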
If you want to spawn an additional shell session, simply run kubernix shell in the same directory as the initial bootstrap.
$ sudo kubernix shell
[INFO kubernix] Spawning new kubernix shell in 'kubernix-run'
> kubectl run --generator=run-pod/v1 --image=alpine -it alpine sh
If you don't see a command prompt, try pressing enter.
/ #
This means that you can spawn as many shells as you want to.
The whole cluster gets automatically destroyed if you exit the bash session from the initial process:
> exit
[INFO kubernix] Cleaning up
Please note that the directory where all the data is stored is not removed when KuberNix exits. This means that you're still able to access the log and configuration files for further processing.
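For example, you could still browse the leftover files after the cluster has been torn down (assuming the default root directory kubernix-run):
$ ls kubernix-run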
KuberNix has some configuration options, which are currently:
CLI argument | Description | Default |
---|---|---|
-r, --root | Path where all the runtime data is stored | kubernix-run |
-l, --log-level | Logging verbosity | info |
-c, --crio-cidr | CIDR used for the CRI-O CNI network | 10.100.0.0/16 |
-u, --cluster-cidr | CIDR used for the whole cluster network | 10.200.0.0/16 |
-s, --service-cidr | CIDR used for the service network | 10.50.0.0/24 |
Please ensure that the CIDRs do not overlap with existing local networks and that your setup has access to the internet.
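A hypothetical invocation that moves the runtime data elsewhere, raises the logging verbosity, and picks a different service network could look like this (all values are just examples):
$ sudo kubernix --root /tmp/kubernix-run --log-level debug --service-cidr 10.60.0.0/24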
You want to contribute to this project? Wow, thanks! Then please just fork it and send me a pull request.