kenv

version: 0.4.3
description: Local Kubernetes Environment with KinD
homepage: https://gitlab.com/roku-labs/kenv
repository: https://gitlab.com/roku-labs/kenv
author: Roman Kuznetsov (kuznero)

README

KEnv - Local Kubernetes Environment with KinD

Kenv allows you to spin up local Kubernetes cluster(s) (thanks to KinD) and install pre-configured applications for faster development/testing experiments (Helm charts organized into namespaces/releases, and Kustomize-based applications). It can also be used to run smoke tests in CI/CD pipelines.

Prerequisites

Kenv uses kubectl, kind, helm, and also docker (the Docker engine can run elsewhere if necessary, but the Docker CLI must be available). Kenv can rely on the tools that are already present in the system, or you can run kenv tools update to install the very latest versions independently of whatever is already installed.
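A quick way to check that the prerequisites are on your PATH before starting (a minimal shell sketch; only kenv tools update itself comes from Kenv):

# verify that each required CLI is available
for tool in kubectl kind helm docker; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done

# alternatively, let kenv install its own latest copies of the tools
kenv tools update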

Installation

When all the prerequisites are in place, the only thing left to get started is to install Kenv itself. This can be done either by cloning this repository and running cargo install, or simply by downloading the statically linked binary available on the releases page.
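For the cargo route, something like the following should work (a sketch; cargo install kenv assumes you want the published crate rather than a local build):

# build and install from a local clone of the repository
git clone https://gitlab.com/roku-labs/kenv.git
cd kenv
cargo install --path .

# or install straight from crates.io
cargo install kenv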

Repository structure

But before jumping into spinning up and tearing down local Kubernetes clusters with Kenv, let's explore the structure of an application repository. The repository root contains only two folders: charts and kustomize. The charts folder contains one folder per Kubernetes namespace, and under each namespace folder there are folders with Helm charts (each folder name is used as the Helm release name). The kustomize folder is a lot simpler: it consists of a list of folders, each one a distinct application that Kenv will try to install. Briefly, the structure looks like this:

.
├── charts/                    # helm-based pre-configured applications
│   ├── system/                # ├── namespace (installed by default)
│   │   └── .../               # │   └── chart folder with a name used as a release name
│   ├── monitoring/            # ├── namespace (optional)
│   │   └── .../               # │   └── chart folder with a name used as a release name
│   └── ...                    # └── ... other optional namespaces
└── kustomize/                 # kustomize-based pre-configured applications
    └── ...                    # └── ... applications

All the pre-configured applications can be found under charts/ (for Helm-based applications) and under kustomize/ (for Kustomize-based applications). Under charts/ you will find folders that correspond to the namespaces that will be created when you install their applications.

There is one special set of pre-configured applications that is installed unconditionally in all clusters: the one under the system namespace. This namespace should contain workloads that are typically considered prerequisites for many other services: Certificate Manager with its CRDs, the Nginx Ingress Controller, etc.

Other optional namespaces under charts/ provide a way to construct developer/tester environments according to specific needs. These optional namespaces can be switched on when needed.
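As a sketch, adding a new pre-configured release could look like this (the monitoring/prometheus names are hypothetical, and this assumes each release folder holds an ordinary Helm chart; see example_repo/ for the authoritative layout):

# the namespace folder and the release folder are both plain directories;
# the folder name "prometheus" would become the Helm release name
mkdir -p charts/monitoring/prometheus
# place the chart contents (Chart.yaml, values.yaml, templates/) inside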

An example repository can be found and inspected under example_repo/.

Spinning up new Kubernetes cluster

The main way to control local kind clusters is the kenv cluster up command. Let's explore kenv cluster up --help a bit more, as it is the most important tool in this project:

kenv-clusters-up
Starts up Kubernetes cluster

USAGE:
    kenv clusters up [FLAGS] [OPTIONS]

FLAGS:
    -c, --enable-calico-cni        Enables Calico CNI
    -i, --enable-image-registry    Enables container image registry
    -h, --help                     Prints help information

OPTIONS:
    -n, --cluster-name <name>                                The name of the cluster to spin up [default: local]
    -v, --version <version>                                  The kubernetes version [default: latest]
    -w, --workers <workers>                                  The number of worker nodes [default: 0]
        --image-registry-port <image-registry-port>
            The port where container image registry is exposed [default: 5000]

        --image-registry-ui-port <image-registry-ui-port>
            The port where container image registry UI is exposed [default: 5001]

    -p, --expose-port <expose-port>...                       The port to expose with format host_port:container_port
    -m, --mount-volume <mount-volume>...                     The volume mount with format host_path:container_path
        --custom-registry <custom-registry>                  Custom container registry for kindest/node images

The kenv cluster up command is designed to allow multiple co-existing local Kubernetes clusters to be spun up side by side. Let's consider some examples.

Starting a cluster with default parameters

Run the following command:

kenv cluster up

Output should look like this:

############################################################################################
# >>>                                                      Starting up [local] cluster <<< #
############################################################################################
#  >> Starting up registry                            | skip
#  >> Connect registry to kind network                | skip
#  >> Starting up registry UI                         | skip
#  >> Starting up cluster                             |
#     Ensuring node image (kindest/node:v1.21.2)      | ✓
#     Preparing nodes                                 | ✓
#     Writing configuration                           | ✓
#     Starting control-plane                          | ✓
#     Installing CNI                                  | ✓
#     Installing StorageClass                         | ✓
#  >> Starting all pods                              done
#-----------------------------------------------------+------------------------------------#

Note that it is perfectly OK to re-run kenv cluster up multiple times:

############################################################################################
# >>>                                                      Starting up [local] cluster <<< #
############################################################################################
#  >> Starting up registry                            | skip
#  >> Connect registry to kind network                | skip
#  >> Starting up registry UI                         | skip
#  >> Starting up cluster                             | already exists
#-----------------------------------------------------+------------------------------------#

Here is what this new local cluster looks like (local is the default name of a cluster if it was not set explicitly to something else):

# the API server port (6443) is published on a random localhost port
$ docker ps | grep local | sed 's/   /\n/g' | grep 6443
127.0.0.1:35435->6443/tcp

# new "kind-local" context is registered
$ kubectl config get-contexts | grep local
*         kind-local   kind-local   kind-local

# there is only one node powering "local" cluster
$ kubectl --context kind-local get nodes
NAME                  STATUS   ROLES                  AGE     VERSION
local-control-plane   Ready    control-plane,master   3m31s   v1.21.2

# the following pods are running
$ kubectl --context kind-local get pods -A
NAMESPACE            NAME                                          READY   STATUS    RESTARTS   AGE
kube-system          coredns-558bd4d5db-b27xf                      1/1     Running   0          3m26s
kube-system          coredns-558bd4d5db-z747f                      1/1     Running   0          3m26s
kube-system          etcd-local-control-plane                      1/1     Running   0          3m43s
kube-system          kindnet-5rjsl                                 1/1     Running   0          3m26s
kube-system          kube-apiserver-local-control-plane            1/1     Running   0          3m42s
kube-system          kube-controller-manager-local-control-plane   1/1     Running   0          3m42s
kube-system          kube-proxy-5q6ch                              1/1     Running   0          3m26s
kube-system          kube-scheduler-local-control-plane            1/1     Running   0          3m41s
local-path-storage   local-path-provisioner-85494db59d-hxdff       1/1     Running   0          3m26s

Starting a cluster with custom parameters

Run the following command:

kenv cluster up --cluster-name wide \
  --workers 3 \
  --enable-calico-cni \
  --enable-image-registry \
  -p 7080:31080 -p 7443:31443 \
  -m $(pwd)/_data:/data

This time it takes longer to complete, and its output should look like this:

############################################################################################
# >>>                                                       Starting up [wide] cluster <<< #
############################################################################################
#  >> Starting up registry                            | done
#  >> Connect registry to kind network                | done
#  >> Starting up registry UI                         | done
#  >> Starting up cluster                             |
#     Ensuring node image (kindest/node:v1.21.2)      | ✓
#     Preparing nodes                                 | ✓
#     Writing configuration                           | ✓
#     Starting control-plane                          | ✓
#     Installing StorageClass                         | ✓
#     Joining worker nodes                            | ✓
#  >> Installing Calico CNI
#     Release "calico" does not exist. Installing it now.
#     W0801 22:03:10.656968   64675 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
#     W0801 22:03:10.762030   64675 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
#     NAME: calico
#     LAST DEPLOYED: Sun Aug  1 22:03:10 2021
#     NAMESPACE: default
#     STATUS: deployed
#     REVISION: 1
#     TEST SUITE: None
#  >> Starting Calico CNI                            done
#  >> Starting all pods                              done
#-----------------------------------------------------+------------------------------------#

As mentioned earlier, it is entirely possible to run multiple local Kubernetes clusters. This run started a cluster with the following properties:

  • Cluster name is wide
  • Cluster is constructed with 1 master node and 3 worker nodes (instead of the single master node in the local cluster)
  • Calico CNI is used instead of the default Kindnet CNI (useful when you need to test network policies locally, etc.)
  • Ports 7080 and 7443 will expose node ports 31080 and 31443
  • Extra volume mount $(pwd)/_data will be available at /data inside the wide cluster
  • An image registry was set up as a Docker container together with its UI (hosted by default on localhost:5000 and localhost:5001 respectively)

Using Calico CNI instead of the built-in Kindnet CNI can be handy when you need to implement and test network policies. And having an image registry that is available on your host as well as trusted inside the Kubernetes cluster makes it possible to set up local CI/CD-style pipelines.
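For instance, pushing an image through the local registry might look like this (a sketch assuming the default --image-registry-port of 5000; the myapp image name is made up):

# build and push to the registry running next to the cluster
docker build -t localhost:5000/myapp:dev .
docker push localhost:5000/myapp:dev

# the same image reference can then be used from inside the cluster
kubectl --context kind-wide run myapp --image=localhost:5000/myapp:dev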

Here is what this new wide cluster looks like:

# ports 7080 and 7443 are exposed to localhost in addition to the API server port
$ docker ps | grep wide | sed 's/   /\n/g' | grep 6443
127.0.0.1:39675->6443/tcp, 0.0.0.0:7080->31080/tcp, 0.0.0.0:7443->31443/tcp

# new "kind-wide" context is registered
$ kubectl config get-contexts | grep wide
*         kind-wide    kind-wide    kind-wide

# there are more nodes powering "wide" cluster
$ kubectl --context kind-wide get nodes
NAME                 STATUS   ROLES                  AGE     VERSION
wide-control-plane   Ready    control-plane,master   7m49s   v1.21.2
wide-worker          Ready    <none>                 7m12s   v1.21.2
wide-worker2         Ready    <none>                 7m12s   v1.21.2
wide-worker3         Ready    <none>                 7m12s   v1.21.2

# the following pods are running
$ kubectl --context kind-wide get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
calico-system        calico-kube-controllers-6bf8c44b7b-qsjgl     1/1     Running   0          7m7s
calico-system        calico-node-2vmws                            1/1     Running   0          7m7s
calico-system        calico-node-pkk4h                            1/1     Running   0          7m7s
calico-system        calico-node-wnsnt                            1/1     Running   0          7m7s
calico-system        calico-node-z5p5s                            1/1     Running   0          7m7s
calico-system        calico-typha-6d5c659854-54kmk                1/1     Running   0          7m
calico-system        calico-typha-6d5c659854-lmkrp                1/1     Running   0          7m7s
calico-system        calico-typha-6d5c659854-swmqq                1/1     Running   0          7m
kube-system          coredns-558bd4d5db-94774                     1/1     Running   0          7m49s
kube-system          coredns-558bd4d5db-lfw9l                     1/1     Running   0          7m49s
kube-system          etcd-wide-control-plane                      1/1     Running   0          8m4s
kube-system          kube-apiserver-wide-control-plane            1/1     Running   0          8m4s
kube-system          kube-controller-manager-wide-control-plane   1/1     Running   0          8m5s
kube-system          kube-proxy-5cst5                             1/1     Running   0          7m31s
kube-system          kube-proxy-bddjr                             1/1     Running   0          7m31s
kube-system          kube-proxy-f5trr                             1/1     Running   0          7m49s
kube-system          kube-proxy-jbt9f                             1/1     Running   0          7m31s
kube-system          kube-scheduler-wide-control-plane            1/1     Running   0          8m5s
local-path-storage   local-path-provisioner-85494db59d-gs98q      1/1     Running   0          7m49s
tigera-operator      tigera-operator-9c5c8797c-479zq              1/1     Running   0          7m28s

Installing applications

In order to install some pre-configured applications from the example application repository, run:

$ kenv apps rollout --path $(pwd)/example_repo --extra-namespace monitoring --skip-if-already-installed

############################################################################################
# >>>                                      Rolling out applications to [local] cluster <<< #
############################################################################################
#  >> Validating cluster                              | valid
#  >> Validating repository                           | valid
#  >> Rolling out [system] releases                   |
#   1 cert-manager                                    | ...
#     Release "cert-manager" does not exist. Installing it now.
#     NAME: cert-manager
#     LAST DEPLOYED: Sun Aug  1 22:13:44 2021
#     NAMESPACE: system
#     STATUS: deployed
#     REVISION: 1
#     TEST SUITE: None
#   2 ingress-controller                              | ...
#     Release "ingress-controller" does not exist. Installing it now.
#     NAME: ingress-controller
#     LAST DEPLOYED: Sun Aug  1 22:14:24 2021
#     NAMESPACE: system
#     STATUS: deployed
#     REVISION: 1
#     TEST SUITE: None
#  >> Rolling out [monitoring] releases               | skipped
#-----------------------------------------------------+------------------------------------#

Can you guess which cluster it got installed into, local or wide? :D The right answer is local, as that is the default cluster name.

There is a special flag --skip-if-already-installed (or -s for short) that was passed to ensure that existing releases are not processed more than once. This allows faster iteration when you are only interested in specific namespaces.

If you run the same command again, it completes quickly:

$ kenv apps rollout --path $(pwd)/example_repo --extra-namespace monitoring --skip-if-already-installed

############################################################################################
# >>>                                      Rolling out applications to [local] cluster <<< #
############################################################################################
#  >> Validating cluster                              | valid
#  >> Validating repository                           | valid
#  >> Rolling out [system] releases                   |
#   1 cert-manager                                    | exists
#   2 ingress-controller                              | exists
#  >> Rolling out [monitoring] releases               | skipped
#-----------------------------------------------------+------------------------------------#

Listing applications

To list applications (Helm releases) installed across all namespaces, just run the following:

$ kenv apps list

############################################################################################
# >>>                                          Listing applications in [local] cluster <<< #
############################################################################################
#  >> default                                         | ...
#  >> kube-node-lease                                 | ...
#  >> kube-public                                     | ...
#  >> kube-system                                     | ...
#  >> local-path-storage                              | ...
#  >> system                                          | ...
#   1 cert-manager                                    | deployed (rev: 1)
#   2 ingress-controller                              | deployed (rev: 1)
#-----------------------------------------------------+------------------------------------#

It is also possible to specify another cluster or a specific namespace if necessary. For more information, refer to kenv apps list --help.
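Since these are ordinary Helm releases, you can also cross-check them with Helm itself (a sketch, assuming the default kind-local context):

# list releases in the system namespace using Helm directly
helm --kube-context kind-local list -n system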

Stopping clusters

Stopping clusters is as trivial as starting them; just make sure you specify the correct cluster name:

$ kenv cluster down -n local

############################################################################################
# >>>                                                    Shutting down [local] cluster <<< #
############################################################################################
#  >> Shutting down cluster                           | done
#  >> Shutting down registry UI                       | already down
#  >> Shutting down registry                          | already down
#-----------------------------------------------------+------------------------------------#

$ kenv cluster down -n wide

############################################################################################
# >>>                                                     Shutting down [wide] cluster <<< #
############################################################################################
#  >> Shutting down cluster                           | done
#  >> Shutting down registry UI                       | done
#  >> Shutting down registry                          | done
#-----------------------------------------------------+------------------------------------#
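To double-check that nothing is left running, you can ask kind directly (this uses the stock kind CLI, not kenv):

# should print no cluster names once everything is down
kind get clusters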