Deploy an application in a Kubernetes cluster on Linux on IBM Z

In this post, we are going to install Kubernetes and deploy a Docker image (in this case NGINX) in a Kubernetes cluster on IBM Z. Before we start, I’d like to point out that the installation is nothing special: you could just as well follow other Kubernetes tutorials. That is the beauty of using Open Source projects and Linux on IBM Z; it does not require deep Mainframe skills. I’ll point out the steps that are Z specific.

This post assumes that you are familiar with Kubernetes concepts. If not, a great place to start is the official documentation.

Let’s start. First of all, you need at least 3 systems (LPAR, z/VM or KVM). One system is used as the master and the others as workers.

My system information:

$ cat /etc/os-release 
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
$ uname -rm
4.4.0-138-generic s390x

Installation

For the installation, we need three binaries: kubectl (only on the master), plus kubeadm and kubelet on all the nodes. The installation of the Kubernetes binaries on Ubuntu, Debian, Red Hat and Fedora is described on the Kubernetes installation page. In this case, I’m installing on an Ubuntu system:

$ sudo apt-get update && sudo apt-get install -y apt-transport-https
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubectl kubeadm kubelet
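
Optionally, you can hold the packages at their current version so that a routine apt upgrade does not unexpectedly move your cluster to a newer Kubernetes release; this mirrors the advice in the official installation docs:

$ sudo apt-mark hold kubectl kubeadm kubelet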

Docker needs to be installed on your system. You can find various methods in the official Docker documentation; however, for Linux on IBM Z it only covers Ubuntu. An alternative is to install the Docker static binaries. For this example, I simply installed Docker from the package manager:

$ sudo apt-get install docker.io
$ docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.6.2
Git commit: f5ec1e2
Built: Thu Jul 5 23:05:46 2018
OS/Arch: linux/s390x

Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.6.2
Git commit: f5ec1e2
Built: Thu Jul 5 23:05:46 2018
OS/Arch: linux/s390x
Experimental: false
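
If your distribution does not ship a Docker package, a minimal sketch of the static-binary route mentioned above could look like the following. The exact version and archive name here are assumptions; check what is actually published for s390x on download.docker.com:

$ curl -fsSLO https://download.docker.com/linux/static/stable/s390x/docker-17.03.2-ce.tgz
$ tar xzvf docker-17.03.2-ce.tgz
$ sudo cp docker/* /usr/local/bin/
$ sudo dockerd &   # for real use, create a systemd service instead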

You also need to disable swap:

$ sudo swapoff -a
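
Note that swapoff -a only lasts until the next reboot. To make the change persistent, also comment out the swap entries in /etc/fstab (assuming your swap is configured there):

$ sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab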

Cluster initialization

Now, we are ready to initialize the cluster using kubeadm. On the master node:

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[...]
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 172.18.26.12:6443 --token 4i6549.1dvsn065phcwz0s0 --discovery-token-ca-cert-hash sha256:e15ab64c82263c1018124a2439a6c385fcba1da6b8062548ae7f5dce50f3755e

Don’t forget to copy the admin configuration as suggested by the initialization output:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Otherwise you could experience an error like this:

$ kubectl version
[....]
The connection to the server localhost:8080 was refused - did you specify the right host or port?

This error occurs because kubectl does not point to the correct configuration. After setting the configuration correctly, I am able to query the cluster version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/s390x"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/s390x"}
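
As an additional sanity check, you can ask the API server where the control plane is running:

$ kubectl cluster-info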


Set up the network plugin

Networking in Kubernetes is a complex topic, because it covers communication between nodes, between containers, and between containers and nodes. For more details, see the Kubernetes networking documentation. There are various solutions that implement the Kubernetes networking model. For this example, flannel has been chosen: at the time this post was written, it was the only network plugin with an s390x image available on Docker Hub. Other plugins can be used, but they need to be built from source if no image is provided.

We apply the flannel network plugin. This step is Z specific: the upstream manifest references amd64 images, so we replace the architecture with s390x before applying it:

$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ sed -i 's/amd64/s390x/g' kube-flannel.yml 
$ kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-s390x created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x unchanged
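
You can double-check that the substitution took effect by listing the unique image references in the manifest; they should all point to architecture-specific tags, including s390x:

$ grep 'image:' kube-flannel.yml | sort -u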

Verify that all the Kubernetes pods are running successfully:

$ kubectl get pods --all-namespaces=true 
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-h6wc5           1/1     Running   0          21m
kube-system   coredns-576cbf47c7-tl9vd           1/1     Running   0          21m
kube-system   etcd-p2330012                      1/1     Running   0          21m
kube-system   kube-apiserver-p2330012            1/1     Running   0          20m
kube-system   kube-controller-manager-p2330012   1/1     Running   0          20m
kube-system   kube-flannel-ds-s390x-dht96        1/1     Running   0          2m29s
kube-system   kube-proxy-zvszk                   1/1     Running   0          21m
kube-system   kube-scheduler-p2330012            1/1     Running   0          20m

Register worker nodes

As reported in the output of kubeadm init, we can now register the worker nodes using the token generated during the initialization. On the worker node:

kubeadm join 172.18.26.12:6443 --token 4i6549.1dvsn065phcwz0s0 --discovery-token-ca-cert-hash sha256:e15ab64c82263c1018124a2439a6c385fcba1da6b8062548ae7f5dce50f3755e
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[discovery] Trying to connect to API Server "172.18.26.12:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.18.26.12:6443"
[discovery] Requesting info from "https://172.18.26.12:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.18.26.12:6443"
[discovery] Successfully established connection with API Server "172.18.26.12:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "p2330013" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Verification on the master:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
p2330012   Ready    master   25m   v1.12.2
p2330013   Ready    <none>   67s   v1.12.2

In the same way, you can add as many workers as you like.
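
Note that the bootstrap token printed by kubeadm init expires after 24 hours by default. If you add a worker later and the token has expired, generate a fresh join command on the master:

$ sudo kubeadm token create --print-join-command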

Deployment of an application

We are ready to deploy an application. I follow this example with one small modification: for the update step, I change the image version from 1.14 to 1.15, because both versions of the nginx image are already multi-arch. You need to run the commands from the master using kubectl:

nginx.yaml

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.0
        ports:
        - containerPort: 80

You can now apply the yaml file, and 2 replicas of the nginx image will be deployed in the cluster:

$ kubectl apply -f nginx.yaml
deployment.apps/nginx-deployment created
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-6fbc6fc87-9prqb   1/1     Running   0          26s
nginx-deployment-6fbc6fc87-trdlh   1/1     Running   0          26s

We can now update the image version from 1.14.0 to 1.15.0 in nginx.yaml and apply the change:

$ kubectl apply -f nginx.yaml 
deployment.apps/nginx-deployment configured
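
You can watch the rolling update as it progresses:

$ kubectl rollout status deployment/nginx-deployment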

$ kubectl get pods -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-74c64b9554-npzpd   1/1     Running   0          11s
nginx-deployment-74c64b9554-wfspn   1/1     Running   0          29s

The changes have been applied and the containers were immediately redeployed.
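
To confirm that the deployment is now running the new image, inspect it directly:

$ kubectl describe deployment nginx-deployment | grep Image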

Finally, you can delete the deployment:

$ kubectl delete -f nginx.yaml 
deployment.apps "nginx-deployment" deleted
# After a while
$ kubectl get pods -l app=nginx
No resources found.

We have installed Kubernetes and deployed an NGINX application on a Kubernetes cluster on Linux on IBM Z. You can follow any Kubernetes tutorial you want for more complex scenarios; you just need to check whether the container images you want to deploy are multi-arch or s390x images.

Have fun deploying your applications on the mainframe!
