Hit the ground running with an all-in-one Kubernetes cluster on your bare-metal machine, which you can extend further as your journey progresses.
We will be setting up a complete all-in-one Kubernetes cluster on a bare-metal server or in a virtual machine.
The kubeadm setup tool is an automated way of setting up a test-ready Kubernetes master and adding additional nodes to the cluster.
By starting this way we build a complete Kubernetes cluster without hassle. It also helps us understand the different components needed to serve various functions inside the cluster.
- CentOS 7+
- >4 GB RAM for later activities
- >100 GB disk space for later activities
- >2 CPU cores
- Internet connectivity
- firewalld disabled
- Self-resolving hostname (/etc/hosts)
- sudo user
- SELinux disabled
- Swap disabled
firewalld has to be disabled, as it would interfere with the firewall rules set up by kubeadm.
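If you are starting from a stock CentOS 7 install, a minimal sketch of applying these prerequisites could look like the following (run as root; the IP address here is only an example and has to match your machine):

# systemctl disable firewalld && systemctl stop firewalld
# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# swapoff -a
# sed -i '/ swap / s/^/#/' /etc/fstab
# echo "192.168.0.217 $(hostname)" >> /etc/hosts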
Step 1 – Preparing the machine
To get through the installation smoothly, we are going to prepare all the software and configuration needed up front.
Docker will be our container engine of choice, managed by Kubernetes. Install, enable, and start the docker service.
# yum install docker -y
# systemctl start docker
# systemctl enable docker
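Before moving on, it doesn't hurt to verify that the daemon actually came up, for example:

# systemctl is-active docker
# docker info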
kubectl is the command line client we will be using to connect to and manage our Kubernetes cluster.
Download the binary.
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
Make the binary executable.
$ chmod +x ./kubectl
Move the binary into a location on your PATH.
$ sudo mv ./kubectl /usr/local/bin/kubectl
Check that kubectl is found on your PATH by executing it without arguments. The help page will appear.
$ kubectl
kubectl controls the Kubernetes cluster manager.

Find more information at https://github.com/kubernetes/kubernetes.

Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects
  run-container  Run a particular image on the cluster. This command is deprecated, use "run" instead
...
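You can also print the client version to confirm which release was downloaded, for example:

$ kubectl version --client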
Let’s add the Kubernetes repository from Google to get the newest Kubernetes release. Create /etc/yum.repos.d/kubernetes.repo with the following content.
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
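With the file in place, a quick way to check that yum picks the new repository up:

# yum repolist enabled | grep kubernetes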
Now we can install the kubeadm setup tool and the kubelet component. The kubelet will be used by kubeadm to run kube-apiserver, kube-controller-manager and kube-scheduler as Kubernetes pods.
# yum install kubelet kubeadm -y
Start and enable kubelet.
# systemctl enable kubelet
# systemctl start kubelet
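Don't be alarmed if kubelet keeps restarting at this point: without a cluster configuration it waits in a crash loop until kubeadm init tells it what to do. You can watch its state with:

# systemctl status kubelet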
The foundation for our all-in-one Kubernetes cluster, consisting of docker, kubectl, kubelet and kubeadm, has been prepared, and we can move on to working with the kubeadm setup tool.
Step 2 – Set up an all-in-one cluster with kubeadm
In this section we will go through installing our Kubernetes cluster with the kubeadm setup tool. This may sound easy, but it does have its caveats.
Initialize the Kubernetes master, using the --pod-network-cidr option to specify the virtual network segment from which the Kubernetes pods will get their IPs assigned. As there can be a lot of pods in our network later, we specify a large network segment.
# kubeadm init --pod-network-cidr=10.244.0.0/16
The output from the kubeadm command gives you instructions on configuring the kubectl command line client to connect to your cluster, as well as the command for adding nodes to the master.
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 4a1230.fef1dc12223c6ab9 192.168.0.217:6443 --discovery-token-ca-cert-hash sha256:d9d4473a880af182b9dc67be4ed220ebf55e0564fd934a1760d5a50a24f9e349
Note down the join command, then set up the kubectl command line client in your user space.
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
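If you are working as root instead, you can also point kubectl at the admin kubeconfig directly rather than copying it:

# export KUBECONFIG=/etc/kubernetes/admin.conf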
Let’s check if everything is set up correctly by looking for the kube-dns pod in the kube-system namespace; at the same time this confirms that kubectl is able to reach our Kubernetes cluster. Note that kube-dns will stay in the Pending state until a pod network has been deployed.
$ kubectl get pods -n kube-system
NAME                                            READY     STATUS    RESTARTS   AGE
etcd-powerodit.localdomain                      1/1       Running   0          5m
kube-apiserver-powerodit.localdomain            1/1       Running   0          5m
kube-controller-manager-powerodit.localdomain   1/1       Running   0          5m
kube-dns-545bc4bfd4-czxvf                       0/3       Pending   0          6m
kube-proxy-svkj4                                1/1       Running   0          6m
kube-scheduler-powerodit.localdomain            1/1       Running   0          5m
Next, set up an isolated virtual network in which our pods and nodes can communicate with each other. We will be using flannel, deployed as a pod inside Kubernetes, as it is the most common virtual network setup.
Apply the prepared flannel installation resource from CoreOS. This resource will create the following:
- A ClusterRole and ClusterRoleBinding for role-based access control (RBAC).
- A service account for flannel to use.
- A ConfigMap containing both a CNI configuration and a flannel configuration. The network in the flannel configuration must match the --pod-network-cidr argument given to kubeadm. The choice of backend is also made here and defaults to VXLAN.
- A DaemonSet to deploy the flannel pod on each node. The pod has two containers:
  - the flannel daemon itself, and
  - an initContainer for deploying the CNI configuration to a location that the kubelet can read.
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Let’s check if flannel is running by looking at the pods and DaemonSets in the kube-system namespace.
$ kubectl get pods,ds -n kube-system
NAME                                               READY     STATUS    RESTARTS   AGE
po/etcd-powerodit.localdomain                      1/1       Running   0          18h
po/kube-apiserver-powerodit.localdomain            1/1       Running   0          18h
po/kube-controller-manager-powerodit.localdomain   1/1       Running   0          18h
po/kube-dns-545bc4bfd4-czxvf                       3/3       Running   0          18h
po/kube-flannel-ds-qxddn                           1/1       Running   0          17h
po/kube-proxy-svkj4                                1/1       Running   0          18h
po/kube-scheduler-powerodit.localdomain            1/1       Running   0          18h

NAME                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
ds/kube-flannel-ds   1         1         1         1            1           beta.kubernetes.io/arch=amd64   17h
ds/kube-proxy        1         1         1         1            1           <none>                          18h
By default, kubeadm doesn’t allow running Master and Node workloads on the same machine: it taints the master so that regular pods are not scheduled onto it. As we are setting up an all-in-one cluster on a single bare-metal machine, we will have to remove this taint from the master.
$ kubectl taint nodes --all node-role.kubernetes.io/master-
node "powerodit.localdomain" untainted
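To double-check that the taint is gone, inspect the node (the node name here is taken from the example output above):

$ kubectl describe node powerodit.localdomain | grep Taints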
Now we can use the join command noted down at the end of the kubeadm initialization to join additional nodes to the master.
# kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
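If you did not note the token down, you can recover it on the master; kubeadm can list the active tokens and, depending on your version, create a new one:

# kubeadm token list
# kubeadm token create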
We have successfully set up our Kubernetes Master and Node components and integrated a virtual network with flannel, so our Kubernetes pods and nodes can communicate with each other in an isolated environment. Being the commendable engineers we are, let's do a last check to be sure everything is working fine.
Step 3 – Check for a successful installation
To see if everything is working fine, we will set up a simple busybox pod and do a DNS lookup. This tests whether a pod can be spawned and whether the virtual network is working properly.
Define a busybox pod in a file called default-busybox.yml.
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
Spawn the pod in Kubernetes from the definition.
$ kubectl create -f default-busybox.yml
Check if the pod is running by listing all pods in the default namespace.
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          49s
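Should the pod stay in Pending or ContainerCreating instead, the events at the bottom of the describe output usually point to the culprit:

$ kubectl describe pod busybox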
Execute a DNS lookup from inside the busybox pod.
- You will see that busybox successfully got an IP assigned from the 10.244.0.0/16 network defined in kubeadm init.
- busybox resolves successfully, which tells us that the cluster DNS service is working.
$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      busybox
Address 1: 10.244.0.3 busybox
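Once you are satisfied, the test pod can be removed again:

$ kubectl delete pod busybox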
We now have a minimal all-in-one Kubernetes cluster running on a bare-metal machine. The great thing about this setup is the flexibility you get; for example, you can add new nodes for HA capabilities or extend the cluster with new services (add-ons).