Kubernetes Patterns: Stateless Applications

Get hands-on experience by deploying a typical back-end/front-end sample application

This guide will show you how to set up a basic web application with database connectivity. In this case we will set up a PHP/Redis application that, when deleted, vanishes completely together with its saved data, which is what makes it a stateless application.

Kubernetes PHP guestbook infrastructure overview

Prerequisites

  • Running Kubernetes cluster
  • Internet connection
  • kubectl configured to connect to the cluster

The Kubernetes cluster in this guide is based on the guide All in One Kubernetes Cluster with kubeadm.
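
To confirm that kubectl can actually talk to the cluster before you start, a quick sanity check looks like this:

$ kubectl cluster-info
$ kubectl get nodes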

Step 1 – Prepare Deployment Resources

We will prepare all files needed for later deployment of the different components.

  • frontend-deployment.yaml
  • frontend-service.yaml
  • redis-master-deployment.yaml
  • redis-master-service.yaml
  • redis-slave-deployment.yaml
  • redis-slave-service.yaml

Create a directory where we will store all Kubernetes resources.

$ mkdir guestbook-example
$ cd guestbook-example

Download all the example resource files for the guestbook example.

$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/frontend-deployment.yaml
$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/frontend-service.yaml
$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/redis-master-deployment.yaml
$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/redis-master-service.yaml
$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/redis-slave-deployment.yaml
$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/redis-slave-service.yaml
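
After the downloads finish, a quick listing should show all six resource files:

$ ls
frontend-deployment.yaml    frontend-service.yaml        redis-master-deployment.yaml
redis-master-service.yaml   redis-slave-deployment.yaml  redis-slave-service.yaml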

And that’s it. We are now ready to start with the deployment of the different Kubernetes resources.

Step 2 – Deploy Backend Resources

Kubernetes PHP guestbook backend overview

Let’s set up a data store where our guestbook application can read and write its data.

Create a deployment with one pod for the redis-master database by using the redis-master-deployment.yaml file.

$ kubectl create -f redis-master-deployment.yaml
redis-master-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/google_containers/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Check that the redis-master pod exists by using the pods (or the short form po) resource type.

$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
redis-master-57cc594f67-zxk7b   1/1       Running   0          57s

Let’s check that the application in the pod is running without errors by looking at the logs for this pod.

$ kubectl logs redis-master-57cc594f67-zxk7b
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 2.8.19 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in stand alone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

[1] 11 Oct 10:31:26.852 # Server started, Redis version 2.8.19
[1] 11 Oct 10:31:26.852 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
[1] 11 Oct 10:31:26.852 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
[1] 11 Oct 10:31:26.852 * The server is now ready to accept connections on port 6379

Let’s make the pod reachable inside our Kubernetes cluster by defining a service for it. The service knows which pods to direct connections to by matching the selector section in the service resource to the labels section in the pod resource.

$ kubectl create -f redis-master-service.yaml
redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Check that the redis-master service has been created by using the service (or svc) resource type.

$ kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP    1d
redis-master   ClusterIP   10.97.242.250   <none>        6379/TCP   8m

You can check whether the service registered a pod by running describe on the service and looking at the Endpoints section.

$ kubectl describe svc redis-master
...
Endpoints:         10.244.0.5:6379
...

If you compare this IP to the IP of the pod, you will see that they are the same.

$ kubectl describe pod redis-master-57cc594f67-zxk7b
...
IP:             10.244.0.5
...
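
Alternatively, the wide output format lists the pod IPs directly, which makes them easy to compare against the service endpoints:

$ kubectl get pods -o wide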

Let’s add 2 redis slave replicas to make our data store highly available. The slave pods will be used for read operations.

$ kubectl create -f redis-slave-deployment.yaml
redis-slave-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

When listing our running pods, we see two new redis-slave pods.

$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
redis-master-57cc594f67-zxk7b   1/1       Running   0          28m
redis-slave-84845b8fd8-c4jhd    1/1       Running   0          1m
redis-slave-84845b8fd8-dbmhn    1/1       Running   0          1m

If we check the logs of both slave pods, we can see that each slave connected successfully to the redis-master pod by going through the redis-master service.

$ kubectl logs redis-slave-84845b8fd8-dbmhn

                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 3.0.3 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 6
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

6:S 11 Oct 10:57:44.302 # Server started, Redis version 3.0.3
6:S 11 Oct 10:57:44.303 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
6:S 11 Oct 10:57:44.303 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
6:S 11 Oct 10:57:44.303 * The server is now ready to accept connections on port 6379
6:S 11 Oct 10:57:44.303 * Connecting to MASTER redis-master:6379
6:S 11 Oct 10:57:44.394 * MASTER <-> SLAVE sync started
6:S 11 Oct 10:57:44.394 * Non blocking connect for SYNC fired the event.
6:S 11 Oct 10:57:44.394 * Master replied to PING, replication can continue...
6:S 11 Oct 10:57:44.394 * Partial resynchronization not possible (no cached master)
6:S 11 Oct 10:57:44.395 * Full resync from master: 0917f55d5e1341ae828086cbdd9672f82657f98d:1
6:S 11 Oct 10:57:44.472 * MASTER <-> SLAVE sync: receiving 18 bytes from master
6:S 11 Oct 10:57:44.472 * MASTER <-> SLAVE sync: Flushing old data
6:S 11 Oct 10:57:44.472 * MASTER <-> SLAVE sync: Loading DB in memory
6:S 11 Oct 10:57:44.472 * MASTER <-> SLAVE sync: Finished with success
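
If you want to double-check the replication state from the master’s side, you can ask Redis directly (this assumes the redis-cli binary is available inside the master image, which is the case for the standard Redis images):

$ kubectl exec redis-master-57cc594f67-zxk7b -- redis-cli info replication

The output should report role:master and two connected slaves.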

Now let’s make the slave pods reachable inside our Kubernetes cluster by attaching them to a service as well.

$ kubectl create -f redis-slave-service.yaml
redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Let’s check again if the service has been created successfully.

$ kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP    2d
redis-master   ClusterIP   10.97.242.250   <none>        6379/TCP   30m
redis-slave    ClusterIP   10.104.58.25    <none>        6379/TCP   2m

If we now look at which pods are connected to our redis-slave service, we can see the IPs of our two slave replicas.

$ kubectl describe svc/redis-slave
...
Endpoints:           10.244.0.6:6379,10.244.0.7:6379
...
$ kubectl describe pod redis-slave-84845b8fd8-dbmhn
...
IP:             10.244.0.6
...
$ kubectl describe pod redis-slave-84845b8fd8-c4jhd
...
IP:             10.244.0.7
...

Great, we got ourselves a highly available data store with a master and slave Redis database. Let’s see how to deploy a front-end application that makes use of the back end we just set up.

Step 3 – Deploy Front End Resources

Kubernetes PHP guestbook frontend overview

Now we get to the colorful part. We will be deploying a PHP guestbook front end that is already configured to connect to our redis-master service for write operations and to our redis-slave service for read operations. This is done by using cluster-internal DNS name resolution.
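
If you want to see this DNS resolution in action, you can resolve the service names from a short-lived test pod (a minimal sketch using the public busybox image; the pod name dns-test is arbitrary, and the pod is removed again as soon as the command exits):

$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup redis-master
$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup redis-slave

Both lookups should resolve to the cluster IPs of the respective services.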

Spawn 3 replicas of the PHP guestbook front end by running the prepared Kubernetes file resource.

$ kubectl create -f frontend-deployment.yaml
frontend-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Now let’s check for the spawned front end pods. Instead of trying to visually identify our pods in a long list, let’s just display pods identified by their label.

$ kubectl get pods -l app=guestbook -l tier=frontend
NAME                        READY     STATUS    RESTARTS   AGE
frontend-685d7ff496-2267v   1/1       Running   0          3m
frontend-685d7ff496-6lb2w   1/1       Running   0          3m
frontend-685d7ff496-899vl   1/1       Running   0          3m

The front end is at the moment only reachable inside the cluster via its dynamic pod IPs. To make it reachable from outside the cluster, we will again create a service linking to the pods, but this time the service will be exposed on the node's IP address by using the NodePort type, which suits our single-node setup.

$ kubectl create -f frontend-service.yaml
frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Let’s check for the service by using the label selector.

$ kubectl get svc -l app=guestbook -l tier=frontend
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)       AGE
frontend   NodePort   10.108.26.128   <none>       80:32145/TCP   3m

As this example is based on my guide All in One Kubernetes Cluster with kubeadm, we have to find the IP address of the machine running the cluster and the externally facing port of the frontend service.
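
A quick way to collect both values is to read the node IP from the wide node listing and the assigned node port straight from the service object:

$ kubectl get nodes -o wide
$ kubectl get svc frontend -o jsonpath='{.spec.ports[0].nodePort}'

In the example output above, the assigned port is 32145.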

Type the IP address and port into your browser and you will reach a simple interactive guestbook. When you submit text, the guestbook saves your entry by contacting redis-master, and when displaying what you saved, the guestbook contacts redis-slave inside your Kubernetes cluster.

PHP guestbook application saved entries

Step 4 – Testing Data Store Resilience

We set up a highly available data store; now we want to see how it holds up when we take down its pods.

Go to your guestbook application in your browser and type some entries to save to the database.

Now let’s delete the redis-master pod. When you reload your browser, you will still see your guestbook entries. The same happens when deleting a slave pod.

$ kubectl delete pod redis-master-57cc594f67-68bcr
$ kubectl delete pod redis-slave-84845b8fd8-8bwrl
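
Each Deployment immediately schedules a replacement for its deleted pod. You can watch the replacements come up by following the pods with the redis label:

$ kubectl get pods -l app=redis -w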

Scale down the redis slave deployment to 0 replicas so no redis slaves can be reached.

$ kubectl scale --replicas 0 deploy redis-slave

Your data still exists in the master, but the guestbook app reads the data through the redis-slave service, which now has no pods to direct the requests to.
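
You can confirm that the redis-slave service has nothing left to send traffic to by looking at its endpoints while the deployment is scaled to zero:

$ kubectl get endpoints redis-slave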

Scale the redis-slave deployment back up to its original 2 replicas. After reloading the page you will see your data again.

$ kubectl scale --replicas 2 deploy redis-slave

Now do the same for the redis-master deployment. While it is scaled down to 0 replicas, you won’t be able to add entries, as the app directs write operations through the redis-master service.

$ kubectl scale --replicas 0 deploy redis-master
$ kubectl scale --replicas 1 deploy redis-master

Let’s delete all redis pods by using the label selector.

$ kubectl delete pods -l app=redis

When we reload the page, all our data has vanished. Since neither the master nor the slaves persist their data outside the pods, recreating all of them leaves us with an empty data store, which is exactly what makes this application stateless.

Conclusion

You have now learned the basics of deploying a back-end/front-end application by setting up a basic master/slave database and a front-end PHP application. All pods can communicate with each other through the services you set up, and the front end is even reachable from outside the cluster, in our case by using the NodePort type.

You also got to test the database resiliency by scaling and deleting pods.
