Step 5 - Application Pods
We are going to deploy a demonstration web app using an image downloaded from hub.docker.com. We will demonstrate using HAProxy in front of the web application on Kubernetes.
Overview:
- Nginx and PHP-FPM on Alpine Linux minimalist image from https://github.com/TrafeX/docker-php-nginx
- This image has the mysqli driver enabled, but the PDO driver for MySQL is disabled
- PDO is the modern way to interface with the various databases out there with minimal coding changes
- We created our own image with PDO support: https://github.com/doritoes/docker-php-nginx-app-server
- This image installs php83-pdo_mysql and enables the pdo_mysql extension
- Our simple demonstration app is built on the base image without PDO support, and will use mysqli
- This application does not use HTTPS. It is HTTP only.
- Obviously don't use this in production, as it leaks credentials!
- HOWEVER, many web applications are rolled out these days without HTTPS because they are only accessible through a load balancer or reverse proxy that has the TLS certificate installed. TLS termination is handled by the load balancer, meaning the actual pods don't have the HTTPS overhead and complexity.
Create the Deployment
We are going to create 2 pods and have them created on our “worker” nodes. Our demonstration app includes both liveness and readiness check URLs.
- k8s-deployment-web.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        my-role: worker  # restrict scheduling to the nodes with the label my-role: worker
      containers:
        - name: web
          image: doritoes/k8s-php-demo-app
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /liveness.php
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:  # Add the readinessProbe section
            httpGet:
              path: /readiness.php
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
Run with kubectl apply -f k8s-deployment-web.yml
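The nodeSelector above only matches nodes carrying the label my-role: worker. If that label was not applied in an earlier step, you can check and (if needed) add it; the node names node2 and node3 are assumed from this lab's layout:
kubectl get nodes --show-labels
kubectl label nodes node2 my-role=worker
kubectl label nodes node3 my-role=worker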
Test the deployment
kubectl get pods,deploy
kubectl describe pods
- there should be 2 pods on node2 and node3 (not node1)
kubectl exec -it <podname> -- sh
- Test web page from host:
kubectl port-forward <podname> 8080:8080
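With the port-forward running, you can also check the app and its probe endpoints from another terminal on the host (the paths come from the deployment manifest above):
curl http://localhost:8080/
curl http://localhost:8080/liveness.php
curl http://localhost:8080/readiness.php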
Create the Service
Now we are going to expose the application beyond the node.
- k8s-service-web.yml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30080
Run kubectl apply -f k8s-service-web.yml
You can now test access from the host system (or any device on the same network). Point your browser to the IP address of any worker node on the nodePort we set to 30080.
Point browser to
http://<ipaddress_node>:30080
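To confirm the Service picked up the expected NodePort, and to test it from the command line instead of a browser (substitute a worker node IP):
kubectl get svc web-service
curl http://<ipaddress_node>:30080/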
Clean up the deployment and the service:
kubectl delete -f k8s-deployment-web.yml
kubectl delete -f k8s-service-web.yml
Create Ansible Playbook to Deploy the Web App
Create Ansible playbooks to create and remove the web servers.
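These playbooks rely on the kubernetes.core.k8s module. If the collection and its Python client are not already installed on the Ansible host from an earlier step, something like this should cover it (assuming pip is available):
ansible-galaxy collection install kubernetes.core
pip install kubernetes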
- deploy-web.yml
---
- name: Deploy Nginx with PHP-FPM
  hosts: localhost
  connection: local
  tasks:
    - name: Create Deployment
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-deployment-web.yml') }}"
        namespace: default
    - name: Create Service
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-service-web.yml') }}"
        namespace: default
- destroy-web.yml
---
- name: Destroy Nginx with PHP-FPM
  hosts: localhost
  connection: local
  tasks:
    - name: Remove Deployment
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-deployment-web.yml') }}"
        namespace: default
    - name: Remove Service
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-service-web.yml') }}"
        namespace: default
Test them:
ansible-playbook deploy-web.yml
kubectl get pods
ansible-playbook destroy-web.yml
kubectl get pods
ansible-playbook deploy-web.yml
Test the Application
Point browser to
http://<ipaddress_node>:30080
Create an account and log in to see the very limited capabilities of the app.
Configure HAProxy
We will demonstrate using HAProxy in front of the web application on Kubernetes. This is handy because it allows the nodes to expose the application while load balancing the connections across all the pods.
IMPORTANT If you do this, remember you will need to reconfigure haproxy.cfg and gracefully reload HAProxy for every addition/removal of pods.
BETTER SOLUTION is to automate HAProxy reconfiguration in response to pod changes
- k8s sidecar container
- monitors for changes to pods matching your backend label; could even use the utility kubewatch
- template updates - update the template on the fly with new or removed pod IPs
- mount your config as a ConfigMap, and have a sidecar modify it in place
- HAProxy Reload - gracefully reload HAProxy after template modification (a rough polling sketch follows this list)
- haproxy -f /path/to/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
- avoids interrupting active connections
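A minimal polling sketch of that idea, reusing the app=web label and the haproxy-install.yml playbook from this page. It simply re-runs the playbook (a restart, not a true graceful reload) whenever the pod IPs change, so treat it as a lab-quality starting point; the script name, the 60-second interval, and the assumption of passwordless become are arbitrary choices.
#!/bin/sh
# watch-web-pods.sh - re-push the HAProxy config whenever the web pod IPs change
# assumes kubectl access and passwordless become for the ansible user (adjust to taste)
LAST=""
while true; do
  CURRENT=$(kubectl get pods -l app=web -o jsonpath="{.items[*].status.podIP}")
  if [ "$CURRENT" != "$LAST" ]; then
    ansible-playbook haproxy-install.yml
    LAST="$CURRENT"
  fi
  sleep 60   # arbitrary polling interval
done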
Create the haproxy.cfg.j2 Template
This basic configuration, installed on each “worker” node, will load balance (round-robin method) across the web pods on port 8080.
- You can access via any node
- The node will pass the traffic to a pod on another node if need be
- haproxy.cfg.j2
# ------------------------
# main frontend which proxies to the backends
# ------------------------
frontend main
    bind *:8080
    timeout client 30s
    default_backend app

# ------------------------
# round robin balancing between the various backends
# ------------------------
backend app
    balance roundrobin
    timeout connect 5s
    timeout server 30s
{% for ip in worker_ips %}
    server app{{ loop.index }} {{ ip }}:8080 check
{% endfor %}
Apply HAProxy
- haproxy-install.yml
---
- hosts: workers
  become: true
  tasks:
    - name: Install haproxy
      package:
        name: haproxy
        state: present
    - name: "Fetch Pod IPs for web deployment"
      delegate_to: localhost
      become: false
      run_once: true
      shell: kubectl get pods -l app=web -o jsonpath="{.items[*].status.podIP}"
      register: pod_ips
    - name: "Store Pod IPs"
      set_fact:
        worker_ips: "{{ pod_ips.stdout | split(' ') }}"
    - name: Configure haproxy.cfg file
      template:
        src: "haproxy.cfg.j2"
        dest: "/etc/haproxy/haproxy.cfg"
    - name: "haproxy service start"
      service:
        name: haproxy
        state: restarted
Run the playbook:
ansible-playbook haproxy-install.yml --ask-become-pass
Enter the password for the user ansible when prompted.
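On any worker node you can sanity-check the rendered configuration and the service state; the config path matches the destination used in the playbook:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl status haproxy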
Test HAProxy
Point browser to
http://<ipaddress_node>:8080
Try it using the IP address of each of the worker nodes.
Now edit the file k8s-deployment-web.yml to set replicas: 1, then apply it:
kubectl apply -f k8s-deployment-web.yml
Confirm that the number of web pods has reduced from 2 to 1, and which node it is running on.
kubectl get pods
kubectl describe pods
Repeat the test of accessing the application on each node IP address.
The application works on both! In practice you can set your application DNS to round-robin to point to one, two, or more worker nodes.
Let's restore the number of replicas to 2 and re-apply k8s-deployment-web.yml. Confirm that you again have two pods distributed across the nodes.
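A quick way to confirm the pod count and see which node each web pod landed on:
kubectl get pods -l app=web -o wide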
SSH to the node that lost its pod and then received a new one, and check the HAProxy logs:
sudo journalctl -xeu haproxy.service
Uh oh! The pod was created with a new IP address that doesn't match the haproxy.cfg file we pushed out.
Since this is a lab and we can handle the interruption of a regular restart:
ansible-playbook haproxy-install.yml --ask-become-pass
systemctl status haproxy
sudo journalctl -xeu haproxy.service
Optionally Remove the NodePort
We no longer need to configure the NodePort to have external access to the application.
The service manifest can be reduced:
- k8s-service-web.yml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
However, you will still need to update the haproxy.cfg files and restart HAProxy if you are still using it.
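Assuming you keep HAProxy in front, re-apply the trimmed manifest and confirm the node port is gone (the Service should fall back to the default ClusterIP type):
kubectl apply -f k8s-service-web.yml
kubectl get svc web-service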
Scaling
You can quickly scale up or down the number of replicas in your deployment.
kubectl scale deployment/web-deployment --replicas=10
kubectl scale deployment/web-deployment --replicas=2
Try scaling to 10, then watch what happens when the deployment file is reapplied:
ansible-playbook deploy-web.yml
Scaling isn't “permanent” if you don't update the yml file (manifest).
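One way to see that drift is kubectl diff, which compares the live Deployment against the manifest and exits non-zero when they differ; re-applying (with kubectl or the playbook above) snaps the replica count back to the value in the file:
kubectl scale deployment/web-deployment --replicas=10
kubectl diff -f k8s-deployment-web.yml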
Next Step
Continue to Step 6 - Application Configuration
Or, back to Step 4 - MySQL Server or Start
Optional
Load Test
These pods are only rated for around sessions each. But let's test that out!
https://stackify.com/best-way-to-load-test-a-web-server/
Install ApacheBench (ab, part of apache2-utils) on your host system:
sudo apt install apache2-utils
Here is the basic syntax: ab -n <number_of_requests> -c <concurrency> <url>
If you have HAProxy up and running, your load page will be at
http://<nodeip>:8080/load.php
ab -n 100 -c 10 http://<nodeip>:8080/load.php
Compare (example commands follow this list):
- 10,000 requests, 10 concurrent
- compare load.php vs index.php vs test.html
- 50,000 requests, 100 concurrent
- the times in ms are maybe 10x higher, but if you log in to the application and use it while the test is running, do you notice any difference?
- if you increase to 10 replicas and reconfigure HAProxy, will the performance get better or worse?
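As a concrete starting point for the comparison, the runs might look like the following; load.php, index.php, and test.html are the pages referenced above, served through HAProxy on port 8080:
ab -n 10000 -c 10 http://<nodeip>:8080/load.php
ab -n 10000 -c 10 http://<nodeip>:8080/index.php
ab -n 10000 -c 10 http://<nodeip>:8080/test.html
ab -n 50000 -c 100 http://<nodeip>:8080/load.php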
Confirm Liveness Tests are Working
In one terminal session set up a watch of your pods and deployment status.
watch -d -n 1 'kubectl get pods,deploy'
In another terminal session, list your web pods and open an interactive shell into one of them.
kubectl get pods
kubectl exec -it <podname> -- sh
Remove the liveness.php file:
rm liveness.php
Watch how long it takes for the pod to be restarted. Examine how the ready counters, status, restarts, and age are affected.
What happens if you remove the readiness.php file? What happens if you run the command “reboot”?
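To see what is driving the restarts, check the pod's events; Kubernetes should record the failed liveness probes and the container kills there:
kubectl describe pod <podname>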
What If...?
What will happen if your MySQL pod hangs? How will the application pods behave?
kubectl get pods
kubectl delete pod <name of mysql pod>
ansible-playbook destroy-sql.yml
# watch the pods restart 5 times; they never become ready
# in the meantime, can you access the application? what happens when you try to log in?
# if you still have the NodePort in place, or rerun ansible-playbook haproxy-install.yml
# after 5 restarts notice the status is CrashLoopBackOff
ansible-playbook deploy-sql.yml
# do the pods recover automatically?