====== Step 5 - Application Pods ======
We are going to deploy a demonstration web app using an image downloaded from hub.docker.com. We will demonstrate using HAProxy in front of the web application on Kubernetes.

Overview:
  * Nginx and PHP-FPM on an Alpine Linux minimalist image from [[https://
  * This image has the mysqli driver enabled, but the PDO driver for MySQL is disabled
  * PDO is the modern way to interface with the various databases out there with minimal coding changes
    * // See [[https://
  * We created our own image with PDO support: [[https://
    * This image installs php83-pdo_mysql and enables the pdo_mysql extension
  * Our simple demonstration app is built on the base image __without__ PDO support, and will use mysqli
    * Source: [[https://
    * Image: [[https://
  * This application does not use HTTPS. It is HTTP __only__.
    * Obviously, don't use this in production, as it leaks credentials!
    * HOWEVER, many web applications are rolled out these days without HTTPS because they are only reachable through a load balancer or reverse proxy that has the TLS certificate installed. TLS is terminated at the load balancer, so the actual pods avoid the HTTPS overhead and complexity.

References:
  * [[https://

====== Create the Deployment ======
We are going to create 2 pods and have them scheduled on our "worker" nodes.

<file yaml k8s-deployment-web.yml>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        my-role: worker # restrict scheduling to the nodes with the label my-role: worker
      containers:
      - name: web
        image: doritoes/
        ports:
        - containerPort: 8080
</file>
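The ''nodeSelector'' above assumes the worker nodes already carry the ''my-role: worker'' label from an earlier step. A quick sketch to confirm (the node name ''node2'' is an assumption from this lab's naming, not a required value):

<code bash>
# List nodes with their labels and confirm my-role=worker is present on the workers
kubectl get nodes --show-labels

# Or filter directly on the label the deployment's nodeSelector uses
kubectl get nodes -l my-role=worker

# If a worker node is missing the label, add it (node name is an assumption)
kubectl label nodes node2 my-role=worker
</code>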

Run it with ''kubectl apply -f k8s-deployment-web.yml''.

Test the deployment (see the verification sketch below):
  * confirm the Deployment is ready
  * list the pods; there should be 2 pods, on node2 and node3 (not node1)
  * inspect the pod details
  * test the web page from the host
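A minimal verification sketch; the specific commands are assumptions consistent with the manifest above rather than the lab's original commands:

<code bash>
# Apply the manifest and wait for the rollout to complete
kubectl apply -f k8s-deployment-web.yml
kubectl rollout status deployment/web-deployment

# -o wide shows which node each pod landed on (expect node2/node3, not node1)
kubectl get deployments
kubectl get pods -l app=web -o wide
kubectl describe pods -l app=web

# Fetch the page directly from a pod IP (run from a node, or from the host if pod IPs are routable there)
curl http://<pod IP>:8080/
</code>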

====== Create the Service ======
Now we are going to expose the application beyond the node.

<file yaml k8s-service-web.yml>
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30080
</file>

Run ''kubectl apply -f k8s-service-web.yml''.

You can now test access from the host system (or any device on the same network). Point your browser to the IP address of any worker node on the ''30080'' NodePort.

Point your browser to ''http://<node IP>:30080''
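Or check it from the command line; ''<node IP>'' is a placeholder for any worker node's address:

<code bash>
# The Service listens on every node at the NodePort, regardless of where the pods run
curl -I http://<node IP>:30080/

# Confirm the Service and its NodePort mapping
kubectl get service web-service
</code>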

Clean up the deployment and the service:
  * ''kubectl delete -f k8s-deployment-web.yml''
  * ''kubectl delete -f k8s-service-web.yml''

====== Create Ansible Playbook to Deploy the Web App ======
Create Ansible playbooks to create and remove the web servers.

<file yaml deploy-web.yml>
---
- name: Deploy Nginx with PHP-FPM
  hosts: localhost
  connection: local
  tasks:
    - name: Create Deployment
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-deployment-web.yml') | from_yaml }}"
        namespace: default
    - name: Create Service
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-service-web.yml') | from_yaml }}"
        namespace: default
</file>

<file yaml destroy-web.yml>
---
- name: Destroy Nginx with PHP-FPM
  hosts: localhost
  connection: local
  tasks:
    - name: Remove Deployment
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-deployment-web.yml') | from_yaml }}"
        namespace: default
    - name: Remove Service
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-service-web.yml') | from_yaml }}"
        namespace: default
</file>

Test them (a sketch follows below):
  * run the deploy playbook
  * confirm the Deployment, Service, and pods exist
  * run the destroy playbook
  * confirm they are gone
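A sketch of the round trip, assuming the playbooks and manifests sit in the current directory and ''kubectl'' is configured on the Ansible control host:

<code bash>
# Create the Deployment and Service, then confirm they exist
ansible-playbook deploy-web.yml
kubectl get deployment web-deployment
kubectl get service web-service
kubectl get pods -l app=web -o wide

# Tear them back down and confirm they are gone
ansible-playbook destroy-web.yml
kubectl get deployments,services
</code>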

====== Test the Application ======
Point your browser to ''http://<node IP>:30080''

Create an account and log in to see the very limited capabilities of the app.

====== Configure HAProxy ======
We will demonstrate using HAProxy in front of the web application on Kubernetes. This is handy because it allows the nodes to expose the application while load balancing the connections across all the pods.

IMPORTANT: If you do this, remember you will need to reconfigure haproxy.cfg and gracefully reload HAProxy for every addition or removal of a pod.

A BETTER SOLUTION is to automate HAProxy reconfiguration in response to pod changes (a rough sketch follows below):
  * k8s sidecar container
    * monitors for changes to pods matching your backend label; could even use the utility kubewatch
  * template updates - update the template on the fly with new or removed pod IPs
    * mount your config as a ConfigMap, and have a sidecar modify it in place
  * HAProxy reload - gracefully reload HAProxy after template modification
    * avoids interrupting active connections
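A crude polling sketch of that idea, run from the Ansible control host; it assumes the ''haproxy-install.yml'' playbook defined later in this step, and a real implementation would use kubewatch or a sidecar with a graceful reload instead:

<code bash>
#!/bin/sh
# Poll the pod IPs behind the web deployment and re-run the HAProxy playbook
# whenever the set of IPs changes, so the backend list stays current.
LAST=""
while true; do
  IPS=$(kubectl get pods -l app=web -o jsonpath='{.items[*].status.podIP}')
  if [ -n "$IPS" ] && [ "$IPS" != "$LAST" ]; then
    ansible-playbook haproxy-install.yml   # re-renders haproxy.cfg and restarts HAProxy
    LAST="$IPS"
  fi
  sleep 10
done
</code>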

===== Create the haproxy.cfg.j2 Template =====
This basic configuration will load balance (round-robin method) across the web application pods.
  * You can access the application via any node
  * The node will pass the traffic to a pod on another node if need be

<file yaml haproxy.cfg.j2>
# ------------------------
# main frontend which proxies to the backends
# ------------------------
frontend main
    bind *:8080
    timeout client 30s
    default_backend app
# ------------------------
# round robin balancing between the various backends
# ------------------------
backend app
    balance roundrobin
    timeout connect 5s
    timeout server 30s
{% for ip in worker_ips %}
    server app{{ loop.index }} {{ ip }}:8080 check
{% endfor %}
</file>

===== Apply HAProxy =====
<file yaml haproxy-install.yml>
---
- hosts: workers
  become: true
  tasks:
    - name: Install haproxy
      package:
        name: haproxy
        state: present
    - name: "Fetch Pod IPs for web deployment"
      delegate_to: localhost   # assumes kubectl and a kubeconfig on the Ansible control host
      become: false
      run_once: true
      shell: kubectl get pods -l app=web -o jsonpath="{.items[*].status.podIP}"
      register: pod_ips
    - name: "Store Pod IPs"
      set_fact:
        worker_ips: "{{ pod_ips.stdout | split(' ') }}"
    - name: Configure haproxy.cfg file
      template:
        src: "haproxy.cfg.j2"
        dest: "/etc/haproxy/haproxy.cfg"
    - name: "Restart haproxy"
      service:
        name: haproxy
        state: restarted
</file>

Run the playbook against your inventory (a sketch follows below).
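A sketch, assuming your default inventory already defines the ''workers'' group used by the playbook:

<code bash>
# Render haproxy.cfg and (re)start HAProxy on every worker node
ansible-playbook haproxy-install.yml

# Spot-check the rendered backend lines and validate the config syntax on the workers
ansible workers -b -m shell -a 'grep "server app" /etc/haproxy/haproxy.cfg && haproxy -c -f /etc/haproxy/haproxy.cfg'
</code>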

===== Test HAProxy =====
Point your browser to ''http://<node IP>:8080''

Try it using the IP address of each worker node.

Now edit the file ''k8s-deployment-web.yml'', change ''replicas'' from 2 to 1, and re-apply it:
<code>
kubectl apply -f k8s-deployment-web.yml
</code>

Confirm that the number of web pods has been reduced from 2 to 1, and note which node the remaining pod is running on (an imperative alternative using ''kubectl scale'' is sketched below).
<code>
kubectl get pods
kubectl describe pods
</code>
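Equivalently, you can scale imperatively instead of editing the manifest; a sketch (the declarative edit keeps the YAML file as the source of truth, which is why the lab re-applies the file):

<code bash>
# Drop to a single replica and watch where the surviving pod runs
kubectl scale deployment web-deployment --replicas=1
kubectl get pods -l app=web -o wide
</code>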

Repeat the test of accessing the application on each node's IP address.

The application works on both! In practice you can set your application's DNS to round-robin across one, two, or more worker nodes.

Let's restore the number of replicas to 2 and re-apply ''k8s-deployment-web.yml''.

SSH to the node where the pod was lost and a new one was created, and compare the live pod IPs with the ''server'' lines in ''/etc/haproxy/haproxy.cfg''.

Uh oh! The pod was created with a new IP address that doesn't match the HAProxy backend configuration.

Since this is a lab and we can handle the interruption of a regular (non-graceful) restart, regenerate the HAProxy configuration and restart the service (one way to do this is sketched below).
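One way to do this is to re-run the HAProxy playbook, which re-fetches the current pod IPs, rewrites ''haproxy.cfg'', and restarts HAProxy on the workers; a sketch:

<code bash>
# Re-render haproxy.cfg with the current pod IPs and restart HAProxy on every worker
ansible-playbook haproxy-install.yml

# Confirm the rendered backends now match the live pod IPs
kubectl get pods -l app=web -o wide
ansible workers -b -m shell -a 'grep "server app" /etc/haproxy/haproxy.cfg'
</code>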

Then confirm HAProxy is healthy:
<code>
systemctl status haproxy
sudo journalctl -xeu haproxy.service
</code>

===== Optionally Remove the NodePort =====
With HAProxy balancing directly to the pod IPs, we no longer need the NodePort for external access to the application.

The service manifest can be reduced:
<file yaml k8s-service-web.yml>
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
</file>
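A sketch of applying the trimmed manifest; without an explicit ''type'' the Service defaults back to ClusterIP:

<code bash>
# Re-apply the reduced Service and confirm the NodePort is gone
kubectl apply -f k8s-service-web.yml
kubectl get service web-service
# If the type change is rejected, delete and recreate the Service instead:
# kubectl delete service web-service && kubectl apply -f k8s-service-web.yml
</code>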

However, if you keep using HAProxy this way, you will still need to update the haproxy.cfg files and restart HAProxy whenever the pod IPs change.

====== Next Step ======
Continue to [[Step 6 - Application Configuration]]

Or, back to [[Step 4 - MySQL Server]] or [[Start]]

====== Optional ======

===== Load Test =====
These pods are only rated for a modest number of sessions each. But let's test that out!

[[https://

Install autobench on your host system (an install sketch follows below).
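An install sketch, on the assumption of a Debian/Ubuntu host; the ''ab'' tool used below ships in ''apache2-utils'', and ''autobench'' is packaged separately in many distributions:

<code bash>
# ab (ApacheBench) comes with apache2-utils; autobench is its own package where available
sudo apt update
sudo apt install -y apache2-utils autobench
</code>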

Here is the basic syntax: ''ab -n <total requests> -c <concurrency> <URL>''

If you have your HAProxy up and running so that your load page is at ''http://<node IP>:8080/load.php'', try:

<code>
ab -n 100 -c 10 http://<node IP>:8080/load.php
</code>

Compare (a sketch of the runs follows below):
  * 10000 connections
    * compare load.php vs index.php vs test.html
  * 50000 connections
    * the times in ms are maybe 10x higher, but if you log in to the application and use it while the test is running, do you notice any difference?
    * if you increase to 10 replicas and reconfigure HAProxy, will the performance get better or worse?
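A sketch of the comparison runs, assuming HAProxy answers on ''<node IP>:8080'' and the pages mentioned above exist in the app; the concurrency values are arbitrary choices:

<code bash>
# 10000 requests per page type at a moderate concurrency
ab -n 10000 -c 100 http://<node IP>:8080/load.php
ab -n 10000 -c 100 http://<node IP>:8080/index.php
ab -n 10000 -c 100 http://<node IP>:8080/test.html

# Push harder and compare the latency figures in the ab output
ab -n 50000 -c 250 http://<node IP>:8080/load.php
</code>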