-====== Step 5 - Application Pods ====== 
-We are going to deploy a demonstration web app using an image downloaded from hub.docker.com. We will demonstrate using HAProxy in front of the web application on Kubernetes. 
- 
-Overview: 
-  * A minimalist Nginx and PHP-FPM image on Alpine Linux from [[https://github.com/TrafeX/docker-php-nginx]]
-  * This image has the mysqli driver enabled, but the PDO driver for MySQL is disabled
-    * PDO is the modern way to interface with the various databases out there with minimal coding changes
-    * // See [[https://www.w3schools.com/php/php_mysql_connect.asp]] // 
-  * We created our own image with PDO support: [[https://github.com/doritoes/docker-php-nginx-app-server]] 
-    * This image installs php83-pdo_mysql and enables the pdo_mysql extension 
-  * Our simple demonstration app is built on the base image __without__ PDO support, and will use mysqli (see the quick check after this list)
-    * Source: [[https://github.com/doritoes/k8s-php-demo-app]]
-    * Image: [[https://hub.docker.com/repository/docker/doritoes/k8s-php-demo-app]] 
-  * This application does not use HTTPS. It is HTTP __only__. 
-    * Obviously, don't use this in production, as it leaks credentials!
-    * HOWEVER, many web applications are deployed these days without HTTPS because they are only reachable through a load balancer or reverse proxy that holds the TLS certificate. TLS termination is handled by the load balancer, so the actual pods avoid the HTTPS overhead and complexity.
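-
-As a quick sanity check, you can list the MySQL-related PHP extensions baked into the demo image locally (a sketch, assuming Docker is installed on your host and the image exposes a ''php'' CLI):
-<code>
-docker run --rm doritoes/k8s-php-demo-app php -m | grep -iE 'mysql'
-# expect mysqli (and mysqlnd) but no pdo_mysql in this base-image build
-</code>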
- 
-References: 
-  * [[https://28gauravkhore.medium.com/how-to-configure-the-haproxy-using-the-ansible-and-also-how-to-configure-haproxy-dynamically-f18a55de3a66]] 
- 
-====== Create the Deployment ====== 
-We are going to create 2 pods and have them scheduled on our "worker" nodes. Our demonstration app includes both liveness and readiness check URLs.
- 
-<file yaml k8s-deployment-web.yml> 
-apiVersion: apps/v1 
-kind: Deployment 
-metadata: 
-  name: web-deployment 
-spec: 
-  replicas: 2 
-  selector: 
-    matchLabels: 
-      app: web 
-  template: 
-    metadata: 
-      labels: 
-        app: web 
-    spec: 
-      nodeSelector: 
-        my-role: worker # restrict scheduling to the nodes with the label my-role: worker 
-      containers: 
-      - name: web 
-        image: doritoes/k8s-php-demo-app 
-        ports: 
-        - containerPort: 8080 
-        livenessProbe: 
-          httpGet: 
-            path: /liveness.php 
-            port: 8080 
-          initialDelaySeconds: 30 
-          periodSeconds: 10 
-        readinessProbe: # Add the readinessProbe section 
-          httpGet: 
-            path: /readiness.php 
-            port: 8080 
-          initialDelaySeconds: 5 
-          periodSeconds: 5 
-</file> 
- 
-Run with ''kubectl apply -f k8s-deployment-web.yml'' 
- 
-Test the deployment 
-  * ''kubectl get pods,deploy'' 
-  * ''kubectl describe pods'' 
-    * there should be 2 pods on node2 and node3 (not node1) 
-  * ''kubectl exec -it <podname> -- sh'' 
-  * Test web page from host (see the sketch after this list):
-    * ''kubectl port-forward <podname> 8080:8080'' 
-    * ''http://localhost:8080'' 
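-
-A quick way to exercise the probe endpoints through the port-forward (a sketch; run from a second terminal, or background the port-forward as shown):
-<code>
-kubectl port-forward <podname> 8080:8080 &   # forward in the background
-curl -s http://localhost:8080/readiness.php  # readiness endpoint from the manifest
-curl -s http://localhost:8080/liveness.php   # liveness endpoint from the manifest
-kill %1                                      # stop the background port-forward
-</code>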
- 
-====== Create the Service ====== 
-Now we are going to expose the application beyond the node.
- 
-<file yaml k8s-service-web.yml> 
-apiVersion: v1 
-kind: Service 
-metadata: 
-  name: web-service 
-spec: 
-  selector: 
-    app: web 
-  type: NodePort 
-  ports: 
-    - protocol: TCP 
-      port: 80 
-      targetPort: 8080 
-      nodePort: 30080 
-</file> 
- 
-Run ''kubectl apply -f k8s-service-web.yml'' 
- 
-You can now test access from the host system (or any device on the same network). Point your browser to the IP address of any worker node on the ''nodePort'' we set to 30080.
- 
-Point browser to <code>http://<ipaddress_node>:30080</code> 
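-
-Or check it from the command line (a sketch; substitute the IP address of any worker node):
-<code>
-NODE_IP=<ipaddress_node>        # e.g. taken from: kubectl get nodes -o wide
-curl -I http://$NODE_IP:30080   # the demo app's front page should return HTTP 200
-</code>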
- 
-Clean up the deployment and the service: 
-  * ''kubectl delete -f k8s-deployment-web.yml''
-  * ''kubectl delete -f k8s-service-web.yml'' 
- 
-====== Create Ansible Playbook to Deploy the Web App ====== 
-Create Ansible playbooks to create and remove the web servers. 
- 
-<file yaml deploy-web.yml> 
---- 
-- name: Deploy Nginx with PHP-FPM 
-  hosts: localhost 
-  connection: local 
-  tasks: 
-    - name: Create Deployment 
-      kubernetes.core.k8s: 
-        state: present 
-        definition: "{{ lookup('file', 'k8s-deployment-web.yml') }}" 
-        namespace: default 
-    - name: Create Service 
-      kubernetes.core.k8s: 
-        state: present 
-        definition: "{{ lookup('file', 'k8s-service-web.yml') }}" 
-        namespace: default 
-</file> 
- 
-<file yaml destroy-web.yml> 
---- 
-- name: Destroy Nginx with PHP-FPM 
-  hosts: localhost 
-  connection: local 
-  tasks: 
-    - name: Remove Deployment 
-      kubernetes.core.k8s: 
-        state: absent 
-        definition: "{{ lookup('file', 'k8s-deployment-web.yml') }}" 
-        namespace: default 
-    - name: Remove Service 
-      kubernetes.core.k8s: 
-        state: absent 
-        definition: "{{ lookup('file', 'k8s-service-web.yml') }}" 
-        namespace: default 
-</file> 
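-
-These playbooks rely on the ''kubernetes.core.k8s'' module. If it is not already available on your Ansible control host, a typical setup looks like this (a sketch, assuming pip is available):
-<code>
-ansible-galaxy collection install kubernetes.core   # provides the kubernetes.core.k8s module
-python3 -m pip install kubernetes                   # Python client library the module requires
-</code>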
- 
-Test them: 
-  * ''ansible-playbook deploy-web.yml'' 
-  * ''kubectl get pods'' 
-  * ''ansible-playbook destroy-web.yml'' 
-  * ''kubectl get pods'' 
- 
- 
-====== Test the Application ====== 
-Point browser to <code>http://<ipaddress_node>:30080</code> 
- 
-Create an account and log in to see the very limited capabilities of the app. 
- 
-====== Configure HA Proxy ====== 
-We will demonstrate using HAProxy in front of the web application on Kubernetes. This is handy because it allows the nodes to expose the application while load balancing the connections across all of the pods.
- 
-IMPORTANT: If you do this, remember you will need to reconfigure haproxy.cfg and gracefully reload HAProxy for every addition or removal of pods.
- 
-BETTER SOLUTION is to automate HAProxy reconfiguration in response to pod changes (a polling sketch follows this list):
-  * k8s sidecar container 
-    * monitors for changes to pods matching your backend label; could even use the utility kubewatch 
-  * template updates - update the template on the fly with new or removed pod IPs 
-    * mount your config as a ConfigMap, and have a sidecar modify it in place 
-  * HAProxy Reload - gracefully reload HAProxy after template modification 
-    * haproxy -f /path/to/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
-    * avoids interrupting active connections 
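-
-A minimal sketch of the polling approach, run from the Ansible control host (a hypothetical helper script; it simply reruns the HAProxy playbook defined below whenever the web pod IPs change):
-<code>
-#!/bin/sh
-# Poll the web pod IPs and re-render/reload HAProxy when they change
-OLD_IPS=""
-while true; do
-  NEW_IPS=$(kubectl get pods -l app=web -o jsonpath='{.items[*].status.podIP}')
-  if [ "$NEW_IPS" != "$OLD_IPS" ]; then
-    ansible-playbook haproxy-install.yml --ask-become-pass
-    OLD_IPS="$NEW_IPS"
-  fi
-  sleep 10
-done
-</code>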
- 
-===== Create the haproxy.cfg.j2 Template =====
-This basic configuration listens on port 8080 on each worker node and load balances (round-robin method) across the web application pods.
-  * You can access via any node 
-  * The node will pass the traffic to a pod on another node if need be 
- 
-<file yaml haproxy.cfg.j2> 
-# ------------------------ 
-# main frontend which proxies to the backends 
-# ------------------------ 
-frontend main 
-    bind *:8080 
-    timeout client 30s  
-    default_backend app 
-# ------------------------ 
-# round robin balancing between the various backends 
-# ------------------------ 
-backend app 
-    balance roundrobin 
-    timeout connect 5s 
-    timeout server 30s 
-{% for ip in worker_ips %} 
-    server app{{ loop.index}} {{ ip }}:8080 check 
-{% endfor %} 
-</file> 
- 
-===== Apply HAProxy ===== 
-<file yaml haproxy-install.yml> 
---- 
-- hosts: workers 
-  become: true 
-  tasks: 
-    - name: Install haproxy 
-      package: 
-        name: haproxy 
-        state: present 
-    - name: "Fetch Pod IPs for web deployment" 
-      delegate_to: localhost 
-      become: false 
-      run_once: true 
-      shell: kubectl get pods -l app=web -o jsonpath="{.items[*].status.podIP}" 
-      register: pod_ips 
-    - name: "Store Pod IPs" 
-      set_fact: 
-        worker_ips: "{{ pod_ips.stdout | split(' ') }}" 
-    - name: Configure haproxy.cfg file 
-      template: 
-        src: "haproxy.cfg.j2" 
-        dest: "/etc/haproxy/haproxy.cfg" 
-    - name: "haproxy service start" 
-      service: 
-        name: haproxy 
-        state: restarted 
-</file> 
- 
-Run the playbook: 
-<code>ansible-playbook haproxy-install.yml --ask-become-pass</code>
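-
-If HAProxy fails to start, you can validate the rendered configuration directly on a worker node before restarting the service:
-<code>
-sudo haproxy -c -f /etc/haproxy/haproxy.cfg   # syntax/config check only; does not start the service
-</code>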
- 
-===== Test HAProxy ===== 
-Point browser to <code>http://<ipaddress_node>:8080</code> 
- 
-Try it using the IP addresses of all of the worker nodes.
- 
-Now edit the file ''k8s-deployment-web.yml'' to set ''replicas: 1'', then apply it:
-<code>kubectl apply -f k8s-deployment-web.yml</code>
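-
-Alternatively, you can change the replica count without editing the file (a sketch; note the change will be reverted the next time the manifest is applied):
-<code>
-kubectl scale deployment web-deployment --replicas=1
-</code>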
- 
-Confirm that the number of web pods has dropped from 2 to 1, and note which node the remaining pod is running on.
-<code> 
-kubectl get pods 
-kubectl describe pods 
-</code> 
- 
-Repeat the test of accessing the application on each node IP address. 
- 
-The application works on both! In practice you can set your application DNS to round-robin to point to one, two, or more worker nodes. 
- 
-Let's restore the number of replicas to 2 and re-apply ''k8s-deployment-web.yml''. Confirm that you again have two pods distributed across the nodes. 
- 
-SSH to the node that lost its pod and where a new one was just created, and check the HAProxy logs:
-<code>sudo journalctl -xeu haproxy.service</code> 
- 
-Uh oh! The pod was created with a new IP address that doesn't match the haproxy.cfg file we pushed out. 
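-
-One way to see the mismatch is to compare the current pod IPs against the backend servers HAProxy was configured with (run the ''grep'' on the worker node):
-<code>
-kubectl get pods -l app=web -o wide               # current pod IPs
-sudo grep 'server app' /etc/haproxy/haproxy.cfg   # pod IPs rendered into the HAProxy backend
-</code>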
- 
-Since this is a lab, we can handle the interruption of a regular restart:
-<code>ansible-playbook haproxy-install.yml --ask-become</code> 
- 
-<code> 
-systemctl status haproxy 
-sudo journalctl -xeu haproxy.service 
-</code> 
- 
-===== Optionally Remove the NodePort ===== 
-We no longer need to configure the NodePort to have external access to the application. 
- 
-The service manifest can be reduced: 
-<file yaml k8s-service-web.yml> 
-apiVersion: v1 
-kind: Service 
-metadata: 
-  name: web-service 
-spec: 
-  selector: 
-    app: web 
-  ports: 
-    - protocol: TCP 
-      port: 80 
-      targetPort: 8080  
-</file> 
- 
-However, you will still need to update the haproxy.cfg files and reload HAProxy whenever the pod IPs change, if you are still using it.
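-
-After re-applying the reduced manifest, you can confirm the NodePort is gone:
-<code>
-kubectl get svc web-service   # TYPE should now show ClusterIP, with no 3xxxx port listed
-</code>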
- 
-====== Next Step ====== 
-Continue to [[Step 6 - Application Configuration]] 
- 
-Or, back to [[Step 4 - MySQL Server]] or [[Start]] 
- 
-====== Optional ====== 
- 
-===== Load Test ===== 
-These pods are only rated for a limited number of sessions each. But let's test that out!
- 
-[[https://stackify.com/best-way-to-load-test-a-web-server/]] 
- 
-Install ApacheBench (''ab'') on your host system:
-<code>sudo apt install apache2-utils</code> 
- 
-Here is the basic syntax: ''ab -n <number_of_requests> -c <concurrency> <url>''
- 
-If you have HAProxy up and running, your load test page is at
-<code>http://<nodeip>:8080/load.php</code> 
- 
-<code> 
-ab -n 100 -c 10 http://<nodeip>:8080/load.php 
-</code> 
- 
-Compare: 
-  * 10,000 requests, 10 concurrent
-  * compare load.php vs index.php vs test.html
-  * 50,000 requests, 100 concurrent
-    * the times in ms are maybe 10x higher, but if you log in to the application and use it while the test is running, do you notice any difference?
-  * if you increase to 10 replicas and reconfigure HAProxy, will the performance get better or worse? (see the sketch after this list)
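-
-A sketch of that last comparison, reusing the playbook from earlier (scale up, re-render HAProxy, rerun the load test):
-<code>
-kubectl scale deployment web-deployment --replicas=10
-ansible-playbook haproxy-install.yml --ask-become-pass   # pick up the new pod IPs
-ab -n 50000 -c 100 http://<nodeip>:8080/load.php
-</code>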
- 
-===== Confirm Liveness Tests are Working ===== 
-In one terminal session set up a watch of your pods and deployment status. 
-<code>watch -d -n 1 'kubectl get pods,deploy'</code> 
- 
-In another terminal session, list your web pods and open an interactive shell into one of them.
-<code> 
-kubectl get pods 
-kubectl exec -it <podname> -- sh 
-</code>
- 
-From inside the pod, remove the liveness.php file:
-<code>rm liveness.php</code> 
- 
-Watch how long it takes for the pod to be restarted. Examine how the ready counters, status, restarts, and age are affected.
- 
-What happens if you remove the readiness.php file? What happens if you run the command "reboot"? 
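-
-One way to watch the readiness effect is to monitor the Service endpoints in another terminal (assuming the ''web-service'' Service is still deployed); a pod that fails its readiness probe is pulled from the endpoint list without being restarted:
-<code>
-kubectl get endpoints web-service -w   # not-ready pods drop out of the ENDPOINTS column
-</code>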
- 
-===== What If...? ===== 
-What will happen if your MySQL pod hangs? How will the application pods behave? 
- 
-<code> 
-kubectl get pods 
-kubectl delete pod <name of mysql pod> 
-</code> 
- 
-<code> 
-ansible-playbook destroy-sql.yml 
-# watch the pods restart 5 times; they never become ready 
-# in the meantime, can you access the application? what happens when you try to log in? 
-# if you still have the NodePort configured you can use it; otherwise rerun ansible-playbook haproxy-install.yml
-# after 5 restarts notice the status is CrashLoopBackOff 
-ansible-playbook deploy-sql.yml 
-# do the pods recover automatically? 
-</code> 
  