====== Step 4 - MySQL Server ======
Now that we have Kubernetes up and running, we will get our MySQL container up and running. We will need storage for the SQL database so that it isn't lost when the pod is deleted or recreated. We also want the MySQL container/pod to run on a specific node which has more resources.
  
References:
  * [[https://hub.docker.com/_/mysql]]
  * [[https://www.tutorialspoint.com/deploying-mysql-on-kubernetes-guide]]
  * [[https://ubuntu.com/server/docs/databases-mysql]]
  * [[https://stackoverflow.com/questions/15663001/remote-connections-mysql-ubuntu]]
  * [[https://blog.devart.com/how-to-restore-mysql-database-from-backup.html]]
  * [[https://medium.com/@shubhangi.thakur4532/how-to-deploy-mysql-and-wordpress-on-kubernetes-8ea1260c27dd]]

To Do:
  * We have the secret creation set up, but we need to actually use it instead of the environment variables
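As a sketch of that To Do item: assuming the Secret defined in ''k8s-secret-sql.yml'' is named ''mysql-secret'' with a key ''root-password'' (both hypothetical names; adjust to match your file), the ''env'' entry in the Deployment could reference the Secret instead of a literal value:

<code yaml>
# hypothetical names: match them to your k8s-secret-sql.yml
env:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-secret
      key: root-password
</code>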
- 
====== Create MySQL Deployment ======
To deploy MySQL on Kubernetes, we will use a Deployment object, which is a higher-level abstraction that manages a set of replicas of a pod. The pod contains the MySQL container along with any necessary configuration.

At the time of writing, the latest mysql image being pulled is version 8.3.0 and runs on Oracle Linux Server 8.9. We will demonstrate using the base image, and mention an alternative image which enables the modern PDO driver.

There are three Kubernetes components we will use:
  * Deployment
  * PersistentVolume (stored on the host, aka the Node)
  * PersistentVolumeClaim

Finally, we will use Ansible to do the work.
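The ''kubernetes.core'' modules used in the playbooks below are not part of ansible-core, and they need the Python kubernetes client installed on the Ansible control host. A typical setup sketch (assuming a pip-based install; skip if your control host already has these):

```shell
# install the Ansible collection providing the kubernetes.core.k8s module
ansible-galaxy collection install kubernetes.core
# install the Python client library that the collection depends on
python3 -m pip install kubernetes
```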
- 
===== Deployment =====
Create the deployment file for the MySQL server.

The image documentation states that by mounting our mysql.sql ConfigMap to the pod at docker-entrypoint-initdb.d, the SQL script should automatically be processed when the pod deploys and there is no database yet.

However, in testing, we had to do a combination of things to get the configuration done:
  * add a sleep command to allow the MySQL server to initialize fully; it will not accept any connections until that is complete
  * we can't use the mounted location for our mysql command, so we copy the file from the mount to the root, and execute it from there
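The fixed ''sleep 10'' works, but is fragile on a slow node. A minimal sketch of a retry helper that could replace it (assuming bash is available in the container, which it is in the official mysql image; the mysql invocation shown in the comment is hypothetical and would need your actual credentials):

```shell
#!/bin/bash
# wait_for: retry a command until it succeeds, up to a given number of attempts.
wait_for() {
  local tries=$1; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# In the postStart hook you might use something like (hypothetical):
#   wait_for 30 mysql -uroot -pyourpassword -e "SELECT 1"
# Here we demonstrate the helper with a trivially succeeding command:
wait_for 3 true && echo "ready"
```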
- 
<file yaml k8s-deployment-sql.yml>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        my-role: sql # restrict scheduling to the node with the label my-role: sql
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: yourpassword
        ports:
        - containerPort: 3306
        lifecycle:
          postStart:
            exec:
              command: ["/bin/bash", "-c", "cp /docker-entrypoint-initdb.d/mysql.sql / && sleep 10 && /bin/mysql -uroot -pyourpassword -e \"SOURCE /mysql.sql\""]
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: mysql-script
          mountPath: /docker-entrypoint-initdb.d
        - name: mysql-bind
          mountPath: /etc/mysql/conf.d/mysqld.cnf
          subPath: mysqld.cnf
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
      - name: mysql-script
        configMap:
          name: mysql-setup-script
      - name: mysql-bind
        configMap:
          name: mysql-bind
</file>
- 
We can launch the deployment manually using kubectl:
<code bash>kubectl apply -f k8s-deployment-sql.yml</code>

What happened and what didn't happen? Examine the output of the following commands:
  * ''kubectl get pods''
  * ''kubectl get deployments''
  * ''kubectl describe deployments''

Our manifest refers to something that doesn't exist yet: the PersistentVolumeClaim ''mysql-pvc''!

To remove it, run: <code bash>kubectl delete -f k8s-deployment-sql.yml</code>
- 
===== PersistentVolume =====
By default, the MySQL pod does not have persistent storage, which means that any data stored in the pod will be lost if the pod is deleted or recreated. We are going to create a persistent volume that pods can mount. The actual storage is on the host (the Node's file system).

<file yaml k8s-pv.yml>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/my-pv
</file>
- 
We can create the persistent volume manually using ''kubectl'':
<code bash>kubectl apply -f k8s-pv.yml</code>

Examine the output of the following commands:
  * ''kubectl get pv''
  * ''kubectl describe pv''

To remove it, run:
<code bash>kubectl delete -f k8s-pv.yml</code>
- 
===== PersistentVolumeClaim =====
Now we will create a "reservation" for space on the persistent volume and give it a name.

<file yaml k8s-pvc.yml>
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</file>
- 
We can create the persistent volume claim manually using kubectl (but the persistent volume needs to exist):
<code bash>
kubectl apply -f k8s-pvc.yml
</code>

Examine the output of the following commands:
  * ''kubectl get pvc''
  * ''kubectl describe pvc''

To remove it, run:
<code bash>
kubectl delete -f k8s-pvc.yml
</code>

If you create all three things, the pod will come up! But for now, we want to deploy all these with Ansible. And we want to run a script to create a user and grant permissions.
- 
===== Exposing MySQL with a Service =====
The MySQL server should be accessible from other deployments in Kubernetes, but secure from outside access. By creating a Service object, we create a name and port that can be used to connect to the MySQL server. The other pods will be able to use the DNS name ''sql-service''.

<file yaml k8s-service-sql.yml>
apiVersion: v1
kind: Service
metadata:
  name: sql-service
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
</file>

Since we are building everything in the default namespace, the full link to our service is:
<code>mysql://sql-service.default.svc.cluster.local:3306/database_name</code>
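That URL follows the standard Kubernetes service DNS pattern ''<service>.<namespace>.svc.cluster.local''. A quick sketch assembling it from its parts (''app_db'' is the database our init script creates):

```shell
# assemble the in-cluster MySQL DSN from its parts
SERVICE=sql-service
NAMESPACE=default
DB=app_db
echo "mysql://${SERVICE}.${NAMESPACE}.svc.cluster.local:3306/${DB}"
# prints: mysql://sql-service.default.svc.cluster.local:3306/app_db
```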
- 
===== Deploying MySQL with Persistent Storage using Ansible =====
This lab deploys everything in the "default" namespace. The playbook:
  * Leverages the .yml files created above
  * Mounts the MySQL configuration script so it can be run during the deployment
  * Mounts the persistent storage space for the MySQL server to use
  * Mounts a custom mysql configuration file to enable connections from other pods (the bind-address statement)
- 
<file yaml deploy-sql.yml>
---
- name: Deploy MySQL with persistent volume
  hosts: localhost
  connection: local
  tasks:
    - name: Create ConfigMap for database init
      kubernetes.core.k8s:
        state: present
        namespace: default
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: mysql-setup-script
          data:
            mysql.sql: |
              CREATE DATABASE IF NOT EXISTS app_db;
              CREATE USER IF NOT EXISTS 'appuser'@'%' IDENTIFIED BY 'mypass';
              GRANT ALL ON *.* TO 'appuser'@'%';
              CREATE USER IF NOT EXISTS 'appuser'@'localhost' IDENTIFIED BY 'mypass';
              GRANT ALL ON *.* TO 'appuser'@'localhost';
              FLUSH PRIVILEGES;
              USE app_db;
              CREATE TABLE IF NOT EXISTS app_user (
              fname varchar(255),
              lname varchar(255),
              email varchar(255) Primary Key,
              password varchar(64),
              address varchar(255),
              gender varchar(5),
              contact varchar(15),
              dob date,
              login timestamp);
    - name: Create ConfigMap for binding
      kubernetes.core.k8s:
        state: present
        namespace: default
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: mysql-bind
          data:
            mysqld.cnf: |
              [mysqld]
              bind-address = 0.0.0.0
    - name: Create Secret
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-secret-sql.yml') }}"
        namespace: default
    - name: Create Deployment
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-deployment-sql.yml') }}"
        namespace: default
    - name: Create PersistentVolume
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-pv.yml') }}"
        namespace: default
    - name: Create PersistentVolumeClaim
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-pvc.yml') }}"
        namespace: default
    - name: Create Service
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-service-sql.yml') }}"
        namespace: default
</file>
- 
Run: ''ansible-playbook deploy-sql.yml''

Take a look at the results by running:
  * ''kubectl get pod,node,deployment,pv,pvc,svc,cm,secret''

Now confirm that the new MySQL pod is running on "node1", where we want it.
  * ''kubectl describe pod''
  * Look for the line similar to: ''Node:    node1/192.168.99.202''
- 
====== Testing MySQL server ======
Congratulations on your brand new MySQL server! In this step we are going to demonstrate that this MySQL server will retain its database across reboots, upgrades, and even deleting and re-creating the deployment.

===== Connecting to the MySQL Server Pod =====
First, identify the pod name using
<code>kubectl get pods</code>

Next, use the pod name to connect interactively.
<code>kubectl exec -it mysql-pod-name -- bash</code>

For example,
<code>kubectl exec -it mysql-deployment-6fd4f7f895-hd8dk -- bash</code>
- 
===== Login Using mysql Command =====
After connecting interactively to the MySQL pod, test logging in from the command line:
  * mysql -uroot -pyourpassword
  * mysql -uappuser -pmypass

Take a look at what is there by default (don't forget the trailing semicolon):
  * show grants;
  * show databases;
  * select user,host from mysql.user;
  * use app_db;
  * show tables;

Use ''exit'' or ''quit'' to exit.
- 
===== Demonstrating Persistence =====
Log back in to the pod and re-launch ''mysql''.

Here we will:
  - create a database
  - select the new database
  - create a table in the database
  - insert a row
  - show the data we inserted

<code>
CREATE DATABASE test;
USE test;
CREATE TABLE messages (message VARCHAR(255));
INSERT INTO messages (message) VALUES ('Hello, world!');
SELECT * FROM messages;
</code>

Now we will create an Ansible playbook to remove:
  * the deployment and all the pods
  * the persistent volume, the volume claim, and the service
- 
<file yaml remove-sql.yml>
---
- name: Remove MySQL with persistent volume
  hosts: localhost
  connection: local
  tasks:
    - name: Remove ConfigMap
      kubernetes.core.k8s:
        api_version: v1
        kind: ConfigMap
        name: mysql-setup-script
        state: absent
        namespace: default
    - name: Remove Secret
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-secret-sql.yml') }}"
        namespace: default
    - name: Remove Deployment
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-deployment-sql.yml') }}"
        namespace: default
    - name: Remove PersistentVolume
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-pv.yml') }}"
        namespace: default
    - name: Remove PersistentVolumeClaim
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-pvc.yml') }}"
        namespace: default
    - name: Remove Service
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-service-sql.yml') }}"
        namespace: default
</file>
- 
Before you run it, take a look at what is there in Kubernetes.
<code>kubectl get pod,node,deployment,pv,pvc,cm,secret</code>

Remove it all with ''ansible-playbook remove-sql.yml'' and look at what is left.
<code>kubectl get pod,node,deployment,pv,pvc,cm,secret</code>

Re-deploy again with ''ansible-playbook deploy-sql.yml'' and check again.
<code>kubectl get pod,node,deployment,pv,pvc,cm,secret</code>

You can re-run that command and watch all the components come up.

Now let's check if our data is still there.

Identify the new pod name using
<code>kubectl get pods</code>

Connect to the new pod interactively (again, substitute the actual pod name).
<code>kubectl exec -it mysql-pod-name -- bash</code>

Reconnect to mysql:
<code>mysql -uroot -pyourpassword</code>

Check the outputs of these commands:
  * show databases;
  * use test;
  * show tables;
  * SELECT * FROM messages;

Use ''exit'' or ''quit'' to exit mysql.
- 
- 
====== Completely Destroy the MySQL Pod and its Data ======
This time we will erase the __storage__ on the SQL node hosting the MySQL pod. This will remove and clear all persistent data.
- 
<file yaml destroy-sql.yml>
---
- name: Destroy MySQL with persistent volume
  hosts: localhost
  connection: local
  tasks:
    - name: Remove ConfigMap
      kubernetes.core.k8s:
        api_version: v1
        kind: ConfigMap
        name: mysql-setup-script
        state: absent
        namespace: default
    - name: Remove Secret
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-secret-sql.yml') }}"
        namespace: default
    - name: Remove Deployment
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-deployment-sql.yml') }}"
        namespace: default
    - name: Remove PersistentVolume
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-pv.yml') }}"
        namespace: default
    - name: Remove PersistentVolumeClaim
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-pvc.yml') }}"
        namespace: default
    - name: Remove Service
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-service-sql.yml') }}"
        namespace: default
- name: Clean hostPath directory
  hosts: sql
  become: true
  tasks:
    - name: Clean hostPath directory
      file:
        path: /data/my-pv
        state: absent
</file>
- 
Now you can destroy and re-deploy, and the data will be gone.
<code>
ansible-playbook destroy-sql.yml
ansible-playbook deploy-sql.yml
</code>

====== Next Step ======
Continue to [[Step 5 - Application Pods]]

Or back to [[Step 3 - Set Up Kubernetes]] or [[Start]]
- 
====== Optional ======
===== Backups and Restoration =====
For a production application, you would replicate your MySQL database to another node. For our lab you could simply take a snapshot of the SQL node's data directory at ''/data/my-pv''.

You may want to use the ''mysql'' command itself to dump a database to file or restore from file.

<code>
mysql -u [user_name] -p [target_database_name] < [dumpfilename.sql]
</code>

The following is an example to back up and then restore a database from/to a Docker container. You can adapt the same commands to work with your MySQL pod under Kubernetes.

<code>$ docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql</code>

<code>$ docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql</code>
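A sketch of that adaptation for Kubernetes, requiring a running cluster (the pod name below is hypothetical; substitute your own from ''kubectl get pods''). The ''MYSQL_ROOT_PASSWORD'' variable is already set inside the container by our Deployment's ''env'' section:

```shell
# back up all databases from the MySQL pod to the local machine
kubectl exec mysql-deployment-6fd4f7f895-hd8dk -- \
  sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > all-databases.sql

# restore them back into the pod (note -i so stdin is forwarded)
kubectl exec -i mysql-deployment-6fd4f7f895-hd8dk -- \
  sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < all-databases.sql
```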
- 
===== Allowing Remote Access =====
The reference mysql image we are using only allows Unix socket connections from the localhost. Since it is key for our application pods to access our MySQL service, we added a configuration file to allow remote TCP connections (the ''bind-address'' statement).

Next, we granted access from any IP address for the user ''appuser''. The ''%'' as the host means any host can connect.

How is this secured? The Service we configured for SQL only exposes MySQL inside the Kubernetes cluster. It is not exposed outside the cluster, and no external connection can be made.