====== Step 4 - MySQL Server ======
Now that we have Kubernetes up and running, we will get our MySQL container up and running. We will need storage for the SQL database so that it isn't lost when the pod is deleted or recreated. We also want the MySQL container/pod to run on a specific node which has more resources.

References:
  * [[https://hub.docker.com/_/mysql]]
  * [[https://www.tutorialspoint.com/deploying-mysql-on-kubernetes-guide]]
  * [[https://ubuntu.com/server/docs/databases-mysql]]
  * [[https://stackoverflow.com/questions/15663001/remote-connections-mysql-ubuntu]]
  * [[https://blog.devart.com/how-to-restore-mysql-database-from-backup.html]]
  * [[https://medium.com/@shubhangi.thakur4532/how-to-deploy-mysql-and-wordpress-on-kubernetes-8ea1260c27dd]]

To Do:
  * We have the secret creation, but we need to actually use it
  * The MySQL server works with sockets, but we need to prove it works over IP from another deployment's pod
    * We had real difficulties updating the /etc/my.cnf file

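The first To Do item — actually using the secret — could be sketched like this: instead of a hard-coded ''value: yourpassword'' in the deployment, pull the password from a Secret. The names ''mysql-secret'' and ''mysql-root-password'' below are assumptions, not objects this lab has created yet:

<code yaml>
# Hedged sketch: replace the literal password in k8s-deployment-sql.yml
# with a reference to a Secret (the Secret name and key are hypothetical)
env:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-secret        # assumed Secret name
      key: mysql-root-password  # assumed key inside that Secret
</code>
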
====== Create MySQL Deployment ======
To deploy MySQL on Kubernetes, we will use a Deployment object, which is a higher-level abstraction that manages a set of replicas of a pod. The pod contains the MySQL container along with any necessary configuration.

At the time of writing, the latest image being pulled is version 8.3.0 and runs on Oracle Linux Server 8.9.

There are three parts we will use:
  * Deployment
  * PersistentVolume
  * PersistentVolumeClaim

Finally, we will use Ansible to do the work.

===== Deployment =====
<file yaml k8s-deployment-sql.yml>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        my-role: sql # restrict scheduling to the node with the label my-role: sql
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: yourpassword
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
</file>

We can launch the deployment manually using kubectl:
<code bash>
kubectl apply -f k8s-deployment-sql.yml
</code>

What happened and what didn't happen? Examine the output of the following commands:
  * kubectl get pods
  * kubectl get deployments
  * kubectl describe deployments

To remove it, run:
<code bash>
kubectl delete -f k8s-deployment-sql.yml
</code>

Our manifest refers to something that doesn't exist yet!

===== PersistentVolume =====
By default, the MySQL pod does not have persistent storage, which means that any data stored in the pod will be lost if the pod is deleted or recreated. We are going to create a persistent volume that pods can mount.
<file yaml k8s-pv.yml>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/my-pv
</file>

We can create the persistent volume manually using kubectl:
<code bash>
kubectl apply -f k8s-pv.yml
</code>

Examine the output of the following commands:
  * kubectl get pv
  * kubectl describe pv

To remove it, run:
<code bash>
kubectl delete -f k8s-pv.yml
</code>

===== PersistentVolumeClaim =====
Now we will create a "reservation" for space on the persistent volume and give it a name.

<file yaml k8s-pvc.yml>
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</file>

We can create the persistent volume claim manually using kubectl (but the persistent volume needs to exist):
<code bash>
kubectl apply -f k8s-pvc.yml
</code>

Examine the output of the following commands:
  * kubectl get pvc
  * kubectl describe pvc

To remove it, run:
<code bash>
kubectl delete -f k8s-pvc.yml
</code>

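Nothing in the claim names ''my-pv'' directly; Kubernetes binds the claim to any available PersistentVolume whose capacity and access modes satisfy the request. If you want to pin this claim to that specific volume, one option (a sketch, not part of the manifests used in this lab) is the ''volumeName'' field:

<code yaml>
# Sketch: an explicitly-bound variant of k8s-pvc.yml (optional)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: my-pv   # skip matching and bind directly to the PV created above
  resources:
    requests:
      storage: 5Gi
</code>
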
If you create all three things, the pod will come up! But for now, we want to deploy all these with Ansible. And we want to run a script to create a user and grant permissions.

===== Exposing MySQL with a Service =====
The MySQL server should be accessible from other deployments in Kubernetes, but secure from outside access. By creating a Service object, we create a stable IP address and port that can be used to connect to the MySQL server.

<file yaml k8s-service-sql.yml>
apiVersion: v1
kind: Service
metadata:
  name: sql-service
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
</file>

Inside the cluster, the server is then reachable at:
<code>mysql://sql-service.default.svc.cluster.local:3306/database_name</code>

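Another deployment's pod can reach the server through the service's DNS name. A minimal sketch of the client side — the image and environment variable names are placeholders, not part of this lab:

<code yaml>
# Hedged sketch: a client container configured to reach sql-service
containers:
- name: app
  image: your-app-image   # hypothetical application image
  env:
  - name: DB_HOST
    value: sql-service.default.svc.cluster.local
  - name: DB_PORT
    value: "3306"
</code>
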
===== Deploying MySQL with Persistent Storage using Ansible =====
This lab deploys everything in the "default" namespace. The tasks create the PersistentVolume and PersistentVolumeClaim before the Deployment, so the pod can bind its volume as soon as it is scheduled.

<file yaml deploy-sql.yml>
---
- name: Deploy MySQL with persistent volume
  hosts: localhost
  connection: local
  tasks:
    - name: Create ConfigMap
      kubernetes.core.k8s:
        state: present
        namespace: default
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: mysql-setup-script
          data:
            grant_remote_access.sh: |
              #!/bin/bash
              /usr/bin/mysql -u root -pyourpassword <<EOF
              CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'mypass';
              CREATE USER 'appuser'@'%' IDENTIFIED BY 'mypass';
              GRANT ALL ON *.* TO 'appuser'@'localhost';
              GRANT ALL ON *.* TO 'appuser'@'%';
              FLUSH PRIVILEGES;
              CREATE DATABASE app_db;
              EXIT;
              EOF
    - name: Create Secret
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-secret-sql.yml') }}"
        namespace: default
    - name: Create PersistentVolume
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-pv.yml') }}"
        namespace: default
    - name: Create PersistentVolumeClaim
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-pvc.yml') }}"
        namespace: default
    - name: Create Deployment
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-deployment-sql.yml') }}"
        namespace: default
    - name: Create Service
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-service-sql.yml') }}"
        namespace: default
</file>

Run: ''ansible-playbook deploy-sql.yml''

Take a look at the results by running:
  * kubectl get pod,node,deployment,pv,pvc,svc,cm,secret

Now confirm that the new MySQL pod is running on "node1", where we want it.
  * ''kubectl describe pod''
  * Look for a line similar to: ''Node:    node1/192.168.99.202''

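The playbook looks up ''k8s-secret-sql.yml'', which is not shown on this page. A minimal sketch of what such a file could contain — the Secret name, key, and value here are assumptions:

<code yaml>
# Hedged sketch of k8s-secret-sql.yml; data values are base64-encoded.
# "eW91cnBhc3N3b3Jk" is base64 for "yourpassword" (echo -n yourpassword | base64)
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  mysql-root-password: eW91cnBhc3N3b3Jk
</code>
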
====== Testing MySQL server ======
Congratulations on your brand new MySQL server! In this step we are going to demonstrate that this MySQL server will retain its database across reboots, upgrades, and even deleting and re-creating the deployment.

===== Connecting to the MySQL Server Pod =====
First, identify the pod name using:
<code>kubectl get pods</code>

Next, use the pod name to connect interactively.
<code>kubectl exec -it mysql-pod-name -- bash</code>

For example,
<code>kubectl exec -it mysql-deployment-6fd4f7f895-hd8dk -- bash</code>

===== Login Using mysql Command =====
The only account existing by default is ''root'', and only logins from ''localhost'' are allowed. The file ''k8s-deployment-sql.yml'' has the password in it. In this case, the password is ''yourpassword''.

Log in using the credentials (no space is permitted between -p and the password):
<code bash>
mysql -uroot -pyourpassword
</code>

Take a look at what is there by default (don't forget the trailing semicolon):
  * show grants;
  * show databases;

Use ''exit'' or ''quit'' to exit.

NOTE: You can also log in with the user ''appuser'' and password ''mypass'', which the setup script in the ConfigMap creates

===== Demonstrating Persistence =====
Log back in to the pod and re-launch ''mysql''.

Here we will:
  - create a database
  - select the new database
  - create a table in the database
  - insert a row
  - show the data we inserted

<code>
CREATE DATABASE test;
USE test;
CREATE TABLE messages (message VARCHAR(255));
INSERT INTO messages (message) VALUES ('Hello, world!');
SELECT * FROM messages;
</code>

Now we will create an Ansible playbook to remove not only the deployment but also the persistent storage.

<file yaml destroy-sql.yml>
---
- name: Destroy MySQL with persistent volume
  hosts: localhost
  connection: local
  tasks:
    - name: Remove ConfigMap
      kubernetes.core.k8s:
        api_version: v1
        kind: ConfigMap
        name: mysql-setup-script
        state: absent
        namespace: default
    - name: Remove Secret
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-secret-sql.yml') }}"
        namespace: default
    - name: Remove Deployment
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-deployment-sql.yml') }}"
        namespace: default
    - name: Remove PersistentVolume
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-pv.yml') }}"
        namespace: default
    - name: Remove PersistentVolumeClaim
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-pvc.yml') }}"
        namespace: default
    - name: Remove Service
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-service-sql.yml') }}"
        namespace: default
</file>

See what exists before you destroy everything:
<code>kubectl get pod,node,deployment,pv,pvc,cm,secret</code>

Run the playbook: ''ansible-playbook destroy-sql.yml''

See what exists after you destroyed everything:
<code>kubectl get pod,node,deployment,pv,pvc,cm,secret</code>

Deploy again:
<code>ansible-playbook deploy-sql.yml</code>

Watch as everything comes back up:
<code>kubectl get pod,node,deployment,pv,pvc,cm,secret</code>

Identify the new pod name using:
<code>kubectl get pods</code>

Connect to the new pod interactively (again, substitute the actual pod name):
<code>kubectl exec -it mysql-pod-name -- bash</code>

Reconnect to mysql:
<code>mysql -uroot -pyourpassword</code>

Check the outputs of these commands:
  * show databases;
  * use test;
  * show tables;
  * SELECT * FROM messages;

Now let's clean up the test database and table:
  * DROP TABLE messages;
  * DROP DATABASE test;

Use ''exit'' or ''quit'' to exit mysql.

====== Next Step ======
Continue to [[Step 5 - Application Pods]]

Or back to [[Step 3 - Set Up Kubernetes]] or [[Start]]

====== Optional ======
You can restore a MySQL database dump to play with. Here is the command:

<code>
mysql -u [user name] -p [target_database_name] < [dumpfilename.sql]
</code>

To take a backup from a Docker container and restore it later:

<code>$ docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql</code>

<code>$ docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql</code>

===== Allowing Remote Access =====
We created a user ''meme'' and granted permissions to connect from any IP address.

<code>
CREATE USER 'meme'@'localhost' IDENTIFIED BY 'mypass';
CREATE USER 'meme'@'%' IDENTIFIED BY 'mypass';
GRANT ALL ON *.* TO 'meme'@'localhost';
GRANT ALL ON *.* TO 'meme'@'%';
FLUSH PRIVILEGES;
EXIT;
</code>

What ports is MySQL listening on?

See the configuration files at:
  * /etc/my.cnf
  * /etc/my.cnf.d/ (none by default)
  * /etc/mysql/conf.d/ (none by default)

Normally you would run ''sudo vi /etc/mysql/mysql.conf.d/mysqld.cnf''
  * Locate and modify the "bind-address" option
    * Find the line that says bind-address = 127.0.0.1 (which restricts connections to localhost).
    * Either:
      * Comment it out: add a # at the beginning of the line to disable it.
      * Change it to the server's IP address: if you prefer a specific IP, replace 127.0.0.1 with the IP of the MySQL server (e.g., bind-address = 192.168.99.17).
    * Save and restart MySQL: ''sudo systemctl restart mysql''

Since the bind-address isn't listed, how can we double-check? This image is Oracle Linux Server 8.9 and doesn't have netstat or ps.

It looks like MySQL is only using the Unix socket, not any IP address.
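
Even without netstat or ps, the kernel's ''/proc/net/tcp'' still lists TCP listeners; addresses and ports there are hex-encoded, so MySQL's 3306 would appear as 0CEA. A hedged sketch for double-checking (it assumes gawk, for ''strtonum'', is present in the image):

<code bash>
# List listening TCP ports (state 0A = LISTEN) by decoding /proc/net/tcp
awk '$4 == "0A" {split($2, a, ":"); printf "%d\n", strtonum("0x" a[2])}' /proc/net/tcp

# Decode a single hex port by hand: 0x0CEA is 3306
printf '%d\n' 0x0CEA
</code>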
  