Step 4 - MySQL Server
Now that we have Kubernetes up and running, we will get our MySQL container up and running. We will need storage for the SQL database so that it isn't lost when the pod is deleted or recreated. We also want the MySQL container/pod to run on a specific node which has more resources.
IMPORTANT In this lab we are using the environment variable approach to set the MySQL root user password. See the Optional section for ideas to improve on this in production.
Create MySQL Deployment
To deploy MySQL on Kubernetes, we will use a Deployment object, which is a higher-level abstraction that manages a set of replicas of a pod. The pod contains the MySQL container along with any necessary configuration.
At the time of writing the latest mysql image being pulled is version 8.3.0 and runs on Oracle Linux Server 8.9. We will demonstrate using the base image, and mention an alternative image which enables the modern PDO driver.
There are three Kubernetes components we will use:
- Deployment
- PersistentVolume (stored on the host aka Node)
- PersistentVolumeClaim
Finally, we will use Ansible to do the work.
Deployment
Create the deployment file for the MySQL server.
The image documentation states that by mounting our mysql.sql ConfigMap to the pod at docker-entrypoint-initdb.d, the SQL script should automatically be processed when the pod deploys and there is no database yet.
However, in testing, we had to do a combination of things to get the configuration done:
- add a sleep command to allow the MySQL server to initialize fully; it will not accept any connections until that is complete
- we can't use the mounted location for our mysql command, so we copy the file from the mount to the root directory and execute it from there
- k8s-deployment-sql.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        my-role: sql  # restrict scheduling to the node with the label my-role: sql
      containers:
        - name: mysql
          image: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: yourpassword
          ports:
            - containerPort: 3306
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "cp /docker-entrypoint-initdb.d/mysql.sql / && sleep 10 && /bin/mysql -uroot -pyourpassword -e \"SOURCE /mysql.sql\""]
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: mysql-script
              mountPath: /docker-entrypoint-initdb.d
            - name: mysql-bind
              mountPath: /etc/mysql/conf.d/mysqld.cnf
              subPath: mysqld.cnf
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pvc
        - name: mysql-script
          configMap:
            name: mysql-setup-script
        - name: mysql-bind
          configMap:
            name: mysql-bind
We can launch the deployment manually using kubectl:
kubectl apply -f k8s-deployment-sql.yml
What happened and what didn't happen? Examine the output of the following commands
- kubectl get pods
- kubectl get deployments
- kubectl describe deployments
Our manifest refers to something that doesn't exist yet!
To remove it, run:
kubectl delete -f k8s-deployment-sql.yml
PersistentVolume
By default, the MySQL pod does not have persistent storage, which means that any data stored in the pod will be lost if the pod is deleted or recreated. We are going to create a persistent volume that pods can mount. The actual storage is on the host (Node's file system).
- k8s-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/my-pv
We can create the persistent volume manually using kubectl:
kubectl apply -f k8s-pv.yml
Examine the output of the following commands
- kubectl get pv
- kubectl describe pv
To remove it, run:
kubectl delete -f k8s-pv.yml
PersistentVolumeClaim
Now we will create a “reservation” for space on the persistent volume and give it a name.
- k8s-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
We can create the persistent volume claim manually using kubectl (but the persistent volume needs to exist):
kubectl apply -f k8s-pvc.yml
Examine the output of the following commands
- kubectl get pvc
- kubectl describe pvc
To remove it, run:
kubectl delete -f k8s-pvc.yml
If you create all three things, the pod will come up! But for now, we want to deploy all of these with Ansible. And we want to run a script to create a user and grant permissions.
Exposing MySQL with a Service
The MySQL server should be accessible from other deployments in Kubernetes, but secure from outside access. By creating a Service object, we create a name and port that can be used to connect to the MySQL server. The other pods will be able to use the DNS name sql-service.
apiVersion: v1
kind: Service
metadata:
  name: sql-service
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
Since we are building everything in the default namespace, the full link to our service is:
mysql://sql-service.default.svc.cluster.local:3306/database_name
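To sanity-check the service from inside the cluster, you can launch a throwaway MySQL client pod and connect through the service DNS name. This is a sketch; the pod name mysql-client is arbitrary, and appuser/mypass are the credentials created by the init script later in this lab.

```shell
# Run a temporary MySQL client pod (deleted on exit) and connect
# to the database through the service's cluster DNS name.
kubectl run mysql-client --rm -it --image=mysql --restart=Never -- \
  mysql -h sql-service.default.svc.cluster.local -uappuser -pmypass app_db
```

If the service, deployment, and init script are all in place, this should drop you at a mysql> prompt.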
Deploying MySQL with Persistent Storage using Ansible
This lab deploys everything in the “default” namespace.
- Leverages the .yml files created above
- Mounts the MySQL configuration script so it can be run during the deployment
- Mounts the persistent storage space for the MySQL server to use
- Mounts a custom mysql configuration file to enable connections from other pods (the bind-address statement)
- deploy-sql.yml
---
- name: Deploy MySQL with persistent volume
  hosts: localhost
  connection: local
  tasks:
    - name: Create ConfigMap for database init
      kubernetes.core.k8s:
        state: present
        namespace: default
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: mysql-setup-script
          data:
            mysql.sql: |
              CREATE DATABASE IF NOT EXISTS app_db;
              CREATE USER IF NOT EXISTS 'appuser'@'%' IDENTIFIED BY 'mypass';
              GRANT ALL ON *.* TO 'appuser'@'%';
              CREATE USER IF NOT EXISTS 'appuser'@'localhost' IDENTIFIED BY 'mypass';
              GRANT ALL ON *.* TO 'appuser'@'localhost';
              FLUSH PRIVILEGES;
              USE app_db;
              CREATE TABLE IF NOT EXISTS app_user (
                fname varchar(255),
                lname varchar(255),
                email varchar(255) PRIMARY KEY,
                password varchar(64),
                address varchar(255),
                contact varchar(15),
                dob date,
                login timestamp);
    - name: Create ConfigMap for binding
      kubernetes.core.k8s:
        state: present
        namespace: default
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: mysql-bind
          data:
            mysqld.cnf: |
              [mysqld]
              bind-address = 0.0.0.0
    - name: Create Deployment
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-deployment-sql.yml') }}"
        namespace: default
    - name: Create PersistentVolume
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-pv.yml') }}"
        namespace: default
    - name: Create PersistentVolumeClaim
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-pvc.yml') }}"
        namespace: default
    - name: Create Service
      kubernetes.core.k8s:
        state: present
        definition: "{{ lookup('file', 'k8s-service-sql.yml') }}"
        namespace: default
Run: ansible-playbook deploy-sql.yml
Take a look at the results by running:
- kubectl get pod,node,deployment,pv,pvc,svc,cm
Now confirm that the new MySQL pod is running on “node1”, where we want it.
kubectl describe pod
- Look for a line similar to:
Node: node1/192.168.99.202
Testing MySQL server
Congratulations on your brand new MySQL server! In this step we are going to demonstrate that this MySQL server will retain its database across reboots, upgrades, and even deleting and re-creating the deployment.
Connecting to the MySQL Server Pod
First, identify the pod name using
kubectl get pods
Next, use the pod name to connect interactively.
kubectl exec -it mysql-pod-name -- bash
For example,
kubectl exec -it mysql-deployment-6fd4f7f895-hd8dk -- bash
Login Using mysql Command
After connecting interactively to the MySQL pod, test logging in from the command line
- mysql -uroot -pyourpassword
- mysql -uappuser -pmypass
Take a look at what is there by default (don't forget the trailing semicolon):
- show grants;
- show databases;
- select user,host from mysql.user;
- use app_db;
- show tables;
Use exit or quit to exit.
Demonstrating Persistence
Log back in to the pod and re-launch mysql.
Here we will:
- create a database
- select the new database
- create a table in the database
- insert a row
- show the data we inserted
CREATE DATABASE test;
USE test;
CREATE TABLE messages (message VARCHAR(255));
INSERT INTO messages (message) VALUES ('Hello, world!');
SELECT * FROM messages;
Now we will create an Ansible playbook to remove:
- the deployment and all the pods
- the persistent volume and the volume claim and the service
- remove-sql.yml
---
- name: Remove MySQL with persistent volume
  hosts: localhost
  connection: local
  tasks:
    - name: Remove ConfigMap
      kubernetes.core.k8s:
        api_version: v1
        kind: ConfigMap
        name: mysql-setup-script
        state: absent
        namespace: default
    - name: Remove Deployment
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-deployment-sql.yml') }}"
        namespace: default
    - name: Remove PersistentVolume
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-pv.yml') }}"
        namespace: default
    - name: Remove PersistentVolumeClaim
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-pvc.yml') }}"
        namespace: default
    - name: Remove Service
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-service-sql.yml') }}"
        namespace: default
Before you run it, take a look at what is there in Kubernetes.
kubectl get pod,node,deployment,pv,pvc,cm
Remove it all with ansible-playbook remove-sql.yml and look at what is left.
kubectl get pod,node,deployment,pv,pvc,cm
Re-deploy again with ansible-playbook deploy-sql.yml and check again.
kubectl get pod,node,deployment,pv,pvc,cm
You can re-run that command and watch all the components come up.
Now let's check if our data is still here.
Identify the new pod name using
kubectl get pods
Connect to the new pod interactively. (again, substitute the actual pod name)
kubectl exec -it mysql-pod-name -- bash
Reconnect to mysql
mysql -uroot -pyourpassword
Check the outputs of these commands:
- show databases;
- use test;
- show tables;
- SELECT * FROM messages;
Use exit or quit to exit mysql.
Completely Destroy the MySQL Pod and its Data
This time we will erase the storage on the SQL node hosting the MySQL pod. This will remove and clear all persistent data.
- destroy-sql.yml
---
- name: Destroy MySQL with persistent volume
  hosts: localhost
  connection: local
  tasks:
    - name: Remove ConfigMap
      kubernetes.core.k8s:
        api_version: v1
        kind: ConfigMap
        name: mysql-setup-script
        state: absent
        namespace: default
    - name: Remove ConfigMap
      kubernetes.core.k8s:
        api_version: v1
        kind: ConfigMap
        name: mysql-bind
        state: absent
        namespace: default
    - name: Remove Deployment
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-deployment-sql.yml') }}"
        namespace: default
    - name: Remove PersistentVolume
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-pv.yml') }}"
        namespace: default
    - name: Remove PersistentVolumeClaim
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-pvc.yml') }}"
        namespace: default
    - name: Remove Service
      kubernetes.core.k8s:
        state: absent
        definition: "{{ lookup('file', 'k8s-service-sql.yml') }}"
        namespace: default

- name: Clean hostPath directory
  hosts: sql
  become: true
  tasks:
    - name: Clean hostPath directory
      file:
        path: /data/my-pv
        state: absent
Now you can destroy and re-deploy and the data will be gone.
ansible-playbook destroy-sql.yml
ansible-playbook deploy-sql.yml
Next Step
Continue to Step 5 - Application Pods
Or back to Step 3 - Set Up Kubernetes or Start
Optional
Use k8s Secrets to Pass MySQL root User Password
In this lab we are using the environment variable approach to set the MySQL root user password for simplicity's sake. For a production environment, you want to remove passwords from all manifest (yml) files.
This is the poor way to do it in our lab:
env:
  - name: MYSQL_ROOT_PASSWORD
    value: yourpassword
There are a few ways to improve this:
- create the k8s secret (todo)
- in k8s-deploy-sql.yml, pull the secret
env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        key: MYSQL_ROOT_PASSWORD
        name: mysql-root-password
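As a sketch of the missing step, the secret referenced above could be created from the command line with kubectl create secret generic mysql-root-password --from-literal=MYSQL_ROOT_PASSWORD=yourpassword, or declared in a manifest (the name mysql-root-password matches the secretKeyRef above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-password
type: Opaque
stringData:
  # Kubernetes stores this base64-encoded; it is obfuscation, not encryption
  MYSQL_ROOT_PASSWORD: yourpassword
```

Either way, the password no longer appears in the deployment manifest itself.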
Backups and Restoration
For a production application, you would replicate your MySQL database to another node. For our lab you could simply take a snapshot of the SQL node's data directory at /data/my-pv.
You may want to use the mysql command itself to dump a database to file or restore from file.
mysql -u [user name] -p [target_database_name] < [dumpfilename.sql]
The following is an example to back up and then restore a database from/to a docker container. You can adapt the same commands to work with your MySQL pod under Kubernetes.
$ docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql
$ docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql
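Adapted for this lab, the same pattern works with kubectl exec. This is a sketch; substitute your actual pod name for the mysql-pod-name placeholder, as in the earlier steps.

```shell
# Dump all databases from the MySQL pod to a file on your workstation
kubectl exec mysql-pod-name -- \
  sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > all-databases.sql

# Restore the dump back into the pod (-i streams the file via stdin)
kubectl exec -i mysql-pod-name -- \
  sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < all-databases.sql
```

Because MYSQL_ROOT_PASSWORD is set in the pod's environment, the password does not need to appear on your workstation's command line.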
Allowing Remote Access
The reference mysql image we are using only allows Unix socket connections from the localhost. Since it is key for our application pods to access our MySQL service, we added a configuration file to allow remote TCP connections.
Next, we granted access from any IP address for the user appuser. The % as the host means any host can connect.
How is this secured? The service we configured for SQL only exposes the service internal to the k8s node. It is not exposed outside the node, and no external connection can be made.