Step 3 - Set up Kubernetes
In our previous step we deployed our fleet of VMs. Log in to your host system as the ansible
user. We will be working out of the ansible user's home directory.
Now we are going to install Kubernetes on the VMs:
- The first will be the Kubernetes (k8s) master node
- The second will be the node that will run the SQL service (MySQL)
- The remaining VMs will be the “worker” nodes
Purpose:
- Demonstrate running a web application workload on Kubernetes
References
- Kubernetes: Up & Running (O'Reilly)
Update the inventory and ansible.cfg Files
Modify the inventory file to assign nodes to groups. Here is an example assigning my server IPs to each group. You will need to use your own IP addresses from your network.
- inventory
[master]
192.168.99.201

[sql]
192.168.99.202

[workers]
192.168.99.203
192.168.99.204
Configure the ansible.cfg file to use the updated inventory file.
- ansible.cfg
[defaults]
inventory = inventory
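As an optional sanity check (not required by the walkthrough), you can ask Ansible to echo back the parsed inventory before moving on and confirm that each host landed in the expected group:

ansible-inventory --graph
ansible all --list-hosts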
Update the hosts File on All Nodes
Update the /etc/hosts files on all the nodes with the hostnames. This allows all nodes to resolve each other by name without needing DNS.
- updatehostsfile.yml
---
- name: Update etc/hosts file
  hosts: all, localhost
  gather_facts: true
  tasks:
    - name: Populate all /etc/hosts files
      tags: etchostsupdate
      become: true
      become_user: root
      lineinfile:
        path: "/etc/hosts"
        regexp: '.*{{ item }}$'
        line: "{{ hostvars[item]['ansible_default_ipv4'].address }}\t{{ hostvars[item]['ansible_hostname'] }}\t{{ hostvars[item]['ansible_hostname'] }}"
        state: present
      with_items: '{{ groups.all }}'
Run the playbook, entering the ansible user's password when prompted:
ansible-playbook updatehostsfile.yml --ask-become-pass
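After the play completes, each node's /etc/hosts should contain entries along these lines (this assumes the example IPs above and the hostnames controller, node1, node2, and node3 used later in this guide; yours will reflect your own addresses and names):

192.168.99.201	controller	controller
192.168.99.202	node1	node1
192.168.99.203	node2	node2
192.168.99.204	node3	node3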
Note that you can now run an Ansible “ping” against every node with simply ansible all -m ping
Install Kubernetes using Ansible
Check https://github.com/torgeirl/kubernetes-playbooks for updates to the playbooks below
- Install some prerequisites on ALL the Kubernetes nodes
- kube-dependencies.yml
---
- hosts: all
  become: true
  tasks:
    - fail:
        msg: "OS should be Ubuntu 22.04, not {{ ansible_distribution }} {{ ansible_distribution_version }}"
      when: ansible_distribution != 'Ubuntu' or ansible_distribution_version != '22.04'

    - name: Update APT packages
      apt:
        update_cache: true

    - name: Reboot and wait for reboot to complete
      reboot:

    - name: Disable SWAP (Kubeadm requirement)
      shell: |
        swapoff -a

    - name: Disable SWAP in fstab (Kubeadm requirement)
      replace:
        path: /etc/fstab
        regexp: '^([^#].*?\sswap\s+sw\s+.*)$'
        replace: '# \1'

    - name: Create an empty file for the Containerd module
      copy:
        content: ""
        dest: /etc/modules-load.d/containerd.conf
        force: false

    - name: Configure modules for Containerd
      blockinfile:
        path: /etc/modules-load.d/containerd.conf
        block: |
          overlay
          br_netfilter

    - name: Create an empty file for Kubernetes sysctl params
      copy:
        content: ""
        dest: /etc/sysctl.d/99-kubernetes-cri.conf
        force: false

    - name: Configure sysctl params for Kubernetes
      lineinfile:
        path: /etc/sysctl.d/99-kubernetes-cri.conf
        line: "{{ item }}"
      with_items:
        - 'net.bridge.bridge-nf-call-iptables = 1'
        - 'net.ipv4.ip_forward = 1'
        - 'net.bridge.bridge-nf-call-ip6tables = 1'

    - name: Apply sysctl params without reboot
      command: sysctl --system

    - name: Install APT Transport HTTPS
      apt:
        name: apt-transport-https
        state: present

    - name: Add Docker apt-key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker's APT repository
      apt_repository:
        repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        filename: "docker-{{ ansible_distribution_release }}"

    - name: Add Kubernetes apt-key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present

    - name: Add Kubernetes' APT repository
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
        state: present
        filename: 'kubernetes'

    - name: Install Containerd
      apt:
        name: containerd.io
        state: present

    - name: Create Containerd directory
      file:
        path: /etc/containerd
        state: directory

    - name: Add Containerd configuration
      shell: /usr/bin/containerd config default > /etc/containerd/config.toml

    - name: Configuring the systemd cgroup driver for Containerd
      lineinfile:
        path: /etc/containerd/config.toml
        regexp: ' SystemdCgroup = false'
        line: ' SystemdCgroup = true'

    - name: Enable the Containerd service and start it
      systemd:
        name: containerd
        state: restarted
        enabled: true
        daemon_reload: true

    - name: Install Kubelet
      apt:
        name: kubelet=1.26.*
        state: present
        update_cache: true

    - name: Install Kubeadm
      apt:
        name: kubeadm=1.26.*
        state: present

    - name: Enable the Kubelet service, and enable it persistently
      service:
        name: kubelet
        enabled: true

    - name: Load br_netfilter kernel module
      modprobe:
        name: br_netfilter
        state: present

    - name: Set bridge-nf-call-iptables
      sysctl:
        name: net.bridge.bridge-nf-call-iptables
        value: 1

    - name: Set ip_forward
      sysctl:
        name: net.ipv4.ip_forward
        value: 1

    - name: Check Kubelet args in Kubelet config
      shell: grep "^Environment=\"KUBELET_EXTRA_ARGS=" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf || true
      register: check_args

    - name: Add runtime args in Kubelet config
      lineinfile:
        dest: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
        line: "Environment=\"KUBELET_EXTRA_ARGS= --runtime-cgroups=/system.slice/containerd.service --container-runtime-endpoint=unix:///run/containerd/containerd.sock\""
        insertafter: '\[Service\]'
      when: check_args.stdout == ""

    - name: Reboot and wait for reboot to complete
      reboot:

- hosts: master
  become: true
  tasks:
    - name: Install Kubectl
      apt:
        name: kubectl=1.26.*
        state: present
        force: true # allow downgrades
- Run
ansible-playbook kube-dependencies.yml
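Optionally (this is not part of the original playbooks), you can spot-check that the packages landed on every node with ad-hoc commands, for example:

ansible all -m command -a "kubeadm version -o short"
ansible all -m command -a "containerd --version"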
- Configure the Kubernetes cluster on the master node
- master.yml
---
- hosts: master
  become: true
  tasks:
    - name: Create an empty file for Kubeadm configuring
      copy:
        content: ""
        dest: /etc/kubernetes/kubeadm-config.yaml
        force: false

    - name: Configuring the container runtime including its cgroup driver
      blockinfile:
        path: /etc/kubernetes/kubeadm-config.yaml
        block: |
          kind: ClusterConfiguration
          apiVersion: kubeadm.k8s.io/v1beta3
          networking:
            podSubnet: "10.244.0.0/16"
          ---
          kind: KubeletConfiguration
          apiVersion: kubelet.config.k8s.io/v1beta1
          runtimeRequestTimeout: "15m"
          cgroupDriver: "systemd"
          systemReserved:
            cpu: 100m
            memory: 350M
          kubeReserved:
            cpu: 100m
            memory: 50M
          enforceNodeAllocatable:
            - pods

    - name: Initialize the cluster (this could take some time)
      shell: kubeadm init --config /etc/kubernetes/kubeadm-config.yaml >> cluster_initialized.log
      args:
        chdir: /home/ansible
        creates: cluster_initialized.log

    - name: Create .kube directory
      become: true
      become_user: ansible
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: Copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/ansible/.kube/config
        remote_src: true
        owner: ansible

    - name: Install Pod network
      become: true
      become_user: ansible
      shell: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.log
      args:
        chdir: $HOME
        creates: pod_network_setup.log
- Run
ansible-playbook master.yml
- SSH to the master and verify the master node reports status Ready
ssh controller kubectl get nodes
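The exact age and patch version will differ in your environment, but assuming the master VM is named controller as in the earlier steps, the output should look roughly like:

NAME         STATUS   ROLES           AGE   VERSION
controller   Ready    control-plane   3m    v1.26.x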
Set up the SQL Node
Modify the sql.yml file below, replacing MASTERIP with the IP address of your master node in two (2) places.
- sql.yml
---
- hosts: master
  become: true
  # gather_facts: false
  tasks:
    - name: Get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: Set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

- hosts: sql
  become: true
  tasks:
    - name: TCP port 6443 on master is reachable from worker
      wait_for: "host={{ 'MASTERIP' }} port=6443 timeout=1"

    - name: Join cluster
      shell: "{{ hostvars['MASTERIP'].join_command }} >> node_joined.log"
      args:
        chdir: /home/ansible
        creates: node_joined.log
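For reference, the join command that kubeadm token create --print-join-command produces (and that the second play runs on the SQL node) has roughly this shape; the token and CA hash are unique to your cluster, and the IP shown here is the example master IP from the inventory above:

kubeadm join 192.168.99.201:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>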
Run the playbook: ansible-playbook sql.yml
- SSH to the master and verify all the nodes return status Ready
ssh controller kubectl get nodes
Set up the Worker Nodes
Modify the workers.yml file below, replacing MASTERIP with the IP address of your master node in two (2) places.
- workers.yml
---
- hosts: master
  become: true
  # gather_facts: false
  tasks:
    - name: Get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: Set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

- hosts: workers
  become: true
  tasks:
    - name: TCP port 6443 on master is reachable from worker
      wait_for: "host={{ 'MASTERIP' }} port=6443 timeout=1"

    - name: Join cluster
      shell: "{{ hostvars['MASTERIP'].join_command }} >> node_joined.log"
      args:
        chdir: /home/ansible
        creates: node_joined.log
Run the playbook: ansible-playbook workers.yml
- SSH to the master and verify all the nodes return status Ready
ssh controller kubectl get nodes
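As an extra check (optional, not part of the original steps), confirm that the kube-system and Flannel pods on every node have reached the Running state:

ssh controller kubectl get pods -A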
Install kubectl on the Host
Install kubectl on the host (the Ansible controller) to allow for automation with Kubernetes. The playbook also installs the Python kubernetes, openshift, and pyyaml libraries, which the Ansible k8s module used later for node labeling relies on.
Create the file /home/ansible/kubectlcontrolnode.yml:
- kubectlcontrolnode.yml
---
- hosts: localhost
  become: true
  gather_facts: false
  tasks:
    - name: Update APT packages
      apt:
        pkg:
          - python3-pip
        update_cache: true

    - name: Add Kubernetes apt-key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present

    - name: Add Kubernetes' APT repository
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
        state: present
        filename: 'kubernetes'

    - name: Install Kubectl
      apt:
        name: kubectl=1.26.*
        state: present
        force: true # allow downgrades

    - name: Install pre-requisites
      pip:
        name:
          - openshift
          - pyyaml
          - kubernetes
Run the playbook and enter the password for the user ansible when prompted:
ansible-playbook kubectlcontrolnode.yml --ask-become-pass
Running kubectl version will fail at this point because you don't have credentials.
Copy the credentials from the master node:
scp -r controller:/home/ansible/.kube ~/
Confirm it's working now by running
kubectl version
kubectl get nodes
Apply Labels to the Nodes
If you want to experiment with manually applying a label
- kubectl label nodes node1 my-role=sql
- kubectl get nodes --show-labels
- kubectl describe nodes node1
- kubectl label nodes node1 my-role- (removes the label)
The following playbook will apply the appropriate labels to the nodes.
- labels.yml
---
- hosts: localhost      # Run the task on the control machine
  connection: local     # Use local connection, as we're not connecting to remote hosts
  tasks:
    - name: Label sql node
      k8s:
        state: present  # Ensure the label is present
        definition:
          apiVersion: v1
          kind: Node
          metadata:
            name: node1
            labels:
              my-role: sql

    - name: Label worker nodes
      k8s:
        state: present  # Ensure the label is present
        definition:
          apiVersion: v1
          kind: Node
          metadata:
            name: "{{ item }}"
            labels:
              my-role: worker
      loop:
        - node2
        - node3
List nodes with labels
kubectl get nodes --show-labels
Or, kubectl describe nodes node1
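These labels become useful when scheduling workloads. As a minimal sketch of how they get used (hypothetical resource name and placeholder image; the real MySQL manifest comes in Step 4), a Deployment can pin its pods to the SQL node with a nodeSelector:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical name, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      nodeSelector:
        my-role: sql         # matches the label applied to node1 above
      containers:
        - name: example
          image: nginx       # placeholder image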
Next Step
Continue to Step 4 - MySQL Server
Or back to Step 2 - Deploy the VMs or Start.