Step 2 - Deploy the VMs
Create Customized Template for Ubuntu Autoinstall
A custom “user-data” file is required for an unattended installation of Ubuntu. We will create this template file for use in our Ansible playbook to Generate Custom Unattended Ubuntu Install ISO.
Save the file and customize it as needed. Note that this template references Global.system_name_prefix, so make sure that variable is defined in the variables file you use with it.
- user-data.j2
#cloud-config
autoinstall:
  version: 1
  ssh:
    install-server: true
    # option "allow-pw" defaults to `true` if authorized_keys is empty, `false` otherwise.
    allow-pw: false
  storage:
    layout:
      name: lvm
      match:
        size: largest
  user-data:
    disable_root: true
    timezone: America/New_York
    package_upgrade: true
    packages:
      - network-manager
      - lldpd
      - git
      - python3-pip
      - ansible
      - arp-scan
    users:
      - name: {{ Global.username }}
        primary_group: users
        groups: sudo
        lock_passwd: true
        shell: /bin/bash
        ssh_authorized_keys:
          - "{{ Global.ssh_key }}"
        sudo: ALL=(ALL) NOPASSWD:ALL
    ansible:
      install_method: pip
      package_name: ansible
      #run_user: ansible
      galaxy:
        actions:
          - ["ansible-galaxy", "collection", "install", "community.general"]
  late-commands:
    # randomly generate the hostname & show the IP at boot
    - echo "{{ Global.system_name_prefix }}-$(openssl rand -hex 3)" > /target/etc/hostname
    # dump the IP out at the login screen
    - echo "Ubuntu 22.04 LTS \nIP - $(hostname -I)\n" > /target/etc/issue
Create the variables.yml file
Create a new variables.yml file in the home directory (you are now the ansible user, so that is /home/ansible).
Modify the file as follows:
- Replace the ssh key with the one you saved earlier
- Replace the bridge_interface_name value with the name of your host machine's network interface
  - e.g., run ip a to list the interfaces and their IP addresses (see the example below)
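For instance, the brief form of the same command makes the interface names easy to spot (the names and addresses shown here are illustrative; yours will differ):

ip -br a
# lo       UNKNOWN  127.0.0.1/8 ::1/128
# enp8s0   UP       192.168.99.10/24   <-- use this name for bridge_interface_name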
- variables.yml
---
Global:
  bridge_interface_name: "enp8s0"
  username: ansible
  ssh_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCIlNJecNoDFTVdVqDRGYjxH22Ih1qq5VPhMtRrrECa8BwpICxClbYh0XcLzijaae1sI0j+jBo/BHXyrER010NEyq18ZWNI+NUvg+wF77CfPBFxwla3Xsel8rBgu47RMn4W5g1KHrw9Sy0OzfljKBlXuuRwrzHyT6qoi4Iu3ltySrPj1zMCbybiQCI0A77T6GQpBbxMZBJJNYfAXchNnxLIAW8Q2taXJ+JYbHg0mtL/lp35vwRNgUFOwm8l7YiehlcTybEHGoN+aI/0AWkl1tvEaol8sluKba01hlM4n1JWI+BQewu69l0Lqnoiiob+WvqchjpEw/QwDo2HaB5YAZCNX89QO4DHLyg6KTvsfvQhQ70HcXGfQTatqXlBkKa83ny4Xn3doA036Z+CbpZVbzYF0swpjEXj/GXvZzxILkp2HF2auIjT/HPX5Jecq/VBDqDRbPKyyJjdM4pRiHcCel2JLPzwLhMJHnXKPtiCppxIyrL3PXgh1oY/fhMDN2Bo6sM= autobox@autobox"
  workingdir: "{{ lookup('env','HOME') }}/ubuntu-autoinstall"
  inventory_file: "{{ lookup('env','HOME') }}/inventory"
  vboxmanage_path: /usr/bin/vboxmanage
  ubuntu_iso: https://cdimage.ubuntu.com/ubuntu-server/jammy/daily-live/current/jammy-live-server-amd64.iso
  ubuntu_iso_filename: jammy-live-server-amd64.iso
  new_iso_filename: ubuntu-22.04-autoinstall.iso
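As an optional sanity check, you can confirm that variables.yml parses and the values load by running an ad-hoc debug call from the same directory (this assumes Ansible is already installed from Step 1):

ansible localhost -e @variables.yml -m debug -a "var=Global"

A YAML syntax error will surface here immediately, rather than in the middle of a playbook run.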
Create the servers.yml file
For this level of automation, we need to know the IP addresses of the servers. Therefore, instead of relying on DHCP, we will build the servers with static IP addresses.
These static IP addresses:
- must be on the same subnet as your host machine
- must be unique (no conflicts); assign IP addresses that are outside your router's DHCP scope/range
  - unlike Windows-style "fixed" addresses (DHCP reservations inside the scope), these are true static addresses configured on the host itself, so they must NOT be in the DHCP scope
- are expressed in CIDR notation for the sake of the autoinstaller
  - e.g., 192.168.1.25 with the subnet mask 255.255.255.0 = 192.168.1.25/24
Since we are using static IP addresses, you will also need to provide:
- IPv4Gateway: the IP address of your router
  - on your host computer, run route -n from a terminal to find the gateway IP
- IPv4DNS: a DNS server
  - on your host computer, run nmcli device show; it lists the IP4.DNS[1] address and the IP4.GATEWAY address
  - you can always use the Google DNS IP 8.8.8.8 here
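If nmcli is not available on your host, the following standard commands (output will vary by machine) provide the same information:

ip route show default      # prints "default via <gateway IP> dev <interface> ..."
resolvectl status          # lists the DNS servers currently in use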
The Search domain should match the domain you configured on your router, if any; otherwise, lablocal is a safe value.
You will also specify VM resources for each server:
- DiskSize in MB (10240 MB = 10 GB)
- MemorySize in MB (1024 MB = 1 GB)
- CPUs as a number of virtual cores
You will also enter the Name (VM name), Hostname (the OS hostname inside the VM), and the local sudoer username and password.
In the following example, the lab router (192.168.99.254) provides DNS resolution for its clients.
Lab server list
- 1x Controller
- 2 CPU cores
- 2 GB RAM
- 60 GB storage
- 1x SQL server node
- 2 CPU cores
- 2 GB RAM
- 250 GB storage
- 2x app nodes
- 1 CPU core
- 2 GB RAM
- 60 GB storage
This consumes 6 of the 8 cores in the NUC host, 8 GB of RAM, and less than 500 GB of storage.
If you have more cores, give 2 cores to each app node and add a third app node (10 VM cores + 2 for host overhead = at least 12 cores).
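The MB values used in servers.yml below follow directly from this sizing, using binary units (1 GB = 1024 MB):

2 GB RAM    =  2048 MB   (MemorySize for all nodes)
60 GB disk  = 61440 MB   (DiskSize for controller, node2, node3)
250 GB disk = 256000 MB  (DiskSize for node1)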
- servers.yml
---
Server_List:
  - Name: controller
    Deploy: true
    Configuration:
      Storage:
        DiskSize: 61440
      Compute:
        MemorySize: 2048
        CPUs: 2
      OS:
        User: ubuntu
        Password: "virtualbox1!"
        Hostname: controller
        IPv4Address: 192.168.99.201/24
        IPv4Gateway: 192.168.99.254
        IPv4DNS: 192.168.99.254
        SearchDomain: lablocal
  - Name: node1
    Deploy: true
    Configuration:
      Storage:
        DiskSize: 256000
      Compute:
        MemorySize: 2048
        CPUs: 2
      OS:
        User: ubuntu
        Password: "virtualbox1!"
        Hostname: node1
        IPv4Address: 192.168.99.202/24
        IPv4Gateway: 192.168.99.254
        IPv4DNS: 192.168.99.254
        SearchDomain: lablocal
  - Name: node2
    Deploy: true
    Configuration:
      Storage:
        DiskSize: 61440
      Compute:
        MemorySize: 2048
        CPUs: 1
      OS:
        User: ubuntu
        Password: "virtualbox1!"
        Hostname: node2
        IPv4Address: 192.168.99.203/24
        IPv4Gateway: 192.168.99.254
        IPv4DNS: 192.168.99.254
        SearchDomain: lablocal
  - Name: node3
    Deploy: true
    Configuration:
      Storage:
        DiskSize: 61440
      Compute:
        MemorySize: 2048
        CPUs: 1
      OS:
        User: ubuntu
        Password: "virtualbox1!"
        Hostname: node3
        IPv4Address: 192.168.99.204/24
        IPv4Gateway: 192.168.99.254
        IPv4DNS: 192.168.99.254
        SearchDomain: lablocal
Create the fleet-user-data.j2 file
Next, create the Jinja2 (j2) template used to generate the user-data file for each server's automatic-installer ISO image. You can customize this (e.g., the timezone). The build playbook below renders it once per server, so item refers to the current entry in Server_List.
- fleet-user-data.j2
#cloud-config
autoinstall:
  version: 1
  ssh:
    install-server: true
    allow-pw: false
  storage:
    layout:
      name: lvm
      match:
        size: largest
  network:
    network:
      version: 2
      ethernets:
        zz-all-en:
          match:
            name: "en*"
          dhcp4: no
          addresses: [{{ item.Configuration.OS.IPv4Address }}]
          gateway4: {{ item.Configuration.OS.IPv4Gateway }}
          nameservers:
            addresses: [{{ item.Configuration.OS.IPv4DNS }}]
  user-data:
    disable_root: true
    timezone: America/New_York
    package_upgrade: true
    packages:
      - network-manager
      - lldpd
      - git
      - python3-pip
      - ansible
      - arp-scan
    users:
      - name: {{ Global.username }}
        primary_group: users
        groups: sudo
        lock_passwd: true
        shell: /bin/bash
        ssh_authorized_keys:
          - "{{ Global.ssh_key }}"
        sudo: ALL=(ALL) NOPASSWD:ALL
    ansible:
      install_method: pip
      package_name: ansible
      galaxy:
        actions:
          - ["ansible-galaxy", "collection", "install", "community.general"]
  late-commands:
    - echo "{{ item.Configuration.OS.Hostname }}" > /target/etc/hostname
    - echo "Ubuntu 22.04 LTS \nIP - $(hostname -I)\n" > /target/etc/issue
Create the Playbook to Deploy the VMs in VirtualBox while Managed by Ansible
The next playbook is the one that will do all the work.
Overview:
- set up the working directory
- download the Ubuntu 22.04 server ISO (this may take some time depending on your Internet connection)
- create a custom bootable ISO for each server
- create a VM for each server with the required resources
- power on the new VMs in headless mode
- add the static IP addresses assigned to the VMs to the inventory file
- wait for the servers to boot, be configured, and come online
- add the ssh keys to the known_hosts file to enable seamless control using Ansible
- build_fleet.yml
---
- hosts: localhost # Run actions on the local machine
  name: build_fleet.yml
  connection: local
  gather_facts: false
  vars_files:
    - variables.yml
    - servers.yml
  tasks:
    - name: Create working directory
      file:
        path: "{{ Global.workingdir }}"
        state: directory
        mode: "755"

    - name: Download the latest ISO
      get_url:
        url: "{{ Global.ubuntu_iso }}"
        dest: "{{ Global.workingdir }}/jammy-live-server-amd64.iso"
        force: false

    - name: Create source files directory
      file:
        path: "{{ Global.workingdir }}/{{ item.Name }}/source-files"
        state: directory
        mode: "755"
      loop: "{{ Server_List }}"
      when: item.Deploy

    - name: Extract ISO
      command: "7z -y x {{ Global.workingdir }}/{{ Global.ubuntu_iso_filename }} -o{{ Global.workingdir }}/{{ item.Name }}/source-files"
      changed_when: false
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Add write permissions to extracted files
      # Using chmod as Ansible (Python) can't handle the recursion depth on the Ubuntu ISO
      command: "chmod -R +w {{ Global.workingdir }}/{{ item.Name }}/source-files"
      changed_when: false
      when: item.Deploy
      loop: "{{ Server_List }}"

    ## Start workaround for issue with Ubuntu autoinstall
    ## Details of the issue and the workaround: https://askubuntu.com/questions/1394441/ubuntu-20-04-3-autoinstall-with-embedded-user-data-crashing-i-got-workaround
    - name: Extract the Packages.gz file on Ubuntu ISO
      command: "gunzip -f {{ Global.workingdir }}/{{ item.Name }}/source-files/dists/jammy/main/binary-amd64/Packages.gz --keep"
      changed_when: false
      when: item.Deploy
      loop: "{{ Server_List }}"
    ## End workaround for issue with Ubuntu autoinstall

    - name: Rename [BOOT] directory
      command: "mv {{ Global.workingdir }}/{{ item.Name }}/source-files/'[BOOT]' {{ Global.workingdir }}/{{ item.Name }}/BOOT"
      changed_when: false
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Edit grub.cfg to modify menu
      blockinfile:
        path: "{{ Global.workingdir }}/{{ item.Name }}/source-files/boot/grub/grub.cfg"
        create: true
        block: |
          menuentry "Autoinstall Ubuntu Server" {
              set gfxpayload=keep
              linux /casper/vmlinuz quiet autoinstall ds=nocloud\;s=/cdrom/server/ ---
              initrd /casper/initrd
          }
        insertbefore: 'menuentry "Try or Install Ubuntu Server" {'
        state: present
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Edit grub.cfg to set timeout to 5 seconds
      replace:
        path: "{{ Global.workingdir }}/{{ item.Name }}/source-files/boot/grub/grub.cfg"
        regexp: '^(set timeout=30)$'
        replace: 'set timeout=5'
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Create directory to store user-data and meta-data
      file:
        path: "{{ Global.workingdir }}/{{ item.Name }}/source-files/server"
        state: directory
        mode: "755"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Create empty meta-data file in directory
      file:
        path: "{{ Global.workingdir }}/{{ item.Name }}/source-files/server/meta-data"
        state: touch
        mode: "755"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Copy user-data file to directory using template
      template:
        src: ./fleet-user-data.j2
        dest: "{{ Global.workingdir }}/{{ item.Name }}/source-files/server/user-data"
        mode: "755"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Create custom ISO
      command: "xorriso -as mkisofs -r \
                -V 'Ubuntu 22.04 LTS AUTO (EFIBIOS)' \
                -o {{ Global.workingdir }}/{{ item.Name }}/{{ Global.new_iso_filename }} \
                --grub2-mbr ../BOOT/1-Boot-NoEmul.img \
                -partition_offset 16 \
                --mbr-force-bootable \
                -append_partition 2 28732ac11ff8d211ba4b00a0c93ec93b ../BOOT/2-Boot-NoEmul.img \
                -appended_part_as_gpt \
                -iso_mbr_part_type a2a0d0ebe5b9334487c068b6b72699c7 \
                -c '/boot.catalog' \
                -b '/boot/grub/i386-pc/eltorito.img' \
                -no-emul-boot -boot-load-size 4 -boot-info-table --grub2-boot-info \
                -eltorito-alt-boot \
                -e '--interval:appended_partition_2:::' \
                -no-emul-boot \
                ."
      args:
        chdir: "{{ Global.workingdir }}/{{ item.Name }}/source-files/"
      changed_when: false
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Remove BOOT directory
      file:
        path: "{{ Global.workingdir }}/{{ item.Name }}/BOOT"
        state: absent
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Delete source files
      file:
        path: "{{ Global.workingdir }}/{{ item.Name }}/source-files"
        state: absent
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Create VM
      command: "{{ Global.vboxmanage_path }} createvm --name {{ item.Name }} --ostype Ubuntu_64 --register"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Create VM storage
      command: "{{ Global.vboxmanage_path }} createmedium disk --filename {{ item.Name }}.vdi --size {{ item.Configuration.Storage.DiskSize }} --format=VDI"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Add IDE controller
      command: "{{ Global.vboxmanage_path }} storagectl {{ item.Name }} --name IDE --add IDE --controller PIIX4"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Attach DVD drive
      command: "{{ Global.vboxmanage_path }} storageattach {{ item.Name }} --storagectl IDE --port 0 --device 0 --type dvddrive --medium {{ Global.workingdir }}/{{ item.Name }}/{{ Global.new_iso_filename }}"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Add SATA controller
      command: "{{ Global.vboxmanage_path }} storagectl {{ item.Name }} --name SATA --add SAS --controller LsiLogicSas"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Attach drive
      command: "{{ Global.vboxmanage_path }} storageattach {{ item.Name }} --storagectl SATA --port 0 --device 0 --type hdd --medium {{ item.Name }}.vdi"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Boot order
      command: "{{ Global.vboxmanage_path }} modifyvm {{ item.Name }} --boot1 disk --boot2 dvd --boot3 none --boot4 none"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Set VM CPU, RAM, video RAM
      command: "{{ Global.vboxmanage_path }} modifyvm {{ item.Name }} --cpus {{ item.Configuration.Compute.CPUs }} --memory {{ item.Configuration.Compute.MemorySize }} --vram 16"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Settings 1
      command: "{{ Global.vboxmanage_path }} modifyvm {{ item.Name }} --graphicscontroller vmsvga --hwvirtex on --nested-hw-virt on"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Settings 2
      command: "{{ Global.vboxmanage_path }} modifyvm {{ item.Name }} --ioapic on --pae off --acpi on --paravirtprovider default"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Settings 3
      command: "{{ Global.vboxmanage_path }} modifyvm {{ item.Name }} --nestedpaging on --keyboard ps2 --uart1 0x03F8 4"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Settings 4
      command: "{{ Global.vboxmanage_path }} modifyvm {{ item.Name }} --uartmode1 disconnected --uarttype1 16550A --macaddress1 auto --cableconnected1 on"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Network adapter
      command: "{{ Global.vboxmanage_path }} modifyvm {{ item.Name }} --nic1 bridged --bridgeadapter1 {{ Global.bridge_interface_name }}"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Start the virtual machine
      command: "{{ Global.vboxmanage_path }} startvm {{ item.Name }} --type headless"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Add to inventory file
      lineinfile:
        path: "{{ Global.inventory_file }}"
        line: "{{ item.Configuration.OS.IPv4Address.split('/')[0] }}"
        create: true
        regexp: "^{{ item.Configuration.OS.IPv4Address.split('/')[0] }}$"
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Wait for server availability on port 22
      wait_for:
        port: 22
        host: "{{ item.Configuration.OS.IPv4Address.split('/')[0] }}"
        state: started
        delay: 180
        timeout: 600
      when: item.Deploy
      loop: "{{ Server_List }}"

    - name: Make sure known_hosts exists
      file:
        path: "{{ lookup('env','HOME') }}/.ssh/known_hosts"
        state: touch

    - name: Add VM to known_hosts
      shell: ssh-keyscan -H {{ item.Configuration.OS.IPv4Address.split('/')[0] }} >> {{ lookup('env','HOME') }}/.ssh/known_hosts
      when: item.Deploy
      loop: "{{ Server_List }}"
Run the Playbook and Test the VMs
Run the playbook: ansible-playbook build_fleet.yml
Do a quick ansible ping:
- ansible -i inventory all -m ping
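Each deployed server should answer with output similar to the following (formatting varies slightly by Ansible version):

192.168.99.201 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}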
Configure the Servers
Now that the servers are built and online, we will configure the local user listed in servers.yml and update all packages. We will also fix a common Ubuntu issue with failed DNS lookups by disabling the systemd-resolved stub listener.
Overview
- Extend the disk partition(s) to use all of the available disk space
- Enable username & password login and add the local user specified in the servers.yml file
- Update and upgrade all packages (rebooting as needed)
- Disable the DNS stub listener to prevent later issues with failed DNS lookups (see the snippet below)
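For reference, the stub-listener fix applied by the ini_file task in the playbook amounts to setting a single option in /etc/systemd/resolved.conf:

[Resolve]
DNSStubListener=no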
- configure_fleet.yml
---
- hosts: all
  name: configure_fleet.yml
  become: true
  vars_files:
    - variables.yml
    - servers.yml
  tasks:
    - name: Look up information by IP
      set_fact:
        matching_system: "{{ item }}"
      when: item.Configuration.OS.IPv4Address.split('/')[0] == inventory_hostname
      loop: "{{ Server_List }}"

    - name: Wait for server to be up
      wait_for:
        host: "{{ inventory_hostname }}"
        state: started
        port: 22
        delay: 0
        timeout: 60

    - name: Extend logical volume
      command: lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
      when: matching_system.Configuration.Storage.DiskSize > 20470

    - name: Resize filesystem
      command: resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
      when: matching_system.Configuration.Storage.DiskSize > 20470

    - name: Add user
      user:
        name: "{{ matching_system.Configuration.OS.User }}"
        shell: /bin/bash
        home: "/home/{{ matching_system.Configuration.OS.User }}"
        password: "{{ matching_system.Configuration.OS.Password | password_hash('sha512') }}"
        groups: sudo
        append: true

    - name: Enable ssh password authentication step 1
      lineinfile:
        path: /etc/ssh/sshd_config
        line: "PasswordAuthentication yes"
        state: present
        create: true

    - name: Enable ssh password authentication step 2
      replace:
        path: /etc/ssh/sshd_config
        regexp: '^\s*#+PasswordAuthentication.*$'
        replace: "PasswordAuthentication yes"

    - name: Enable ssh password authentication step 3
      replace:
        path: /etc/ssh/sshd_config
        regexp: '^\s*#*KbdInteractiveAuthentication.*$'
        replace: "KbdInteractiveAuthentication yes"

    - name: Restart ssh
      service:
        name: ssh
        state: restarted

    - name: Update and upgrade all apt packages
      apt:
        update_cache: true
        force_apt_get: true
        name: "*"
        state: latest

    - name: Check if reboot is required
      stat:
        path: /var/run/reboot-required
      register: file

    - name: Reboot the server if required
      reboot:
        reboot_timeout: 180
      when: file.stat.exists

    - name: Disable DNS stub listener
      ini_file:
        dest: /etc/systemd/resolved.conf
        section: Resolve
        option: DNSStubListener
        value: "no"
        backup: true
      tags: configuration

    - name: Restart NetworkManager
      systemd:
        name: NetworkManager
        state: restarted

    - name: Restart systemd-resolved
      systemd:
        name: systemd-resolved
        state: restarted

    - name: daemon-reload
      systemd:
        daemon_reload: true
Run the playbook: ansible-playbook -i inventory configure_fleet.yml
Test the Servers
Do a quick ansible ping:
ansible -i inventory all -m ping
Log in to the servers and confirm everything is working: the correct user account & password, CPUs, storage, and RAM.
- ssh to the IP address of each server (passwordless login as ansible using the key)
- ssh to the IP address of each server as the user you specified in servers.yml
  - e.g., ssh myuser@192.168.99.201
  - you should be prompted for the password
- Test sudo access for each user
- Check the amount of disk space: df -h
- Check the amount of RAM: free -h
- Check the number of CPUs: grep processor /proc/cpuinfo | wc -l
Can you write a playbook to display this information?
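If you want a starting point, here is one possible sketch using standard gathered facts (the playbook name is arbitrary and the output format is just one choice among many):

---
# show_specs.yml - one possible answer to the challenge above
- hosts: all
  name: show_specs.yml
  gather_facts: true
  tasks:
    - name: Display CPU, RAM, and disk information
      debug:
        msg:
          - "Host     : {{ ansible_hostname }}"
          - "CPUs     : {{ ansible_processor_vcpus }}"
          - "RAM (MB) : {{ ansible_memtotal_mb }}"
          - "Mounts   : {{ ansible_mounts | map(attribute='mount') | list }}"

Run it with: ansible-playbook -i inventory show_specs.yml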
Next Step
Proceed to Step 3 - Set up Kubernetes
Or go back to Step 1 - Set up the Host, or to the Start page.