Deploy an Application to Our Fleet of VMs

In this step we will deploy a placeholder web application to the servers.

Deploy a Demonstration App

Create the application.yml file

This file contains settings to use when configuring Nginx.

application.yml
---
application:
  Name: test
  Root: /var/www/html
  http_port: 80

Create the Jinja2 (j2) Template for Nginx

This template renders the Nginx server block using the values from application.yml.

app-conf.j2
server {
    listen {{ application.http_port }} default_server;
 
    server_name {{ application.Name }};
    root {{ application.Root }};
    index index.php index.html;
 
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
 
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php/{{ php_fpm_version }}.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Deploy Nginx with PHP and Set up a Test App

application_fleet.yml
---
- hosts: all
  name: application_fleet.yml
  become: true
  gather_facts: true
  vars_files:
    - application.yml
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present
        update_cache: true
    - name: Install PHP and extensions
      apt:
        name:
          - php
          - php-fpm # For FastCGI Process Manager
        state: present
    - name: Start and enable Nginx
      service:
        name: nginx
        state: started
        enabled: true
    - name: Get list of services
      service_facts:
    - name: Get the PHP-FPM service name
      set_fact:
        php_fpm_version: "{{ item }}"
      when:
        - ansible_facts.services[item].name | regex_search("^php.+fpm$")
      loop: "{{ ansible_facts.services | list }}" # the dict keys are service names like php8.1-fpm
    - name: Start and enable PHP-FPM
      service:
        name: "{{ php_fpm_version }}"
        state: started
        enabled: true
    - name: Create a phpinfo page
      blockinfile:
        path: /var/www/html/index.php
        block: |
          <?php phpinfo(); ?>
        create: true
        marker: ""
    - name: Create application conf file
      template:
        src: ./app-conf.j2
        dest: "/etc/nginx/sites-available/{{ application.Name }}.conf"
        mode: "755"
    - name: Activate Nginx site by creating symlink
      file:
        src: "/etc/nginx/sites-available/{{ application.Name }}.conf"
        dest: "/etc/nginx/sites-enabled/{{ application.Name }}.conf"
        state: link
    - name: Deactivate default site
      file:
        path: /etc/nginx/sites-enabled/default
        state: absent
    - name: Reload Nginx
      service:
        name: nginx
        state: reloaded

Run playbook: ansible-playbook -i inventory application_fleet.yml

Test

Open each VM's IP address in a web browser and confirm you see the standard phpinfo() page:

http://<IPADDRESS>
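You can also spot-check from the Ansible host with curl; the phpinfo() output includes the text "PHP Version":

curl -s http://<IPADDRESS>/ | grep "PHP Version"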

The following playbook confirms that a page with HTTP status 200 is returned (but does not check the contents).

check_fleet.yml
---
- hosts: all
  become: true
  tasks:
    - name: Check web server status on port 80
      uri:
        url: http://{{ inventory_hostname }}:80/
        follow_redirects: no
      register: web_status

    - name: Assert 200 HTTP status code
      assert:
        that: web_status.status == 200
        msg: "Expected HTTP 200 status code, but got {{ web_status.status }}. Please check web server health."

    - name: Print website response (optional)
      debug:
        msg: "Website response: {{ web_status.content }}"
      when: web_status.content is defined and web_status.content != ''
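By default the uri module does not fetch the page body, so the optional debug task above is normally skipped. To capture the contents as well, add return_content to the first task; a minimal sketch:

    - name: Check web server status on port 80, capturing the body
      uri:
        url: http://{{ inventory_hostname }}:80/
        follow_redirects: no
        return_content: true
      register: web_status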

Reboot the Servers 50% at a Time

This playbook reboots the fleet 50% at a time, waiting for the first group of servers to come up before rebooting the next group.

reboot-half.yml
---
- hosts: all
  name: reboot-half.yml
  become: true
  serial: "50%"
  tasks:
    - name: Reboot the servers
      reboot:
    - name: Wait for servers to come back online
      wait_for_connection:
        delay: 10
        connect_timeout: 120

Shut the Servers Down Two Ways, then Power Back On

The first method uses the OS shutdown command.

shutdown_fleet.yml
---
- hosts: all
  become: true
  tasks:
    - name: Gracefully shut down server
      community.general.shutdown:

Run playbook: ansible-playbook -i inventory shutdown_fleet.yml

Sending an ACPI power-off signal through VirtualBox is the second way to gracefully power off the servers.

power_off_fleet.yml
---
- hosts: localhost  # Run actions on the local machine
  name: power_off_fleet.yml
  connection: local
  gather_facts: false
  vars_files:
    - variables.yml
    - servers.yml
  tasks:
    - name: Shut down VM
      command: "{{ Global.vboxmanage_path }} controlvm {{ item.Name }} acpipowerbutton"
      ignore_errors: true
      when: item.Deploy
      loop: "{{ Server_List }}"

Run playbook: ansible-playbook power_off_fleet.yml

There is only one way to turn them back on, though.

start_fleet.yml
---
- hosts: localhost  # Run actions on the local machine
  name: start_fleet.yml
  connection: local
  gather_facts: false
  vars_files:
    - variables.yml
    - servers.yml
  tasks:
    - name: Start VM
      command: "{{ Global.vboxmanage_path }} startvm {{ item.Name }}"
      ignore_errors: true
      when: item.Deploy
      loop: "{{ Server_List }}"

Run playbook: ansible-playbook start_fleet.yml

Rebuild Specific Servers

Let's say one of the servers has a problem and we want to rebuild it.

Destroy Specific Servers

The simplest way is to modify the servers.yml file.

Set Deploy to false for the servers you don't want to touch, and leave Deploy as true for the ones you do; later you can set everything back to true. This is usually simpler than creating a new servers.yml file or modifying the playbooks to point at a different one.
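For example, a servers.yml edited to rebuild only the second server might look like this excerpt (server names are illustrative):

servers.yml (excerpt)
---
Server_List:
  - Name: fleet01
    Deploy: false # leave this server alone
  - Name: fleet02
    Deploy: true # destroy and re-create this one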

Run the playbook: ansible-playbook destroy_fleet.yml

Sometimes you will find that the server is down but still locked when the playbook goes to remove the VM. Re-running the playbook will remove it. How can you improve the playbook to reduce the likelihood of this happening?
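One approach is to retry the removal until VirtualBox releases the lock. A minimal sketch, assuming destroy_fleet.yml removes VMs with a task along these lines (the actual task may differ):

    - name: Remove VM, retrying while the session is still locked
      command: "{{ Global.vboxmanage_path }} unregistervm {{ item.Name }} --delete"
      register: unreg_result
      until: unreg_result.rc == 0
      retries: 5
      delay: 10
      when: item.Deploy
      loop: "{{ Server_List }}"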

What are the side effects of running the destroy_fleet.yml playbook this way? (Hint: what directories were removed? Instead of removing the entire working directory, how could you modify the destroy_fleet.yml playbook?)

Re-Create Specific Servers

With the modified servers.yml file, it is simple enough to re-create the server.

Run ansible-playbook build_fleet.yml

Because the working directory was removed, the ISO is freshly downloaded. This means if a new release came out, your new server could be different from the rest of the VMs.

Reconfigure Specific Servers

Next we will run the configure_fleet.yml playbook on the specific server.

Run ansible-playbook -i inventory -l <IPADDRESS> configure_fleet.yml

Deploy the Application to Specific Servers

Use the same technique to deploy the application to the specific server.

Run ansible-playbook -i inventory -l <IPADDRESS> application_fleet.yml

Test the Redeployed Server

Run ansible-playbook -i inventory -l <IPADDRESS> check_fleet.yml

Next Step

Continue to Tear Down the Lab

Or back to Deploy a Fleet of VMs

Optional

The process does its job but could be a lot better.

Re-Applying Settings after Changing servers.yml

Because we are limited to running vboxmanage commands instead of using a fully integrated module, the playbooks can't simply reconcile VM resources with an updated servers.yml. Rebuilding a server requires completely destroying it, then re-creating it.
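That said, vboxmanage can change some settings while a VM is powered off, so the process could be extended. A minimal sketch, assuming a hypothetical Memory key in servers.yml:

    - name: Match VM memory to servers.yml (VM must be powered off)
      # item.Memory is hypothetical; the lab's servers.yml may not define it
      command: "{{ Global.vboxmanage_path }} modifyvm {{ item.Name }} --memory {{ item.Memory }}"
      when: item.Deploy
      loop: "{{ Server_List }}"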

Fragile Playbook configure_fleet.yml

A well-behaved playbook would update the configuration as needed if re-run.

What happens when you re-run ansible-playbook -i inventory configure_fleet.yml?

What tasks could be moved to the build_fleet.yml playbook? Look at the fleet-user-data.j2 file. What additional late-commands would you add?
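For reference, Ubuntu's autoinstall format runs late-commands inside the installed system via curtin. Additions to fleet-user-data.j2 might look like the following sketch (the package choices are illustrative, not part of the lab):

autoinstall:
  late-commands:
    - curtin in-target -- apt-get update
    - curtin in-target -- apt-get install -y nginx php-fpm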

Alternatively, what error handling could you add to configure_fleet.yml? Note that the playbook only resizes filesystems when they are larger than a set size; the default partition is already 10GB, and trying to resize it would create an error. How can you better handle all cases?
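One pattern is to run the resize unconditionally and treat "nothing to do" as success instead of failure. A sketch using growpart, where the device and partition number are assumptions (configure_fleet.yml may resize differently):

    - name: Grow the root partition, treating NOCHANGE as success
      command: growpart /dev/sda 3 # device/partition are assumptions
      register: growpart_result
      changed_when: growpart_result.rc == 0
      failed_when:
        - growpart_result.rc != 0
        - "'NOCHANGE' not in growpart_result.stdout"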

Fragile Playbook application_fleet.yml

Similarly, application_fleet.yml causes problems when it is run again.

What happens when you re-run ansible-playbook -i inventory application_fleet.yml?

Log in to a VM and examine /var/www/html/index.php. Do you see duplicates of the block of text the playbook uses?
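The empty marker is the culprit: without marker comments, blockinfile cannot find the block it wrote on the previous run, so each run appends another copy. One idempotent alternative is the copy module with inline content; a minimal sketch:

    - name: Create a phpinfo page (idempotent)
      copy:
        dest: /var/www/html/index.php
        content: |
          <?php phpinfo(); ?>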

There are certainly better ways to deploy applications, such as using Git.

Run this playbook as ansible-playbook -i inventory git_fleet.yml --ask-become-pass

git_fleet.yml
---
- hosts: all
  become: true
  tasks:
    - name: Install Git
      apt:
        name: git
        state: present
        update_cache: true
    - name: Git clone project
      git:
        repo: https://github.com/doritoes/nuc-ansible-lab.git
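        # note: lookup('env','HOME') is evaluated on the control node, not on the target VM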
        dest: "{{ lookup('env','HOME') }}/project"
        update: yes

How could you deploy an application to the /var/www/html folder?
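One possibility, sketched under the assumption that the clone above landed in /home/ansible/project on each VM and that the site files sit at the repository root (adjust both paths to the real layout):

    - name: Copy the checkout into the web root
      copy:
        src: /home/ansible/project/ # where the previous task cloned (an assumption)
        dest: /var/www/html/
        remote_src: true
        owner: www-data
        group: www-data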
