MNAIO: Switch to using file-backed VMs only

The MNAIO tooling is a test system, and with that target
we can afford to be opinionated in the implementation
rather than trying to cater to every possibility.

Now that the file-backed VM implementation has matured,
we can switch to using it exclusively. This cuts down
on code complexity and allows us to mature the
implementation further, with more options, without
having to cater to two disk modes.

Change-Id: Ibe04b5676a392301cd79a5d290b77df4c7d9f79a
Author: Jesse Pretorius
Date:   2018-10-09 20:42:07 +01:00
Parent: 06fca4a2f6
Commit: aee8b6d910

5 changed files with 68 additions and 242 deletions


@@ -114,7 +114,6 @@ Set to instruct the preseed what the default network is expected to be:
Set the VM disk size in gigabytes:
``VM_DISK_SIZE="${VM_DISK_SIZE:-252}"``
Instruct the system to do all of the required host setup:
``SETUP_HOST=${SETUP_HOST:-true}``
@@ -203,20 +202,9 @@ Instruct the system to use a customized iPXE script during boot of VMs:
Re-kicking VM(s)
----------------
Re-kicking a VM is as simple as stopping the VM, deleting its logical volume,
creating a new logical volume, and starting the VM. The VM will come back
online, PXE boot, and install the base OS.
.. code-block:: bash
virsh destroy "${VM_NAME}"
lvremove "/dev/mapper/vg01-${VM_NAME}"
lvcreate -L 60G vg01 -n "${VM_NAME}"
virsh start "${VM_NAME}"
To rekick all VMs, simply re-execute the ``deploy-vms.yml`` playbook and it will
do it automatically.
To re-kick all VMs, simply re-execute the ``deploy-vms.yml`` playbook and it
will do it automatically. The Ansible ``--limit`` parameter may be used to
selectively re-kick a specific VM.
.. code-block:: bash
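# Illustrative commands only; the VM name "compute1" is a hypothetical
# example, not taken from the original diff.
ansible-playbook -i playbooks/inventory playbooks/deploy-vms.yml
# ...or re-kick a single VM:
ansible-playbook -i playbooks/inventory playbooks/deploy-vms.yml --limit compute1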
@@ -267,47 +255,37 @@ command or the following bash loop to restore everything to a known point.
virsh snapshot-revert --snapshotname $instance-kilo-snap --running $instance
done
Using a file-based backing store with thin-provisioned VMs
-----------------------------------------------------------
Saving VM images for re-use on another host
-------------------------------------------
If you wish to use a file-based backing store (instead of the default LVM-based
backing store) for the VMs, then set the following option before executing
``build.sh``.
.. code-block:: bash
export MNAIO_ANSIBLE_PARAMETERS="-e default_vm_disk_mode=file"
./build.sh
If you wish to save the current file-based images in order to implement a
thin-provisioned set of VMs which can be saved and re-used, then use the
``save-vms.yml`` playbook. This will stop the VMs and rename the files to
``*-base.img``. Re-executing the ``deploy-vms.yml`` playbook afterwards will
rebuild the VMs from those images.
If you wish to save the current images in order to implement a thin-provisioned
set of VMs which can be saved and re-used, then use the ``save-vms.yml``
playbook. This will stop the VMs and rename the files to ``*-base.img``.
Re-executing the ``deploy-vms.yml`` playbook afterwards will rebuild the VMs
from those images.
.. code-block:: bash
ansible-playbook -i playbooks/inventory playbooks/save-vms.yml
ansible-playbook -i playbooks/inventory -e default_vm_disk_mode=file playbooks/deploy-vms.yml
ansible-playbook -i playbooks/inventory playbooks/deploy-vms.yml
To disable this default functionality when re-running ``build.sh``, set the
build not to use the snapshots as follows.
build not to use the images as follows.
.. code-block:: bash
export MNAIO_ANSIBLE_PARAMETERS="-e default_vm_disk_mode=file -e vm_use_snapshot=no"
export MNAIO_ANSIBLE_PARAMETERS="-e vm_use_snapshot=no"
./build.sh
If you have previously saved some file-backed images to remote storage then,
if they are available via a URL, they can be downloaded and used on a fresh
host as follows.
If you have previously saved some images to remote storage then, if they are
available via a URL, they can be downloaded and used on a fresh host as follows.
.. code-block:: bash
# First prepare the host and get the base services started
./bootstrap.sh
source ansible-env.rc
export ANSIBLE_PARAMETERS="-i playbooks/inventory -e default_vm_disk_mode=file"
export ANSIBLE_PARAMETERS="-i playbooks/inventory"
ansible-playbook ${ANSIBLE_PARAMETERS} playbooks/setup-host.yml
ansible-playbook ${ANSIBLE_PARAMETERS} playbooks/deploy-acng.yml playbooks/deploy-pxe.yml playbooks/deploy-dhcp.yml
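# Then fetch the saved base images into the default pool path and deploy
# the VMs. This is a sketch only: the URL and image file name below are
# hypothetical, not part of the original documentation.
wget -P /data/images "https://example.com/mnaio-images/infra1-base.img"
ansible-playbook ${ANSIBLE_PARAMETERS} playbooks/deploy-vms.yml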


@@ -41,15 +41,6 @@
failed_when: false
with_items: "{{ _virt_list.list_vms }}"
- name: Delete any LV's related to running VM's
lvol:
vg: "{{ default_vm_disk_vg }}"
lv: "{{ item }}"
state: absent
force: yes
failed_when: false
with_items: "{{ _virt_list.list_vms }}"
- name: Delete any disk images related to running VM's
file:
path: "{{ _virt_pools.pools.default.path | default('/data/images') }}/{{ item }}.img"
@@ -63,27 +54,23 @@
failed_when: false
with_items: "{{ _virt_list.list_vms }}"
- name: Setup/clean-up file-based disk images
- name: Find existing base image files
find:
paths: "{{ _virt_pools.pools.default.path | default('/data/images') }}"
patterns: '*-base.img'
register: _base_images
- name: Enable/disable vm_use_snapshot based on whether there are base image files
set_fact:
vm_use_snapshot: "{{ _base_images['matched'] > 0 }}"
- name: Clean up base image files if they are not being used
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ _base_images.files }}"
when:
- default_vm_disk_mode == "file"
block:
- name: Find existing base image files
find:
paths: "{{ _virt_pools.pools.default.path | default('/data/images') }}"
patterns: '*-base.img'
register: _base_images
- name: Enable/disable vm_use_snapshot based on whether there are base image files
set_fact:
vm_use_snapshot: "{{ _base_images['matched'] > 0 }}"
- name: Clean up base image files if they are not being used
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ _base_images.files }}"
when:
- not (vm_use_snapshot | bool)
- not (vm_use_snapshot | bool)
- name: Prepare VM storage
@@ -93,17 +80,6 @@
tags:
- deploy-vms
tasks:
- name: Create VM LV
lvol:
vg: "{{ default_vm_disk_vg }}"
lv: "{{ server_hostname }}"
size: "{{ default_vm_storage }}"
when:
- server_vm | default(false) | bool
- default_vm_disk_mode == "lvm"
delegate_to: "{{ item }}"
with_items: "{{ groups['vm_hosts'] }}"
- name: Create VM Disk Image
command: >-
qemu-img create
@@ -115,7 +91,6 @@
{{ default_vm_storage }}m
when:
- server_vm | default(false) | bool
- default_vm_disk_mode == "file"
delegate_to: "{{ item }}"
with_items: "{{ groups['vm_hosts'] }}"
@@ -134,7 +109,6 @@
# ref: https://bugs.launchpad.net/ubuntu/+source/libguestfs/+bug/1615337.
- name: Prepare file-based disk images
when:
- default_vm_disk_mode == "file"
- vm_use_snapshot | bool
block:
- name: Inject the host ssh key into the VM disk image


@@ -15,8 +15,6 @@ default_interface: "{{ default_network | default('eth0') }}"
default_vm_image: "{{ default_image | default('ubuntu-16.04-amd64') }}"
default_vm_storage: "{{ vm_disk_size | default(92160) }}"
default_vm_root_disk_size: 8192
default_vm_disk_mode: lvm
default_vm_disk_vg: vg01
default_acng_bind_address: 0.0.0.0
default_os_families:
ubuntu-16.04-amd64: debian
@@ -37,7 +35,7 @@ ipxe_kernel_base_url: "http://boot.ipxe.org"
vm_ssh_timeout: 1500
# Whether to use snapshots (if they are available) for file-backed VM's
vm_use_snapshot: "{{ default_vm_disk_mode == 'file' }}"
vm_use_snapshot: yes
# IP address, or domain name of the TFTP server
tftp_server: "{{ hostvars[groups['pxe_hosts'][0]]['ansible_host'] | default(ansible_host) }}"


@@ -34,15 +34,9 @@
</pm>
<devices>
<emulator>/usr/bin/kvm-spice</emulator>
{% if default_vm_disk_mode == "lvm" %}
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/{{ default_vm_disk_vg }}/{{ server_hostname }}'/>
{% elif default_vm_disk_mode == "file" %}
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' discard='unmap' cache='none' io='native'/>
<source file='{{ hostvars[item]['virt_pools'].pools.default.path | default('/data/images') }}/{{ server_hostname }}.img'/>
{% endif %}
<target dev='sda' bus='scsi'/>
<alias name='scsi0-0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>


@@ -326,95 +326,6 @@
when:
- mnaio_data_disk is undefined
- name: Get info about existing virt storage pools
virt_pool:
command: info
register: _virt_pools
- name: If an existing virt pool does not match default_vm_disk_mode, remove it
when:
- _virt_pools.pools.default is defined
- (default_vm_disk_mode == "file" and _virt_pools.pools.default.format is defined) or
(default_vm_disk_mode == "lvm" and _virt_pools.pools.default.format is not defined)
block:
- name: Stop running VMs
virt:
name: "{{ item }}"
command: destroy
failed_when: false
with_items: "{{ _virt_pools.pools.default.volumes }}"
- name: Delete VM LVs
lvol:
vg: "{{ default_vm_disk_vg }}"
lv: "{{ item }}"
state: absent
force: yes
failed_when: false
with_items: "{{ _virt_pools.pools.default.volumes }}"
- name: Delete VM Disk Images
file:
path: "{{ _virt_pools.pools.default.path | default('/data/images') }}/{{ item }}.img"
state: absent
with_items: "{{ _virt_pools.pools.default.volumes }}"
- name: Undefine the VMs
virt:
name: "{{ item }}"
command: undefine
failed_when: false
with_items: "{{ _virt_pools.pools.default.volumes }}"
- name: Dismount the mount point if default_vm_disk_mode is 'lvm'
mount:
path: /data
state: unmounted
when:
- default_vm_disk_mode == "lvm"
- name: Stop the pool
virt_pool:
command: destroy
name: default
- name: Delete the pool, destroying its contents
virt_pool:
command: delete
name: default
- name: Undefine the pool
virt_pool:
command: undefine
name: default
- name: Remove the mount point if default_vm_disk_mode is 'lvm'
mount:
path: /data
state: absent
when:
- default_vm_disk_mode == "lvm"
- name: Reload systemd to remove generated unit files for mount
systemd:
daemon_reload: yes
when:
- default_vm_disk_mode == "lvm"
- name: Remove the volume group if default_vm_disk_mode is 'file'
lvg:
vg: vg01
state: absent
register: _remove_vg
when:
- default_vm_disk_mode == "file"
- name: Remove the existing disk partition
parted:
device: "/dev/{{ mnaio_data_disk | default(lsblk.stdout) }}"
number: 1
state: absent
- name: Setup the data disk partition
parted:
device: "/dev/{{ mnaio_data_disk | default(lsblk.stdout) }}"
@@ -424,74 +335,45 @@
state: present
register: _add_partition
- name: Prepare the data disk for 'lvm' default_vm_disk_mode
- name: Prepare the data disk file system
filesystem:
fstype: ext4
dev: "/dev/{{ mnaio_data_disk | default(lsblk.stdout) }}1"
force: yes
when:
- default_vm_disk_mode == "lvm"
block:
- name: Create the volume group
lvg:
vg: vg01
pvs: "/dev/{{ mnaio_data_disk | default(lsblk.stdout) }}1"
- _add_partition is changed
- name: Define the default virt storage pool
virt_pool:
name: default
state: present
xml: |
<pool type='logical'>
<name>default</name>
<source>
<name>vg01</name>
<format type='lvm2'/>
</source>
<target>
<path>/dev/vg01</path>
</target>
</pool>
- name: Mount the data disk
mount:
src: "/dev/{{ mnaio_data_disk | default(lsblk.stdout) }}1"
path: /data
state: mounted
fstype: ext4
- name: Prepare the data disk for 'file' default_vm_disk_mode
when:
- default_vm_disk_mode == "file"
block:
- name: Prepare the data disk file system
filesystem:
fstype: ext4
dev: "/dev/{{ mnaio_data_disk | default(lsblk.stdout) }}1"
force: yes
when:
- _add_partition is changed
- name: Create the images directory
file:
path: /data/images
owner: root
group: root
mode: "0755"
state: directory
- name: Mount the data disk
mount:
src: "/dev/{{ mnaio_data_disk | default(lsblk.stdout) }}1"
path: /data
state: mounted
fstype: ext4
- name: Create the images directory
file:
path: /data/images
owner: root
group: root
mode: "0755"
state: directory
- name: Define the default virt storage pool
virt_pool:
name: default
state: present
xml: |
<pool type='dir'>
<name>default</name>
<target>
<path>/data/images</path>
<permissions>
<mode>0755</mode>
<owner>0</owner>
<group>0</group>
</permissions>
</target>
</pool>
- name: Define the default virt storage pool
virt_pool:
name: default
state: present
xml: |
<pool type='dir'>
<name>default</name>
<target>
<path>/data/images</path>
<permissions>
<mode>0755</mode>
<owner>0</owner>
<group>0</group>
</permissions>
</target>
</pool>
- name: Set default virt storage pool to active
virt_pool: