From 6496cfc0badb361af3de595c1a4a3492af866e79 Mon Sep 17 00:00:00 2001
From: Scott Solkhon
Date: Fri, 15 Mar 2019 14:54:02 +0000
Subject: [PATCH] Support for Ceph and Swift storage networks, and improvements
 to Swift

In a deployment that has Ceph or Swift deployed, it can be useful to
separate the network traffic. This change adds support for dedicated
storage networks for both Ceph and Swift. By default, the storage hosts
are attached to the following networks:

* Overcloud admin network
* Internal network
* Storage network
* Storage management network

This change adds four additional networks, which can be used to separate
the storage network traffic as follows:

* Ceph storage network (ceph_storage_net_name) is used to carry Ceph
  storage data traffic. Defaults to the storage network
  (storage_net_name).
* Ceph storage management network (ceph_storage_mgmt_net_name) is used
  to carry Ceph storage management traffic. Defaults to the storage
  management network (storage_mgmt_net_name).
* Swift storage network (swift_storage_net_name) is used to carry Swift
  storage data traffic. Defaults to the storage network
  (storage_net_name).
* Swift storage replication network (swift_storage_replication_net_name)
  is used to carry Swift storage replication traffic. Defaults to the
  storage management network (storage_mgmt_net_name).

This change also includes several improvements to Swift device
management and ring generation. Device management and ring generation
are now separate, with device management occurring during 'kayobe
overcloud host configure', and ring generation during a new command,
'kayobe overcloud swift rings generate'.

For device management, we now use standard Ansible modules rather than
commands for device preparation. File system labels can be configured
for each device individually. For ring generation, all commands are run
on a single host, by default a host in the Swift storage group.
A Python script runs in one of the Kolla Swift containers; it consumes
an autogenerated YAML config file that defines the layout of the rings.

Change-Id: Iedc7535532d706f02d710de69b422abf2f6fe54c
---
 ansible/group_vars/all/ceph                   |   7 +
 ansible/group_vars/all/compute                |   3 +-
 ansible/group_vars/all/controllers            |   4 +-
 ansible/group_vars/all/network                |  12 ++
 ansible/group_vars/all/storage                |  28 +++-
 ansible/group_vars/all/swift                  |  67 +++++++++
 ansible/group_vars/controllers/swift          |  16 ---
 ansible/kolla-ansible.yml                     |  20 +++
 ansible/roles/kolla-ansible/defaults/main.yml |   8 ++
 .../roles/kolla-ansible/tests/test-extras.yml |  14 ++
 ansible/roles/kolla-openstack/vars/main.yml   |   8 ++
 .../swift-block-devices/defaults/main.yml     |  11 ++
 .../roles/swift-block-devices/tasks/main.yml  |  72 ++++++++++
 .../roles/swift-block-devices/tests/main.yml  |  13 ++
 .../tests/test-bootstrapped.yml               |  68 +++++++++
 .../tests/test-invalid-format.yml             |  23 ++++
 .../swift-block-devices/tests/test-mount.yml  |  60 ++++++++
 ansible/roles/swift-rings/defaults/main.yml   |  49 +++++++
 .../swift-rings/files/swift-ring-builder.py   | 130 ++++++++++++++++++
 ansible/roles/swift-rings/tasks/main.yml      |  69 ++++++++++
 .../swift-rings/templates/swift-ring.yml.j2   |  21 +++
 .../vars/main.yml                             |   0
 ansible/roles/swift-setup/defaults/main.yml   |  34 -----
 ansible/roles/swift-setup/tasks/devices.yml   |  10 --
 ansible/roles/swift-setup/tasks/main.yml      |   3 -
 ansible/roles/swift-setup/tasks/rings.yml     |  75 ----------
 ansible/swift-block-devices.yml               |  11 ++
 ansible/swift-rings.yml                       |  35 +++++
 ansible/swift-setup.yml                       |  13 --
 doc/source/configuration/network.rst          |  29 ++++
 doc/source/deployment.rst                     |  12 ++
 etc/kayobe/ceph.yml                           |   7 +
 .../group_vars/compute/network-interfaces     |   5 +
 .../group_vars/controllers/network-interfaces |  10 ++
 .../group_vars/storage/network-interfaces     |  47 +++++++
 etc/kayobe/networks.yml                       |  12 ++
 etc/kayobe/storage.yml                        |  10 ++
 etc/kayobe/swift.yml                          |  55 +++++++-
 kayobe/cli/commands.py                        |  20 ++-
kayobe/tests/unit/cli/test_commands.py | 22 +++ ...ate-storage-networks-a659bcd30dd70665.yaml | 17 +++ .../swift-improvements-07a2b75967f642e8.yaml | 15 ++ setup.cfg | 1 + 43 files changed, 983 insertions(+), 163 deletions(-) create mode 100644 ansible/group_vars/all/ceph create mode 100644 ansible/group_vars/all/swift delete mode 100644 ansible/group_vars/controllers/swift create mode 100644 ansible/roles/swift-block-devices/defaults/main.yml create mode 100644 ansible/roles/swift-block-devices/tasks/main.yml create mode 100644 ansible/roles/swift-block-devices/tests/main.yml create mode 100644 ansible/roles/swift-block-devices/tests/test-bootstrapped.yml create mode 100644 ansible/roles/swift-block-devices/tests/test-invalid-format.yml create mode 100644 ansible/roles/swift-block-devices/tests/test-mount.yml create mode 100644 ansible/roles/swift-rings/defaults/main.yml create mode 100644 ansible/roles/swift-rings/files/swift-ring-builder.py create mode 100644 ansible/roles/swift-rings/tasks/main.yml create mode 100644 ansible/roles/swift-rings/templates/swift-ring.yml.j2 rename ansible/roles/{swift-setup => swift-rings}/vars/main.yml (100%) delete mode 100644 ansible/roles/swift-setup/defaults/main.yml delete mode 100644 ansible/roles/swift-setup/tasks/devices.yml delete mode 100644 ansible/roles/swift-setup/tasks/main.yml delete mode 100644 ansible/roles/swift-setup/tasks/rings.yml create mode 100644 ansible/swift-block-devices.yml create mode 100644 ansible/swift-rings.yml delete mode 100644 ansible/swift-setup.yml create mode 100644 etc/kayobe/ceph.yml create mode 100644 etc/kayobe/inventory/group_vars/storage/network-interfaces create mode 100644 releasenotes/notes/seperate-storage-networks-a659bcd30dd70665.yaml create mode 100644 releasenotes/notes/swift-improvements-07a2b75967f642e8.yaml diff --git a/ansible/group_vars/all/ceph b/ansible/group_vars/all/ceph new file mode 100644 index 000000000..cae8ff725 --- /dev/null +++ b/ansible/group_vars/all/ceph @@ 
-0,0 +1,7 @@ +--- +############################################################################### +# OpenStack Ceph configuration. + +# Ansible host pattern matching hosts on which Ceph storage services +# are deployed. The default is to use hosts in the 'storage' group. +ceph_hosts: "storage" diff --git a/ansible/group_vars/all/compute b/ansible/group_vars/all/compute index 273aabc46..ee6fc3511 100644 --- a/ansible/group_vars/all/compute +++ b/ansible/group_vars/all/compute @@ -19,8 +19,9 @@ compute_default_network_interfaces: > {{ ([admin_oc_net_name, internal_net_name, storage_net_name, + ceph_storage_net_name, tunnel_net_name] + - (external_net_names if kolla_enable_neutron_provider_networks | bool else [])) | unique | list }} + (external_net_names if kolla_enable_neutron_provider_networks | bool else [])) | reject('none') | unique | list }} # List of extra networks to which compute nodes are attached. compute_extra_network_interfaces: [] diff --git a/ansible/group_vars/all/controllers b/ansible/group_vars/all/controllers index 96137b045..07cc679ef 100644 --- a/ansible/group_vars/all/controllers +++ b/ansible/group_vars/all/controllers @@ -25,7 +25,9 @@ controller_default_network_interfaces: > internal_net_name, storage_net_name, storage_mgmt_net_name, - cleaning_net_name] | unique | list }} + ceph_storage_net_name, + swift_storage_net_name, + cleaning_net_name] | reject('none') | unique | list }} # List of extra networks to which controller nodes are attached. controller_extra_network_interfaces: [] diff --git a/ansible/group_vars/all/network b/ansible/group_vars/all/network index 0646b3cb1..ce531a26c 100644 --- a/ansible/group_vars/all/network +++ b/ansible/group_vars/all/network @@ -49,6 +49,18 @@ storage_net_name: 'storage_net' # Name of the network used to carry storage management traffic. storage_mgmt_net_name: 'storage_mgmt_net' +# Name of the network used to carry ceph storage data traffic. 
+ceph_storage_net_name: "{{ storage_net_name }}" + +# Name of the network used to carry ceph storage management traffic. +ceph_storage_mgmt_net_name: "{{ storage_mgmt_net_name }}" + +# Name of the network used to carry swift storage data traffic. +swift_storage_net_name: "{{ storage_net_name }}" + +# Name of the network used to carry swift storage replication traffic. +swift_storage_replication_net_name: "{{ storage_mgmt_net_name }}" + # Name of the network used to perform hardware introspection on the bare metal # workload hosts. inspection_net_name: 'inspection_net' diff --git a/ansible/group_vars/all/storage b/ansible/group_vars/all/storage index c1dce7668..368e0b24a 100644 --- a/ansible/group_vars/all/storage +++ b/ansible/group_vars/all/storage @@ -12,7 +12,15 @@ storage_bootstrap_user: "{{ lookup('env', 'USER') }}" # List of networks to which storage nodes are attached. storage_network_interfaces: > {{ (storage_default_network_interfaces + - storage_extra_network_interfaces) | unique | list }} + storage_extra_network_interfaces + + ([ceph_storage_net_name] + if storage_needs_ceph_network else []) + + ([ceph_storage_mgmt_net_name] + if storage_needs_ceph_mgmt_network else []) + + ([swift_storage_net_name] + if storage_needs_swift_network else []) + + ([swift_storage_replication_net_name] + if storage_needs_swift_replication_network else [])) | reject('none') | unique | list }} # List of default networks to which storage nodes are attached. storage_default_network_interfaces: > @@ -24,6 +32,24 @@ storage_default_network_interfaces: > # List of extra networks to which storage nodes are attached. storage_extra_network_interfaces: [] +# Whether this host requires access to Ceph networks. 
+storage_needs_ceph_network: >- + {{ kolla_enable_ceph | bool and + inventory_hostname in query('inventory_hostnames', ceph_hosts) }} + +storage_needs_ceph_mgmt_network: >- + {{ kolla_enable_ceph | bool and + inventory_hostname in query('inventory_hostnames', ceph_hosts) }} + +# Whether this host requires access to Swift networks. +storage_needs_swift_network: >- + {{ kolla_enable_swift | bool and + inventory_hostname in query('inventory_hostnames', swift_hosts) }} + +storage_needs_swift_replication_network: >- + {{ kolla_enable_swift | bool and + inventory_hostname in query('inventory_hostnames', swift_hosts) }} + ############################################################################### # Storage node BIOS configuration. diff --git a/ansible/group_vars/all/swift b/ansible/group_vars/all/swift new file mode 100644 index 000000000..00f7c5b6d --- /dev/null +++ b/ansible/group_vars/all/swift @@ -0,0 +1,67 @@ +--- +############################################################################### +# OpenStack Swift configuration. + +# Short name of the kolla container image used to build rings. Default is the +# swift-object image. +swift_ring_build_image_name: swift-object + +# Full name of the kolla container image used to build rings. +swift_ring_build_image: "{{ kolla_docker_registry ~ '/' if kolla_docker_registry else '' }}{{ kolla_docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-{{ swift_ring_build_image_name }}:{{ kolla_openstack_release }}" + +# Ansible host pattern matching hosts on which Swift object storage services +# are deployed. The default is to use hosts in the 'storage' group. +swift_hosts: "storage" + +# Name of the host used to build Swift rings. Default is the first host of +# 'swift_hosts'. +swift_ring_build_host: "{{ query('inventory_hostnames', swift_hosts)[0] }}" + +# ID of the Swift region for this host. Default is 1. +swift_region: 1 + +# ID of the Swift zone. 
This can be set to different values for different hosts +# to place them in different zones. Default is 0. +swift_zone: 0 + +# Base-2 logarithm of the number of partitions. +# i.e. num_partitions=2^part_power. Default is 10. +swift_part_power: 10 + +# Object replication count. Default is the smaller of the number of Swift +# hosts, or 3. +swift_replication_count: "{{ [query('inventory_hostnames', swift_hosts) | length, 3] | min }}" + +# Minimum time in hours between moving a given partition. Default is 1. +swift_min_part_hours: 1 + +# Ports on which Swift services listen. Default is: +# object: 6000 +# account: 6001 +# container: 6002 +swift_service_ports: + object: 6000 + account: 6001 + container: 6002 + +# List of block devices to use for Swift. Each item is a dict with the +# following items: +# - 'device': Block device path. Required. +# - 'fs_label': Name of the label used to create the file system on the device. +# Optional. Default is to use the basename of the device. +# - 'services': List of services that will use this block device. Optional. +# Default is 'swift_block_device_default_services'. Allowed items are +# 'account', 'container', and 'object'. +# - 'weight': Weight of the block device. Optional. Default is +# 'swift_block_device_default_weight'. +swift_block_devices: [] + +# Default weight to assign to block devices in the ring. Default is 100. +swift_block_device_default_weight: 100 + +# Default list of services to assign block devices to. Allowed items are +# 'account', 'container', and 'object'. Default value is all of these. +swift_block_device_default_services: + - account + - container + - object diff --git a/ansible/group_vars/controllers/swift b/ansible/group_vars/controllers/swift deleted file mode 100644 index e80e6ab1e..000000000 --- a/ansible/group_vars/controllers/swift +++ /dev/null @@ -1,16 +0,0 @@ ---- -############################################################################### -# OpenStack Swift configuration. 
- -# Base-2 logarithm of the number of partitions. -# i.e. num_partitions=2^. -swift_part_power: 10 - -# Object replication count. -swift_replication_count: "{{ [groups['controllers'] | length, 3] | min }}" - -# Minimum time in hours between moving a given partition. -swift_min_part_hours: 1 - -# Number of Swift Zones. -swift_num_zones: 5 diff --git a/ansible/kolla-ansible.yml b/ansible/kolla-ansible.yml index e470a6c4e..580c823cd 100644 --- a/ansible/kolla-ansible.yml +++ b/ansible/kolla-ansible.yml @@ -28,6 +28,26 @@ kolla_cluster_interface: "{{ storage_mgmt_net_name | net_interface | replace('-', '_') }}" when: storage_mgmt_net_name in network_interfaces + - name: Set Ceph storage network interface + set_fact: + kolla_ceph_storage_interface: "{{ ceph_storage_net_name | net_interface | replace('-', '_') }}" + when: ceph_storage_net_name in network_interfaces + + - name: Set Ceph cluster network interface + set_fact: + kolla_ceph_cluster_interface: "{{ ceph_storage_mgmt_net_name | net_interface | replace('-', '_') }}" + when: ceph_storage_mgmt_net_name in network_interfaces + + - name: Set Swift storage network interface + set_fact: + kolla_swift_storage_interface: "{{ swift_storage_net_name | net_interface | replace('-', '_') }}" + when: swift_storage_net_name in network_interfaces + + - name: Set Swift cluster network interface + set_fact: + kolla_swift_replication_interface: "{{ swift_storage_replication_net_name | net_interface | replace('-', '_') }}" + when: swift_storage_replication_net_name in network_interfaces + - name: Set provision network interface set_fact: kolla_provision_interface: "{{ provision_wl_net_name | net_interface | replace('-', '_') }}" diff --git a/ansible/roles/kolla-ansible/defaults/main.yml b/ansible/roles/kolla-ansible/defaults/main.yml index 1c9f33dfb..dac122865 100644 --- a/ansible/roles/kolla-ansible/defaults/main.yml +++ b/ansible/roles/kolla-ansible/defaults/main.yml @@ -109,6 +109,10 @@ 
kolla_overcloud_inventory_pass_through_host_vars: - "kolla_api_interface" - "kolla_storage_interface" - "kolla_cluster_interface" + - "kolla_ceph_storage_interface" + - "kolla_ceph_cluster_interface" + - "kolla_swift_storage_interface" + - "kolla_swift_replication_interface" - "kolla_provision_interface" - "kolla_inspector_dnsmasq_interface" - "kolla_dns_interface" @@ -126,6 +130,10 @@ kolla_overcloud_inventory_pass_through_host_vars_map: kolla_api_interface: "api_interface" kolla_storage_interface: "storage_interface" kolla_cluster_interface: "cluster_interface" + kolla_ceph_storage_interface: "ceph_storage_interface" + kolla_ceph_cluster_interface: "ceph_cluster_interface" + kolla_swift_storage_interface: "swift_storage_interface" + kolla_swift_replication_interface: "swift_replication_interface" kolla_provision_interface: "provision_interface" kolla_inspector_dnsmasq_interface: "ironic_dnsmasq_interface" kolla_dns_interface: "dns_interface" diff --git a/ansible/roles/kolla-ansible/tests/test-extras.yml b/ansible/roles/kolla-ansible/tests/test-extras.yml index a7add8640..6dfab3773 100644 --- a/ansible/roles/kolla-ansible/tests/test-extras.yml +++ b/ansible/roles/kolla-ansible/tests/test-extras.yml @@ -26,6 +26,10 @@ kolla_provision_interface: "eth8" kolla_inspector_dnsmasq_interface: "eth9" kolla_tunnel_interface: "eth10" + kolla_ceph_storage_interface: "eth11" + kolla_ceph_cluster_interface: "eth12" + kolla_swift_storage_interface: "eth13" + kolla_swift_replication_interface: "eth14" - name: Add a compute host to the inventory add_host: @@ -38,6 +42,7 @@ kolla_neutron_external_interfaces: "eth4,eth5" kolla_neutron_bridge_names: "br0,br1" kolla_tunnel_interface: "eth6" + kolla_ceph_storage_interface: "eth7" - name: Create a temporary directory tempfile: @@ -321,6 +326,10 @@ - kolla_external_vip_interface - storage_interface - cluster_interface + - ceph_storage_interface + - ceph_cluster_interface + - swift_storage_interface + - swift_replication_interface - 
provision_interface - ironic_dnsmasq_interface - dns_interface @@ -456,6 +465,10 @@ api_interface: "eth2" storage_interface: "eth3" cluster_interface: "eth4" + ceph_storage_interface: "eth11" + ceph_cluster_interface: "eth12" + swift_storage_interface: "eth13" + swift_replication_interface: "eth14" provision_interface: "eth8" ironic_dnsmasq_interface: "eth9" dns_interface: "eth5" @@ -469,6 +482,7 @@ network_interface: "eth0" api_interface: "eth2" storage_interface: "eth3" + ceph_storage_interface: "eth7" tunnel_interface: "eth6" neutron_external_interface: "eth4,eth5" neutron_bridge_name: "br0,br1" diff --git a/ansible/roles/kolla-openstack/vars/main.yml b/ansible/roles/kolla-openstack/vars/main.yml index 457230c41..b295bcf1e 100644 --- a/ansible/roles/kolla-openstack/vars/main.yml +++ b/ansible/roles/kolla-openstack/vars/main.yml @@ -153,6 +153,14 @@ kolla_openstack_custom_config: dest: "{{ kolla_node_custom_config_path }}/swift" patterns: "*" enabled: "{{ kolla_enable_swift }}" + untemplated: + # These are binary files, and should not be templated. + - account.builder + - account.ring.gz + - container.builder + - container.ring.gz + - object.builder + - object.ring.gz # Zookeeper. - src: "{{ kolla_extra_config_path }}/zookeeper" dest: "{{ kolla_node_custom_config_path }}/zookeeper" diff --git a/ansible/roles/swift-block-devices/defaults/main.yml b/ansible/roles/swift-block-devices/defaults/main.yml new file mode 100644 index 000000000..55c0f0562 --- /dev/null +++ b/ansible/roles/swift-block-devices/defaults/main.yml @@ -0,0 +1,11 @@ +--- +# Label used to create partitions. This is used by kolla-ansible to determine +# which disks to mount. +swift_block_devices_part_label: KOLLA_SWIFT_DATA + +# List of block devices to use for Swift. Each item is a dict with the +# following items: +# - 'device': Block device path. Required. +# - 'fs_label': Name of the label used to create the file system on the device. +# Optional. Default is to use the basename of the device. 
+swift_block_devices: [] diff --git a/ansible/roles/swift-block-devices/tasks/main.yml b/ansible/roles/swift-block-devices/tasks/main.yml new file mode 100644 index 000000000..e5a8ea76c --- /dev/null +++ b/ansible/roles/swift-block-devices/tasks/main.yml @@ -0,0 +1,72 @@ +--- +- name: Fail if swift_block_devices is not in the expected format + fail: + msg: >- + Device {{ device_index }} in swift_block_devices is in an invalid format. + Items should be a dict, containing at least a 'device' field. + with_items: "{{ swift_block_devices }}" + when: item is not mapping or 'device' not in item + loop_control: + index_var: device_index + +- name: Ensure required packages are installed + package: + name: "{{ item }}" + state: installed + become: True + when: swift_block_devices | length > 0 + with_items: + - parted + - xfsprogs + +- name: Check the presence of a partition on the Swift block devices + become: True + parted: + device: "{{ item.device }}" + with_items: "{{ swift_block_devices }}" + loop_control: + label: "{{ item.device }}" + register: swift_disk_info + +- name: Fail if the Swift block devices already have a partition + fail: + msg: > + The physical disk {{ item.item.device }} already has a partition. + Ensure that each disk in 'swift_block_devices' does not have any + partitions. 
+ with_items: "{{ swift_disk_info.results }}" + when: + - item.partitions | length > 0 + - item.partitions.0.name != swift_block_devices_part_label + loop_control: + label: "{{ item.item.device }}" + +- name: Ensure partitions exist for Swift block device + become: True + parted: + device: "{{ item.item.device }}" + number: 1 + label: gpt + name: "{{ swift_block_devices_part_label }}" + state: present + with_items: "{{ swift_disk_info.results }}" + when: item.partitions | length == 0 + loop_control: + label: "{{ item.item.device }}" + +- name: Ensure Swift XFS file systems exist + become: true + filesystem: + dev: "{{ partition_name }}" + force: true + fstype: xfs + opts: "-L {{ fs_label }}" + with_items: "{{ swift_disk_info.results }}" + when: item.partitions | length == 0 + loop_control: + label: "{{ device }}" + index_var: index + vars: + device: "{{ item.item.device }}" + partition_name: "{{ device }}{% if device.startswith('/dev/loop') %}p{% endif %}1" + fs_label: "{{ item.item.fs_label | default(device | basename) }}" diff --git a/ansible/roles/swift-block-devices/tests/main.yml b/ansible/roles/swift-block-devices/tests/main.yml new file mode 100644 index 000000000..c5819f7b1 --- /dev/null +++ b/ansible/roles/swift-block-devices/tests/main.yml @@ -0,0 +1,13 @@ +--- +- include: test-invalid-format.yml +- include: test-mount.yml +- include: test-bootstrapped.yml + +- hosts: localhost + connection: local + tasks: + - name: Fail if any tests failed + fail: + msg: > + Test failures: {{ test_failures }} + when: test_failures is defined diff --git a/ansible/roles/swift-block-devices/tests/test-bootstrapped.yml b/ansible/roles/swift-block-devices/tests/test-bootstrapped.yml new file mode 100644 index 000000000..0786ef159 --- /dev/null +++ b/ansible/roles/swift-block-devices/tests/test-bootstrapped.yml @@ -0,0 +1,68 @@ +--- +# Test case with one device that has already been partitioned. 
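The partition-name derivation used in the role's vars above can be sketched as a small Python helper (a hypothetical illustration only; the role itself uses the Jinja2 expression shown): loop devices take a 'p' separator before the partition number, while whole disks do not.

```python
def partition_name(device):
    """Return the path of partition 1 on a device, mirroring the role's
    Jinja2 expression: /dev/loop0 -> /dev/loop0p1, /dev/sdb -> /dev/sdb1."""
    separator = "p" if device.startswith("/dev/loop") else ""
    return "{}{}1".format(device, separator)


print(partition_name("/dev/sdb"))    # /dev/sdb1
print(partition_name("/dev/loop0"))  # /dev/loop0p1
```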
+ +- hosts: localhost + connection: local + tasks: + - name: Allocate a temporary file for a fake device + tempfile: + register: tempfile + + - name: Allocate a fake device file + command: fallocate -l 32M {{ tempfile.path }} + + - name: Find a free loopback device + command: losetup -f + register: loopback + become: true + + - name: Create a loopback device + command: losetup {{ loopback.stdout }} {{ tempfile.path }} + become: true + + - name: Add a partition + become: True + parted: + device: "{{ loopback.stdout }}" + number: 1 + label: gpt + name: KOLLA_SWIFT_DATA + state: present + + - block: + - name: Test the swift-block-devices role + include_role: + name: ../../swift-block-devices + vars: + swift_block_devices: + - device: "{{ loopback.stdout }}" + + - name: Get name of fake partition + parted: + device: "{{ loopback.stdout }}" + register: "disk_info" + become: True + + - name: Validate number of partitions + assert: + that: disk_info.partitions | length == 1 + msg: > + Number of partitions is not correct. + + - name: Validate partition label is present + assert: + that: "disk_info.partitions.0.name == 'KOLLA_SWIFT_DATA'" + msg: > + Name of partition is not correct. + + always: + - name: Remove the fake file + file: + name: "{{ loopback.stdout }}" + state: absent + become: true + + rescue: + - name: Flag that a failure occurred + set_fact: + test_failures: "{{ test_failures | default(0) | int + 1 }}" diff --git a/ansible/roles/swift-block-devices/tests/test-invalid-format.yml b/ansible/roles/swift-block-devices/tests/test-invalid-format.yml new file mode 100644 index 000000000..05261da31 --- /dev/null +++ b/ansible/roles/swift-block-devices/tests/test-invalid-format.yml @@ -0,0 +1,23 @@ +--- +# Test case with swift_block_devices in an invalid format. 
+ +- hosts: localhost + connection: local + tasks: + - block: + - name: Test the swift-block-devices role + include_role: + name: ../../swift-block-devices + vars: + swift_block_devices: + - /dev/fake + + rescue: + - name: Flag that the error was raised + set_fact: + raised_error: true + + - name: Flag that a failure occurred + set_fact: + test_failures: "{{ test_failures | default(0) | int + 1 }}" + when: raised_error is not defined diff --git a/ansible/roles/swift-block-devices/tests/test-mount.yml b/ansible/roles/swift-block-devices/tests/test-mount.yml new file mode 100644 index 000000000..1579eb72b --- /dev/null +++ b/ansible/roles/swift-block-devices/tests/test-mount.yml @@ -0,0 +1,60 @@ +--- +# Test case with one device that has not yet been tagged by kayobe with the +# kolla-ansible bootstrap label. + +- hosts: localhost + connection: local + tasks: + - name: Allocate a temporary file for a fake device + tempfile: + register: tempfile + + - name: Allocate a fake device file + command: fallocate -l 32M {{ tempfile.path }} + + - name: Find a free loopback device + command: losetup -f + register: loopback + become: true + + - name: Create a loopback device + command: losetup {{ loopback.stdout }} {{ tempfile.path }} + become: true + + - block: + - name: Test the swift-block-devices role + include_role: + name: ../../swift-block-devices + vars: + swift_block_devices: + - device: "{{ loopback.stdout }}" + + - name: Get name of fake partition + parted: + device: "{{ loopback.stdout }}" + register: "disk_info" + become: True + + - name: Validate number of partitions + assert: + that: disk_info.partitions | length == 1 + msg: > + Number of partitions is not correct. + + - name: Validate partition label is present + assert: + that: "disk_info.partitions.0.name == 'KOLLA_SWIFT_DATA'" + msg: > + Name of partition is not correct. 
+ + always: + - name: Remove the fake file + file: + name: "{{ loopback.stdout }}" + state: absent + become: true + + rescue: + - name: Flag that a failure occurred + set_fact: + test_failures: "{{ test_failures | default(0) | int + 1 }}" diff --git a/ansible/roles/swift-rings/defaults/main.yml b/ansible/roles/swift-rings/defaults/main.yml new file mode 100644 index 000000000..32ae7dea4 --- /dev/null +++ b/ansible/roles/swift-rings/defaults/main.yml @@ -0,0 +1,49 @@ +--- +# Host on which to build Swift rings. +swift_ring_build_host: + +# Path to Kayobe configuration for Swift. +swift_config_path: + +# Path in the container in which to build rings. +swift_container_build_path: /tmp/swift-rings + +# Path on the build host in which to store ring files temporarily. +swift_ring_build_path: /tmp/swift-rings + +# Docker image to use to build rings. +swift_ring_build_image: + +# Base-2 logarithm of the number of partitions. +# i.e. num_partitions=2^part_power. +swift_part_power: + +# Object replication count. +swift_replication_count: + +# Minimum time in hours between moving a given partition. +swift_min_part_hours: + +# List of configuration items for each host. Each item is a dict containing the +# following fields: +# - host: hostname +# - region: Swift region +# - zone: Swift zone +# - ip: storage network IP address +# - ports: dict of ports for the storage network +# - replication_ip: replication network IP address +# - replication_ports: dict of ports for the replication network +# - block_devices: list of block devices to use for Swift. Each item is a dict with the +# following items: +# - 'device': Block device path. Required. +# - 'fs_label': Name of the label used to create the file system on the device. +# Optional. Default is to use the basename of the device. +# - 'services': List of services that will use this block device. Optional. +# Default is 'block_device_default_services' for this host. +# - 'weight': Weight of the block device. Optional. 
Default is +# 'block_device_default_weight' for this host. +# - 'block_device_default_services': default list of services to assign block +# devices on this host to. +# - 'block_device_default_weight': default weight to assign to block devices on +# this host. +swift_host_config: [] diff --git a/ansible/roles/swift-rings/files/swift-ring-builder.py b/ansible/roles/swift-rings/files/swift-ring-builder.py new file mode 100644 index 000000000..bf2ae655a --- /dev/null +++ b/ansible/roles/swift-rings/files/swift-ring-builder.py @@ -0,0 +1,130 @@ +#!/usr/bin/env python + +""" +Script to build a Swift ring from a declarative YAML configuration. This is +implemented as a script to avoid repeated 'docker exec' commands, which could +take a long time. + +Usage: + +python swift-ring-builder.py <config path> <build path> <service name> + +Example: + +python swift-ring-builder.py /path/to/config.yml /path/to/builds object + +Example configuration format: + +--- +part_power: 10 +replication_count: 3 +min_part_hours: 1 +hosts: + - host: swift1 + region: 1 + zone: 1 + ip: 10.0.0.1 + port: 6001 + replication_ip: 10.1.0.1 + replication_port: 6001 + devices: + - device: /dev/sdb + weight: 100 + - device: /dev/sdc + weight: 100 +""" + +from __future__ import print_function +import subprocess +import sys + +import yaml + + +class RingBuilder(object): + """Helper class for building Swift rings.""" + + def __init__(self, build_path, service_name): + self.build_path = build_path + self.service_name = service_name + + def get_base_command(self): + return [ + 'swift-ring-builder', + '%s/%s.builder' % (self.build_path, self.service_name), + ] + + def create(self, part_power, replication_count, min_part_hours): + cmd = self.get_base_command() + cmd += [ + 'create', + "{}".format(part_power), + "{}".format(replication_count), + "{}".format(min_part_hours), + ] + try: + subprocess.check_call(cmd) + except subprocess.CalledProcessError: + print("Failed to create %s ring" % self.service_name) + sys.exit(1) + + def add_device(self, host, 
device): + cmd = self.get_base_command() + cmd += [ + 'add', + '--region', "{}".format(host['region']), + '--zone', "{}".format(host['zone']), + '--ip', host['ip'], + '--port', "{}".format(host['port']), + '--replication-ip', host['replication_ip'], + '--replication-port', "{}".format(host['replication_port']), + '--device', device['device'], + '--weight', "{}".format(device['weight']), + ] + try: + subprocess.check_call(cmd) + except subprocess.CalledProcessError: + print("Failed to add device %s on host %s to %s ring" % + (device['device'], host['host'], self.service_name)) + sys.exit(1) + + def rebalance(self): + cmd = self.get_base_command() + cmd += [ + 'rebalance', + ] + try: + subprocess.check_call(cmd) + except subprocess.CalledProcessError: + print("Failed to rebalance %s ring" % self.service_name) + sys.exit(1) + + +def build_rings(config, build_path, service_name): + builder = RingBuilder(build_path, service_name) + builder.create(config['part_power'], config['replication_count'], + config['min_part_hours']) + for host in config['hosts']: + devices = host['devices'] + # If no devices are present for this host, this will be None. + if devices is None: + continue + for device in devices: + builder.add_device(host, device) + builder.rebalance() + + +def main(): + if len(sys.argv) != 4: + raise Exception("Usage: {0} <config path> <build path> " + "<service name>".format(sys.argv[0])) + config_path = sys.argv[1] + build_path = sys.argv[2] + service_name = sys.argv[3] + with open(config_path) as f: + config = yaml.safe_load(f) + build_rings(config, build_path, service_name) + + +if __name__ == "__main__": + main() diff --git a/ansible/roles/swift-rings/tasks/main.yml b/ansible/roles/swift-rings/tasks/main.yml new file mode 100644 index 000000000..cb5312687 --- /dev/null +++ b/ansible/roles/swift-rings/tasks/main.yml @@ -0,0 +1,69 @@ +--- +# We generate a configuration file and execute a python script in a container +# that builds a ring based on the config file contents. 
Doing it this way +# avoids a large task loop with a docker container invocation for each step, which would be +# quite slow. + +# Execute the following commands on the ring build host. +- block: + # Facts required for ansible_user_uid and ansible_user_gid. + - name: Gather facts for swift ring build host + setup: + + - name: Ensure Swift ring build directory exists + file: + path: "{{ swift_ring_build_path }}" + state: directory + + - name: Ensure Swift ring builder script exists + copy: + src: swift-ring-builder.py + dest: "{{ swift_ring_build_path }}" + + - name: Ensure Swift ring builder configuration exists + template: + src: swift-ring.yml.j2 + dest: "{{ swift_ring_build_path }}/{{ service_name }}-ring.yml" + with_items: "{{ swift_service_names }}" + loop_control: + loop_var: service_name + + - name: Ensure Swift rings exist + docker_container: + cleanup: true + command: >- + python {{ swift_container_build_path }}/swift-ring-builder.py + {{ swift_container_build_path }}/{{ item }}-ring.yml + {{ swift_container_build_path }} + {{ item }} + detach: false + image: "{{ swift_ring_build_image }}" + name: "swift_{{ item }}_ring_builder" + user: "{{ ansible_user_uid }}:{{ ansible_user_gid }}" + volumes: + - "{{ swift_ring_build_path }}/:{{ swift_container_build_path }}/" + with_items: "{{ swift_service_names }}" + + - name: Ensure Swift ring files are copied + fetch: + src: "{{ swift_ring_build_path }}/{{ item[0] }}.{{ item[1] }}" + dest: "{{ swift_config_path }}/{{ item[0] }}.{{ item[1] }}" + flat: true + mode: 0644 + with_nested: + - "{{ swift_service_names }}" + - - ring.gz + - builder + become: true + + always: + - name: Remove Swift ring build directory from build host + file: + path: "{{ swift_ring_build_path }}" + state: absent + + delegate_to: "{{ swift_ring_build_host }}" + vars: + # NOTE: Without this, the build host's ansible_host variable will not be + # respected when using delegate_to.
+ ansible_host: "{{ hostvars[swift_ring_build_host].ansible_host | default(swift_ring_build_host) }}" diff --git a/ansible/roles/swift-rings/templates/swift-ring.yml.j2 b/ansible/roles/swift-rings/templates/swift-ring.yml.j2 new file mode 100644 index 000000000..08aecc646 --- /dev/null +++ b/ansible/roles/swift-rings/templates/swift-ring.yml.j2 @@ -0,0 +1,21 @@ +--- +part_power: {{ swift_part_power }} +replication_count: {{ swift_replication_count }} +min_part_hours: {{ swift_min_part_hours }} +hosts: +{% for host_config in swift_host_config %} + - host: {{ host_config.host }} + region: {{ host_config.region }} + zone: {{ host_config.zone }} + ip: {{ host_config.ip }} + port: {{ host_config.ports[service_name] }} + replication_ip: {{ host_config.replication_ip }} + replication_port: {{ host_config.replication_ports[service_name] }} + devices: +{% for device in host_config.block_devices %} +{% if service_name in (device.services | default(host_config.block_device_default_services)) %} + - device: {{ device.fs_label | default(device.device | basename) }} + weight: {{ device.weight | default(host_config.block_device_default_weight) }} +{% endif %} +{% endfor %} +{% endfor %} diff --git a/ansible/roles/swift-setup/vars/main.yml b/ansible/roles/swift-rings/vars/main.yml similarity index 100% rename from ansible/roles/swift-setup/vars/main.yml rename to ansible/roles/swift-rings/vars/main.yml diff --git a/ansible/roles/swift-setup/defaults/main.yml b/ansible/roles/swift-setup/defaults/main.yml deleted file mode 100644 index cfc0e1afe..000000000 --- a/ansible/roles/swift-setup/defaults/main.yml +++ /dev/null @@ -1,34 +0,0 @@ ---- -# List of names of block devices to use for Swift. -swift_block_devices: [] - -# Docker image to use to build rings. -swift_image: - -# Host on which to build rings. -swift_ring_build_host: - -# Path in which to build ring files. -swift_ring_build_path: /tmp/swift-rings - -# Ports on which Swift services listen. 
-swift_service_ports: - object: 6000 - account: 6001 - container: 6002 - -# Base-2 logarithm of the number of partitions. -# i.e. num_partitions=2^. -swift_part_power: - -# Object replication count. -swift_replication_count: - -# Minimum time in hours between moving a given partition. -swift_min_part_hours: - -# ID of the region for this Swift service. -swift_region: - -# ID of the zone for this Swift service. -swift_zone: diff --git a/ansible/roles/swift-setup/tasks/devices.yml b/ansible/roles/swift-setup/tasks/devices.yml deleted file mode 100644 index 0daeee8a0..000000000 --- a/ansible/roles/swift-setup/tasks/devices.yml +++ /dev/null @@ -1,10 +0,0 @@ ---- -- name: Ensure Swift partitions exist - command: parted /dev/{{ item }} -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1 - with_items: "{{ swift_block_devices }}" - become: True - -- name: Ensure Swift XFS file systems exist - command: mkfs.xfs -f -L d{{ swift_block_devices.index(item) }} /dev/{{ item }}{% if item.startswith('loop') %}p{% endif %}1 - with_items: "{{ swift_block_devices }}" - become: True diff --git a/ansible/roles/swift-setup/tasks/main.yml b/ansible/roles/swift-setup/tasks/main.yml deleted file mode 100644 index 2927dab7d..000000000 --- a/ansible/roles/swift-setup/tasks/main.yml +++ /dev/null @@ -1,3 +0,0 @@ ---- -- include_tasks: devices.yml -- include_tasks: rings.yml diff --git a/ansible/roles/swift-setup/tasks/rings.yml b/ansible/roles/swift-setup/tasks/rings.yml deleted file mode 100644 index 3ecbca293..000000000 --- a/ansible/roles/swift-setup/tasks/rings.yml +++ /dev/null @@ -1,75 +0,0 @@ ---- -- name: Ensure Swift ring build directory exists - file: - path: "{{ swift_ring_build_path }}" - state: directory - delegate_to: "{{ swift_ring_build_host }}" - run_once: True - -- name: Ensure Swift rings are created - command: > - docker run - --rm - -v {{ swift_ring_build_path }}/:{{ kolla_config_path }}/config/swift/ - {{ swift_image }} - swift-ring-builder {{ kolla_config_path 
}}/config/swift/{{ item }}.builder create - {{ swift_part_power }} - {{ swift_replication_count }} - {{ swift_min_part_hours }} - with_items: "{{ swift_service_names }}" - delegate_to: "{{ swift_ring_build_host }}" - run_once: True - -- name: Ensure devices are added to Swift rings - command: > - docker run - --rm - -v {{ swift_ring_build_path }}/:{{ kolla_config_path }}/config/swift/ - {{ swift_image }} - swift-ring-builder {{ kolla_config_path }}/config/swift/{{ item[0] }}.builder add - --region {{ swift_region }} - --zone {{ swift_zone }} - --ip {{ internal_net_name | net_ip }} - --port {{ swift_service_ports[item[0]] }} - --device {{ item[1] }} - --weight 100 - with_nested: - - "{{ swift_service_names }}" - - "{{ swift_block_devices }}" - delegate_to: "{{ swift_ring_build_host }}" - -- name: Ensure Swift rings are rebalanced - command: > - docker run - --rm - -v {{ swift_ring_build_path }}/:{{ kolla_config_path }}/config/swift/ - {{ swift_image }} - swift-ring-builder {{ kolla_config_path }}/config/swift/{{ item }}.builder rebalance - with_items: "{{ swift_service_names }}" - delegate_to: "{{ swift_ring_build_host }}" - run_once: True - -- name: Ensure Swift ring files are copied - local_action: - module: copy - src: "{{ swift_ring_build_path }}/{{ item[0] }}.{{ item[1] }}" - dest: "{{ kolla_config_path }}/config/swift/{{ item[0] }}.{{ item[1] }}" - remote_src: True - owner: "{{ ansible_user_uid }}" - group: "{{ ansible_user_gid }}" - mode: 0644 - with_nested: - - "{{ swift_service_names }}" - - - ring.gz - - builder - delegate_to: "{{ swift_ring_build_host }}" - become: True - run_once: True - -- name: Remove Swift ring build directory from build host - file: - path: "{{ swift_ring_build_path }}" - state: absent - delegate_to: "{{ swift_ring_build_host }}" - become: True - run_once: True diff --git a/ansible/swift-block-devices.yml b/ansible/swift-block-devices.yml new file mode 100644 index 000000000..c476a7e30 --- /dev/null +++ 
b/ansible/swift-block-devices.yml @@ -0,0 +1,11 @@ +--- +- name: Ensure Swift block devices are prepared + hosts: "{{ swift_hosts }}" + vars: + swift_hosts: storage + tags: + - swift + - swift-block-devices + roles: + - role: swift-block-devices + when: kolla_enable_swift | bool diff --git a/ansible/swift-rings.yml b/ansible/swift-rings.yml new file mode 100644 index 000000000..e78da2dde --- /dev/null +++ b/ansible/swift-rings.yml @@ -0,0 +1,35 @@ +--- +- name: Ensure swift ring files exist + hosts: localhost + tags: + - swift + - swift-rings + tasks: + - name: Initialise a fact about swift hosts + set_fact: + swift_host_config: [] + + - name: Update a fact about Swift hosts + set_fact: + swift_host_config: "{{ swift_host_config + [swift_host] }}" + vars: + swift_host: + host: "{{ host }}" + region: "{{ hostvars[host]['swift_region'] }}" + zone: "{{ hostvars[host]['swift_zone'] }}" + ip: "{{ swift_storage_net_name | net_ip(inventory_hostname=host) }}" + ports: "{{ hostvars[host]['swift_service_ports'] }}" + replication_ip: "{{ swift_storage_replication_net_name | net_ip(inventory_hostname=host) }}" + replication_ports: "{{ hostvars[host]['swift_service_ports'] }}" + block_devices: "{{ hostvars[host]['swift_block_devices'] }}" + block_device_default_services: "{{ hostvars[host]['swift_block_device_default_services'] }}" + block_device_default_weight: "{{ hostvars[host]['swift_block_device_default_weight'] }}" + with_inventory_hostnames: "{{ swift_hosts }}" + loop_control: + loop_var: host + + - include_role: + name: swift-rings + vars: + swift_config_path: "{{ kayobe_config_path }}/kolla/config/swift" + when: kolla_enable_swift | bool diff --git a/ansible/swift-setup.yml b/ansible/swift-setup.yml deleted file mode 100644 index 14b924d89..000000000 --- a/ansible/swift-setup.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -- hosts: controllers - tags: - - swift - roles: - - role: swift-setup - swift_image: "kolla/{{ kolla_base_distro }}-{{ kolla_install_type }}-swift-base:{{ 
kolla_openstack_release }}" - swift_ring_build_host: "{{ groups['controllers'][0] }}" - # ID of the region for this Swift service. - swift_region: 1 - # ID of the zone for this Swift service. - swift_zone: "{{ groups['controllers'].index(inventory_hostname) % swift_num_zones }}" - when: kolla_enable_swift | bool diff --git a/doc/source/configuration/network.rst b/doc/source/configuration/network.rst index a674d8513..455568e07 100644 --- a/doc/source/configuration/network.rst +++ b/doc/source/configuration/network.rst @@ -475,6 +475,18 @@ Storage network (``storage_net_name``) Name of the network used to carry storage data traffic. Storage management network (``storage_mgmt_net_name``) Name of the network used to carry storage management traffic. +Ceph storage network (``ceph_storage_net_name``) + Name of the network used to carry Ceph storage data traffic. + Defaults to the storage network (``storage_net_name``). +Ceph storage management network (``ceph_storage_mgmt_net_name``) + Name of the network used to carry Ceph storage management traffic. + Defaults to the storage management network (``storage_mgmt_net_name``). +Swift storage network (``swift_storage_net_name``) + Name of the network used to carry Swift storage data traffic. + Defaults to the storage network (``storage_net_name``). +Swift storage replication network (``swift_storage_replication_net_name``) + Name of the network used to carry Swift storage replication traffic. + Defaults to the storage management network (``storage_mgmt_net_name``). Workload inspection network (``inspection_net_name``) Name of the network used to perform hardware introspection on the bare metal workload hosts.
@@ -501,6 +513,10 @@ To configure network roles in a system with two networks, ``example1`` and external_net_name: example2 storage_net_name: example2 storage_mgmt_net_name: example2 + ceph_storage_net_name: example2 + ceph_storage_mgmt_net_name: example2 + swift_storage_net_name: example2 + swift_storage_replication_net_name: example2 inspection_net_name: example2 cleaning_net_name: example2 @@ -733,6 +749,19 @@ a list of names of additional networks to attach. Alternatively, the list may be completely overridden by setting ``monitoring_network_interfaces``. These variables are found in ``${KAYOBE_CONFIG_PATH}/monitoring.yml``. +Storage Hosts +------------- + +By default, the storage hosts are attached to the following networks: + +* overcloud admin network +* internal network +* storage network +* storage management network + +In addition, if Ceph or Swift is enabled, the storage hosts may also be attached to the +dedicated Ceph and Swift storage, storage management, and replication networks. + Virtualised Compute Hosts ------------------------- diff --git a/doc/source/deployment.rst b/doc/source/deployment.rst index f2a6320db..80a8c9b3a 100644 --- a/doc/source/deployment.rst +++ b/doc/source/deployment.rst @@ -391,6 +391,18 @@ should be set to ``True``. To build images locally:: If images have been built previously, they will not be rebuilt. To force rebuilding images, use the ``--force-rebuild`` argument. +Building Swift Rings +-------------------- + +.. note:: + + This section can be skipped if Swift is not in use. + +Swift uses ring files to control placement of data across a cluster.
These +files can be generated automatically using the following command:: + + (kayobe) $ kayobe overcloud swift rings generate + Deploying Containerised Services -------------------------------- diff --git a/etc/kayobe/ceph.yml b/etc/kayobe/ceph.yml new file mode 100644 index 000000000..50ceedc5b --- /dev/null +++ b/etc/kayobe/ceph.yml @@ -0,0 +1,7 @@ +--- +############################################################################### +# OpenStack Ceph configuration. + +# Ansible host pattern matching hosts on which Ceph storage services +# are deployed. The default is to use hosts in the 'storage' group. +#ceph_hosts: diff --git a/etc/kayobe/inventory/group_vars/compute/network-interfaces b/etc/kayobe/inventory/group_vars/compute/network-interfaces index 421f69d39..1aab473f4 100644 --- a/etc/kayobe/inventory/group_vars/compute/network-interfaces +++ b/etc/kayobe/inventory/group_vars/compute/network-interfaces @@ -22,6 +22,11 @@ # storage_net_bridge_ports: # storage_net_bond_slaves: +# Ceph storage network IP information. +# ceph_storage_net_interface: +# ceph_storage_net_bridge_ports: +# ceph_storage_net_bond_slaves: + ############################################################################### # Dummy variable to allow Ansible to accept this file. workaround_ansible_issue_8743: yes diff --git a/etc/kayobe/inventory/group_vars/controllers/network-interfaces b/etc/kayobe/inventory/group_vars/controllers/network-interfaces index 0f4964de2..3bd95e1ef 100644 --- a/etc/kayobe/inventory/group_vars/controllers/network-interfaces +++ b/etc/kayobe/inventory/group_vars/controllers/network-interfaces @@ -32,6 +32,16 @@ # storage_mgmt_net_bridge_ports: # storage_mgmt_net_bond_slaves: +# Ceph storage network IP information. +# ceph_storage_net_interface: +# ceph_storage_net_bridge_ports: +# ceph_storage_net_bond_slaves: + +# Swift storage network IP information.
+# swift_storage_net_interface: +# swift_storage_net_bridge_ports: +# swift_storage_net_bond_slaves: + ############################################################################### # Dummy variable to allow Ansible to accept this file. workaround_ansible_issue_8743: yes diff --git a/etc/kayobe/inventory/group_vars/storage/network-interfaces b/etc/kayobe/inventory/group_vars/storage/network-interfaces new file mode 100644 index 000000000..22654ef7c --- /dev/null +++ b/etc/kayobe/inventory/group_vars/storage/network-interfaces @@ -0,0 +1,47 @@ +--- +############################################################################### +# Network interface definitions for the storage group. + +# Overcloud provisioning network IP information. +# provision_oc_net_interface: +# provision_oc_net_bridge_ports: +# provision_oc_net_bond_slaves: + +# External network IP information. +# external_net_interface: +# external_net_bridge_ports: +# external_net_bond_slaves: + +# Storage network IP information. +# storage_net_interface: +# storage_net_bridge_ports: +# storage_net_bond_slaves: + +# Storage management network IP information. +# storage_mgmt_net_interface: +# storage_mgmt_net_bridge_ports: +# storage_mgmt_net_bond_slaves: + +# Ceph storage network IP information. +# ceph_storage_net_interface: +# ceph_storage_net_bridge_ports: +# ceph_storage_net_bond_slaves: + +# Ceph storage management network IP information. +# ceph_storage_mgmt_net_interface: +# ceph_storage_mgmt_net_bridge_ports: +# ceph_storage_mgmt_net_bond_slaves: + +# Swift storage network IP information. +# swift_storage_net_interface: +# swift_storage_net_bridge_ports: +# swift_storage_net_bond_slaves: + +# Swift storage replication network IP information.
+# swift_storage_replication_net_interface: +# swift_storage_replication_net_bridge_ports: +# swift_storage_replication_net_bond_slaves: + +############################################################################### +# Dummy variable to allow Ansible to accept this file. +workaround_ansible_issue_8743: yes diff --git a/etc/kayobe/networks.yml b/etc/kayobe/networks.yml index 4062c6f58..b3a27ebc1 100644 --- a/etc/kayobe/networks.yml +++ b/etc/kayobe/networks.yml @@ -45,6 +45,18 @@ # Name of the network used to carry storage management traffic. #storage_mgmt_net_name: +# Name of the network used to carry Ceph storage data traffic. +#ceph_storage_net_name: + +# Name of the network used to carry Ceph storage management traffic. +#ceph_storage_mgmt_net_name: + +# Name of the network used to carry Swift storage data traffic. +#swift_storage_net_name: + +# Name of the network used to carry Swift storage replication traffic. +#swift_storage_replication_net_name: + # Name of the network used to perform hardware introspection on the bare metal # workload hosts. #inspection_net_name: diff --git a/etc/kayobe/storage.yml b/etc/kayobe/storage.yml index e1e1795cc..69e12526b 100644 --- a/etc/kayobe/storage.yml +++ b/etc/kayobe/storage.yml @@ -18,6 +18,18 @@ # List of extra networks to which storage nodes are attached. #storage_extra_network_interfaces: +# Whether this host requires access to the Ceph storage network. +#storage_needs_ceph_network: + +# Whether this host requires access to the Ceph storage management network. +#storage_needs_ceph_mgmt_network: + +# Whether this host requires access to the Swift storage network. +#storage_needs_swift_network: + +# Whether this host requires access to the Swift storage replication network. +#storage_needs_swift_replication_network: + ############################################################################### # Storage node BIOS configuration.
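Taken together, the ``networks.yml`` and ``storage.yml`` options above let an operator split storage traffic onto dedicated networks. A hypothetical kayobe-config fragment might look like the following; the network names and flag values are illustrative, not defaults shipped by this change:

```yaml
# etc/kayobe/networks.yml (hypothetical network names).
ceph_storage_net_name: ceph_storage
ceph_storage_mgmt_net_name: ceph_storage_mgmt
swift_storage_net_name: swift_storage
swift_storage_replication_net_name: swift_replication

# etc/kayobe/storage.yml: attach storage hosts to the dedicated networks.
storage_needs_ceph_network: true
storage_needs_ceph_mgmt_network: true
storage_needs_swift_network: true
storage_needs_swift_replication_network: true
```

If the ``*_net_name`` variables are left unset, the Ceph and Swift networks fall back to ``storage_net_name`` and ``storage_mgmt_net_name``, as described in the network.rst changes above.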
diff --git a/etc/kayobe/swift.yml b/etc/kayobe/swift.yml index 3af868cf9..c8812b068 100644 --- a/etc/kayobe/swift.yml +++ b/etc/kayobe/swift.yml @@ -2,18 +2,63 @@ ############################################################################### # OpenStack Swift configuration. +# Short name of the kolla container image used to build rings. Default is the +# swift-object image. +#swift_ring_build_image_name: + +# Full name of the kolla container image used to build rings. +#swift_ring_build_image: + +# Ansible host pattern matching hosts on which Swift object storage services +# are deployed. The default is to use hosts in the 'storage' group. +#swift_hosts: + +# Name of the host used to build Swift rings. Default is the first host of +# 'swift_hosts'. +#swift_ring_build_host: + +# ID of the Swift region for this host. Default is 1. +#swift_region: + +# ID of the Swift zone. This can be set to different values for different hosts +# to place them in different zones. Default is 0. +#swift_zone: + # Base-2 logarithm of the number of partitions. -# i.e. num_partitions=2^<swift_part_power>. +# i.e. num_partitions=2^<swift_part_power>. Default is 10. #swift_part_power: -# Object replication count. +# Object replication count. Default is the smaller of the number of Swift +# hosts, or 3. #swift_replication_count: -# Minimum time in hours between moving a given partition. +# Minimum time in hours between moving a given partition. Default is 1. #swift_min_part_hours: -# Number of Swift Zones. -#swift_num_zones: +# Ports on which Swift services listen. Default is: +# object: 6000 +# account: 6001 +# container: 6002 +#swift_service_ports: + +# List of block devices to use for Swift. Each item is a dict with the +# following items: +# - 'device': Block device path. Required. +# - 'fs_label': Name of the label used to create the file system on the device. +# Optional. Default is to use the basename of the device. +# - 'services': List of services that will use this block device. Optional.
+# Default is 'swift_block_device_default_services'. Allowed items are +# 'account', 'container', and 'object'. +# - 'weight': Weight of the block device. Optional. Default is +# 'swift_block_device_default_weight'. +#swift_block_devices: + +# Default weight to assign to block devices in the ring. Default is 100. +#swift_block_device_default_weight: + +# Default list of services to assign block devices to. Allowed items are +# 'account', 'container', and 'object'. Default value is all of these. +#swift_block_device_default_services: ############################################################################### # Dummy variable to allow Ansible to accept this file. diff --git a/kayobe/cli/commands.py b/kayobe/cli/commands.py index ea4cf5d19..c0659b44d 100644 --- a/kayobe/cli/commands.py +++ b/kayobe/cli/commands.py @@ -901,7 +901,7 @@ class OvercloudHostConfigure(KollaAnsibleMixin, KayobeAnsibleMixin, VaultMixin, # Further kayobe playbooks. playbooks = _build_playbook_list( "pip", "kolla-target-venv", "kolla-host", - "docker", "ceph-block-devices") + "docker", "ceph-block-devices", "swift-block-devices") self.run_kayobe_playbooks(parsed_args, playbooks, extra_vars=extra_vars, limit="overcloud") @@ -996,7 +996,8 @@ class OvercloudServiceConfigurationGenerate(KayobeAnsibleMixin, playbooks = _build_playbook_list("kolla-ansible") self.run_kayobe_playbooks(parsed_args, playbooks, tags="config") - playbooks = _build_playbook_list("kolla-openstack", "swift-setup") + playbooks = _build_playbook_list("kolla-openstack") + self.run_kayobe_playbooks(parsed_args, playbooks) # Run kolla-ansible prechecks before deployment. 
@@ -1085,7 +1086,8 @@ class OvercloudServiceDeploy(KollaAnsibleMixin, KayobeAnsibleMixin, VaultMixin, playbooks = _build_playbook_list("kolla-ansible") self.run_kayobe_playbooks(parsed_args, playbooks, tags="config") - playbooks = _build_playbook_list("kolla-openstack", "swift-setup") + playbooks = _build_playbook_list("kolla-openstack") + self.run_kayobe_playbooks(parsed_args, playbooks) # Run kolla-ansible prechecks before deployment. @@ -1142,7 +1144,8 @@ class OvercloudServiceReconfigure(KollaAnsibleMixin, KayobeAnsibleMixin, playbooks = _build_playbook_list("kolla-ansible") self.run_kayobe_playbooks(parsed_args, playbooks, tags="config") - playbooks = _build_playbook_list("kolla-openstack", "swift-setup") + playbooks = _build_playbook_list("kolla-openstack") + self.run_kayobe_playbooks(parsed_args, playbooks) # Run kolla-ansible prechecks before reconfiguration. @@ -1353,6 +1356,15 @@ class OvercloudPostConfigure(KayobeAnsibleMixin, VaultMixin, Command): self.run_kayobe_playbooks(parsed_args, playbooks) +class OvercloudSwiftRingsGenerate(KayobeAnsibleMixin, VaultMixin, Command): + """Generate Swift rings.""" + + def take_action(self, parsed_args): + self.app.LOG.debug("Generating Swift rings") + playbooks = _build_playbook_list("swift-rings") + self.run_kayobe_playbooks(parsed_args, playbooks) + + class NetworkConnectivityCheck(KayobeAnsibleMixin, VaultMixin, Command): """Check network connectivity between hosts in the control plane. 
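As a concrete sketch of the per-device options documented in ``etc/kayobe/swift.yml`` above, a configuration using both the defaults and the optional fields might look like this (the device paths, label, and weights are illustrative):

```yaml
# etc/kayobe/swift.yml (hypothetical devices).
swift_block_devices:
  # All defaults: file system label from the basename (sdb), assigned to all
  # services, weight taken from swift_block_device_default_weight.
  - device: /dev/sdb
  # Custom label, account and container rings only, reduced weight.
  - device: /dev/sdc
    fs_label: d1
    services:
      - account
      - container
    weight: 50
```

During ``kayobe overcloud host configure`` these devices are prepared and labelled; the labels are then referenced as ring device names during ``kayobe overcloud swift rings generate``.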
diff --git a/kayobe/tests/unit/cli/test_commands.py b/kayobe/tests/unit/cli/test_commands.py index e4e23fef9..83fa6cbd4 100644 --- a/kayobe/tests/unit/cli/test_commands.py +++ b/kayobe/tests/unit/cli/test_commands.py @@ -1001,6 +1001,8 @@ class TestCase(unittest.TestCase): utils.get_data_files_path("ansible", "docker.yml"), utils.get_data_files_path( "ansible", "ceph-block-devices.yml"), + utils.get_data_files_path( + "ansible", "swift-block-devices.yml"), ], limit="overcloud", extra_vars={"pip_applicable_users": [None]}, @@ -1425,6 +1427,26 @@ class TestCase(unittest.TestCase): ] self.assertEqual(expected_calls, mock_run.call_args_list) + @mock.patch.object(commands.KayobeAnsibleMixin, + "run_kayobe_playbooks") + def test_overcloud_swift_rings_generate(self, mock_run): + command = commands.OvercloudSwiftRingsGenerate(TestApp(), []) + parser = command.get_parser("test") + parsed_args = parser.parse_args([]) + + result = command.run(parsed_args) + self.assertEqual(0, result) + + expected_calls = [ + mock.call( + mock.ANY, + [ + utils.get_data_files_path("ansible", "swift-rings.yml"), + ], + ), + ] + self.assertEqual(expected_calls, mock_run.call_args_list) + @mock.patch.object(commands.KayobeAnsibleMixin, "run_kayobe_playbooks") def test_baremetal_compute_inspect(self, mock_run): diff --git a/releasenotes/notes/seperate-storage-networks-a659bcd30dd70665.yaml b/releasenotes/notes/seperate-storage-networks-a659bcd30dd70665.yaml new file mode 100644 index 000000000..d19c56991 --- /dev/null +++ b/releasenotes/notes/seperate-storage-networks-a659bcd30dd70665.yaml @@ -0,0 +1,17 @@ +--- +features: + - | + Add support for separate storage networks for both Ceph and Swift. + This adds four additional networks, which can be used to separate the storage + network traffic as follows: + + * Ceph storage network (ceph_storage_net_name) is used to carry Ceph storage + data traffic. Defaults to the storage network (storage_net_name).
+ * Ceph storage management network (ceph_storage_mgmt_net_name) is used to carry + storage management traffic. Defaults to the storage management network + (storage_mgmt_net_name). + * Swift storage network (swift_storage_net_name) is used to carry Swift storage data + traffic. Defaults to the storage network (storage_net_name). + * Swift storage replication network (swift_storage_replication_net_name) is used to + carry storage management traffic. Defaults to the storage management network + (storage_mgmt_net_name). diff --git a/releasenotes/notes/swift-improvements-07a2b75967f642e8.yaml b/releasenotes/notes/swift-improvements-07a2b75967f642e8.yaml new file mode 100644 index 000000000..5d5f2bff6 --- /dev/null +++ b/releasenotes/notes/swift-improvements-07a2b75967f642e8.yaml @@ -0,0 +1,15 @@ +--- +features: + - | + Improvements to Swift device management and ring generation. + + The device management and ring generation are now separate, with device management + occurring during 'kayobe overcloud host configure', and ring generation during a + new command, 'kayobe overcloud swift rings generate'. + + For the device management, we now use standard Ansible modules rather than commands + for device preparation. File system labels can be configured for each device individually. + + For ring generation, all commands are run on a single host, by default a host in the Swift + storage group. A python script runs in one of the kolla Swift containers, which consumes + an autogenerated YAML config file that defines the layout of the rings. 
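The autogenerated YAML config file mentioned above follows the format shown in the swift-ring-builder.py docstring. For example, a generated object ring definition for a single storage host might look like the following (addresses and label are illustrative); note that the generated file uses the file system label as the ring device name rather than the block device path:

```yaml
---
part_power: 10
replication_count: 3
min_part_hours: 1
hosts:
  - host: swift1
    region: 1
    zone: 1
    ip: 10.0.0.1
    port: 6000
    replication_ip: 10.1.0.1
    replication_port: 6000
    devices:
      - device: d0
        weight: 100
```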
diff --git a/setup.cfg b/setup.cfg index 872582e1c..db1a0c6a6 100644 --- a/setup.cfg +++ b/setup.cfg @@ -70,6 +70,7 @@ kayobe.cli= overcloud_service_destroy = kayobe.cli.commands:OvercloudServiceDestroy overcloud_service_reconfigure = kayobe.cli.commands:OvercloudServiceReconfigure overcloud_service_upgrade = kayobe.cli.commands:OvercloudServiceUpgrade + overcloud_swift_rings_generate = kayobe.cli.commands:OvercloudSwiftRingsGenerate physical_network_configure = kayobe.cli.commands:PhysicalNetworkConfigure playbook_run = kayobe.cli.commands:PlaybookRun seed_container_image_build = kayobe.cli.commands:SeedContainerImageBuild