Add LVM NVMe support

This patch adds NVMe LVM support to the existing iSCSI LVM configuration
support.

We deprecate the CINDER_ISCSI_HELPER configuration option since we are
no longer limited to iSCSI, and replace it with the CINDER_TARGET_HELPER
option.

The patch also adds three more target configuration options:

- CINDER_TARGET_PROTOCOL
- CINDER_TARGET_PREFIX
- CINDER_TARGET_PORT

These options have different defaults based on the selected target
helper.  For tgtadm and lioadm they are iscsi,
iqn.2010-10.org.openstack:, and 3260 respectively, and for nvmet they
are nvmet_rdma, nvme-subsystem-1, and 4420.

Besides nvmet_rdma, the CINDER_TARGET_PROTOCOL option can also be set to
nvmet_tcp and nvmet_fc.
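As an illustration, a hypothetical local.conf fragment selecting NVMe over TCP could look like this (the variable names come from this patch; the combination shown is an example, not a default):

```shell
# Hypothetical local.conf fragment: use the nvmet target helper with the
# TCP transport instead of the nvmet_rdma default (illustrative values).
CINDER_TARGET_HELPER="nvmet"
CINDER_TARGET_PROTOCOL="nvmet_tcp"
CINDER_TARGET_PREFIX="nvme-subsystem-1"
CINDER_TARGET_PORT=4420
```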

For the RDMA transport protocol, DevStack uses Soft-RoCE, creating a
device on top of the network interface.

LVM NVMe-TCP support is added in the dependency mentioned in the footer,
and LVM NVMe-FC will be added in later patches (os-brick and cinder
patches are needed), but the code here should still be valid.

Change-Id: I6578cdc27489b34916cdeb72ba3fdf06ea9d4ad8
Gorka Eguileor 2021-10-14 09:55:56 +02:00
parent bd6e5205b1
commit 97061c9a1f
6 changed files with 154 additions and 27 deletions


@@ -669,6 +669,35 @@ adjusted by setting ``CINDER_QUOTA_VOLUMES``, ``CINDER_QUOTA_BACKUPS``,
or ``CINDER_QUOTA_SNAPSHOTS`` to the desired value. (The default for
each is 10.)
+DevStack's Cinder LVM configuration module currently supports both iSCSI and
+NVMe connections, and we can choose which one to use with the options
+``CINDER_TARGET_HELPER``, ``CINDER_TARGET_PROTOCOL``, ``CINDER_TARGET_PREFIX``,
+and ``CINDER_TARGET_PORT``.
+
+Defaults use iSCSI with the LIO target manager::
+
+  CINDER_TARGET_HELPER="lioadm"
+  CINDER_TARGET_PROTOCOL="iscsi"
+  CINDER_TARGET_PREFIX="iqn.2010-10.org.openstack:"
+  CINDER_TARGET_PORT=3260
+
+Additionally there are three supported transport protocols for NVMe,
+``nvmet_rdma``, ``nvmet_tcp``, and ``nvmet_fc``, and when the ``nvmet`` target
+is selected the protocol, prefix, and port defaults change to more sensible
+defaults for NVMe::
+
+  CINDER_TARGET_HELPER="nvmet"
+  CINDER_TARGET_PROTOCOL="nvmet_rdma"
+  CINDER_TARGET_PREFIX="nvme-subsystem-1"
+  CINDER_TARGET_PORT=4420
+
+When selecting the RDMA transport protocol, DevStack creates on the Cinder
+nodes a Soft-RoCE device on top of ``HOST_IP_IFACE`` or, if that is not
+defined, on top of the interface with IP address ``HOST_IP`` or ``HOST_IPV6``.
+
+This Soft-RoCE device is always created on the Nova compute side, since we
+cannot tell beforehand whether there will be an RDMA connection or not.

Keystone
~~~~~~~~
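The helper-dependent defaults described above can be sketched as a small standalone shell function (a simplified mirror of the settings logic in this patch; the function name `target_defaults` is ours, not DevStack's):

```shell
# Sketch: pick protocol/prefix/port defaults from the selected target
# helper; values the user has already set always win (the ${VAR:-default}
# expansion only fills in empty/unset variables).
target_defaults() {
    local helper=$1
    if [[ $helper == 'nvmet' ]]; then
        CINDER_TARGET_PROTOCOL=${CINDER_TARGET_PROTOCOL:-'nvmet_rdma'}
        CINDER_TARGET_PREFIX=${CINDER_TARGET_PREFIX:-'nvme-subsystem-1'}
        CINDER_TARGET_PORT=${CINDER_TARGET_PORT:-4420}
    else
        CINDER_TARGET_PROTOCOL=${CINDER_TARGET_PROTOCOL:-'iscsi'}
        CINDER_TARGET_PREFIX=${CINDER_TARGET_PREFIX:-'iqn.2010-10.org.openstack:'}
        CINDER_TARGET_PORT=${CINDER_TARGET_PORT:-3260}
    fi
}
```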


@@ -43,6 +43,13 @@ GITDIR["python-cinderclient"]=$DEST/python-cinderclient
GITDIR["python-brick-cinderclient-ext"]=$DEST/python-brick-cinderclient-ext
CINDER_DIR=$DEST/cinder
+if [[ $SERVICE_IP_VERSION == 6 ]]; then
+    CINDER_MY_IP="$HOST_IPV6"
+else
+    CINDER_MY_IP="$HOST_IP"
+fi
+
# Cinder virtual environment
if [[ ${USE_VENV} = True ]]; then
    PROJECT_VENV["cinder"]=${CINDER_DIR}.venv
@@ -88,13 +95,32 @@ CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-lvm:lvmdriver-1}
CINDER_VOLUME_CLEAR=${CINDER_VOLUME_CLEAR:-${CINDER_VOLUME_CLEAR_DEFAULT:-zero}}
CINDER_VOLUME_CLEAR=$(echo ${CINDER_VOLUME_CLEAR} | tr '[:upper:]' '[:lower:]')
# Default to lioadm
-CINDER_ISCSI_HELPER=${CINDER_ISCSI_HELPER:-lioadm}
+if [[ -n "$CINDER_ISCSI_HELPER" ]]; then
+    if [[ -z "$CINDER_TARGET_HELPER" ]]; then
+        deprecated 'Using CINDER_ISCSI_HELPER is deprecated, use CINDER_TARGET_HELPER instead'
+        CINDER_TARGET_HELPER="$CINDER_ISCSI_HELPER"
+    else
+        deprecated 'Deprecated CINDER_ISCSI_HELPER is set, but is being overwritten by CINDER_TARGET_HELPER'
+    fi
+fi
+CINDER_TARGET_HELPER=${CINDER_TARGET_HELPER:-lioadm}
+
+if [[ $CINDER_TARGET_HELPER == 'nvmet' ]]; then
+    CINDER_TARGET_PROTOCOL=${CINDER_TARGET_PROTOCOL:-'nvmet_rdma'}
+    CINDER_TARGET_PREFIX=${CINDER_TARGET_PREFIX:-'nvme-subsystem-1'}
+    CINDER_TARGET_PORT=${CINDER_TARGET_PORT:-4420}
+else
+    CINDER_TARGET_PROTOCOL=${CINDER_TARGET_PROTOCOL:-'iscsi'}
+    CINDER_TARGET_PREFIX=${CINDER_TARGET_PREFIX:-'iqn.2010-10.org.openstack:'}
+    CINDER_TARGET_PORT=${CINDER_TARGET_PORT:-3260}
+fi
# EL and SUSE should only use lioadm
if is_fedora || is_suse; then
-    if [[ ${CINDER_ISCSI_HELPER} != "lioadm" ]]; then
-        die "lioadm is the only valid Cinder target_helper config on this platform"
+    if [[ ${CINDER_TARGET_HELPER} != "lioadm" && ${CINDER_TARGET_HELPER} != 'nvmet' ]]; then
+        die "lioadm and nvmet are the only valid Cinder target_helper config on this platform"
    fi
fi
@@ -187,7 +213,7 @@ function _cinder_cleanup_apache_wsgi {
function cleanup_cinder {
    # ensure the volume group is cleared up because fails might
    # leave dead volumes in the group
-    if [ "$CINDER_ISCSI_HELPER" = "tgtadm" ]; then
+    if [ "$CINDER_TARGET_HELPER" = "tgtadm" ]; then
        local targets
        targets=$(sudo tgtadm --op show --mode target)
        if [ $? -ne 0 ]; then
@@ -215,8 +241,14 @@ function cleanup_cinder {
        else
            stop_service tgtd
        fi
-    else
+    elif [ "$CINDER_TARGET_HELPER" = "lioadm" ]; then
        sudo cinder-rtstool get-targets | sudo xargs -rn 1 cinder-rtstool delete
+    elif [ "$CINDER_TARGET_HELPER" = "nvmet" ]; then
+        # If we don't disconnect everything vgremove will block
+        sudo nvme disconnect-all
+        sudo nvmetcli clear
+    else
+        die $LINENO "Unknown value \"$CINDER_TARGET_HELPER\" for CINDER_TARGET_HELPER"
    fi

    if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
@@ -267,7 +299,7 @@ function configure_cinder {
    iniset $CINDER_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
-    iniset $CINDER_CONF DEFAULT target_helper "$CINDER_ISCSI_HELPER"
+    iniset $CINDER_CONF DEFAULT target_helper "$CINDER_TARGET_HELPER"
    iniset $CINDER_CONF database connection `database_connection_url cinder`
    iniset $CINDER_CONF DEFAULT api_paste_config $CINDER_API_PASTE_INI
    iniset $CINDER_CONF DEFAULT rootwrap_config "$CINDER_CONF_DIR/rootwrap.conf"
@@ -275,11 +307,7 @@ function configure_cinder {
    iniset $CINDER_CONF DEFAULT osapi_volume_listen $CINDER_SERVICE_LISTEN_ADDRESS
    iniset $CINDER_CONF DEFAULT state_path $CINDER_STATE_PATH
    iniset $CINDER_CONF oslo_concurrency lock_path $CINDER_STATE_PATH
-    if [[ $SERVICE_IP_VERSION == 6 ]]; then
-        iniset $CINDER_CONF DEFAULT my_ip "$HOST_IPV6"
-    else
-        iniset $CINDER_CONF DEFAULT my_ip "$HOST_IP"
-    fi
+    iniset $CINDER_CONF DEFAULT my_ip "$CINDER_MY_IP"
    iniset $CINDER_CONF key_manager backend cinder.keymgr.conf_key_mgr.ConfKeyManager
    iniset $CINDER_CONF key_manager fixed_key $(openssl rand -hex 16)
    if [[ -n "$CINDER_ALLOWED_DIRECT_URL_SCHEMES" ]]; then
@@ -465,9 +493,9 @@ function init_cinder {
function install_cinder {
    git_clone $CINDER_REPO $CINDER_DIR $CINDER_BRANCH
    setup_develop $CINDER_DIR
-    if [[ "$CINDER_ISCSI_HELPER" == "tgtadm" ]]; then
+    if [[ "$CINDER_TARGET_HELPER" == "tgtadm" ]]; then
        install_package tgt
-    elif [[ "$CINDER_ISCSI_HELPER" == "lioadm" ]]; then
+    elif [[ "$CINDER_TARGET_HELPER" == "lioadm" ]]; then
        if is_ubuntu; then
            # TODO(frickler): Workaround for https://launchpad.net/bugs/1819819
            sudo mkdir -p /etc/target
@@ -476,6 +504,43 @@ function install_cinder {
        else
            install_package targetcli
        fi
+    elif [[ "$CINDER_TARGET_HELPER" == "nvmet" ]]; then
+        install_package nvme-cli
+        # TODO: Remove manual installation of the dependency when the
+        # requirement is added to nvmetcli:
+        # http://lists.infradead.org/pipermail/linux-nvme/2022-July/033576.html
+        if is_ubuntu; then
+            install_package python3-configshell-fb
+        else
+            install_package python3-configshell
+        fi
+        # Install from source because Ubuntu doesn't have the package and
+        # some packaged versions didn't work on Python 3
+        pip_install git+git://git.infradead.org/users/hch/nvmetcli.git
+
+        sudo modprobe nvmet
+        sudo modprobe nvme-fabrics
+
+        if [[ $CINDER_TARGET_PROTOCOL == 'nvmet_rdma' ]]; then
+            install_package rdma-core
+            sudo modprobe nvme-rdma
+
+            # Create the Soft-RoCE device over the networking interface
+            local iface=${HOST_IP_IFACE:-`ip -br -$SERVICE_IP_VERSION a | grep $CINDER_MY_IP | awk '{print $1}'`}
+            if [[ -z "$iface" ]]; then
+                die $LINENO "Cannot find interface to bind Soft-RoCE"
+            fi
+
+            if ! sudo rdma link | grep $iface ; then
+                sudo rdma link add rxe_$iface type rxe netdev $iface
+            fi
+        elif [[ $CINDER_TARGET_PROTOCOL == 'nvmet_tcp' ]]; then
+            sudo modprobe nvme-tcp
+        else  # 'nvmet_fc'
+            sudo modprobe nvme-fc
+        fi
    fi
}
@@ -512,7 +577,7 @@ function start_cinder {
        service_port=$CINDER_SERVICE_PORT_INT
        service_protocol="http"
    fi
-    if [ "$CINDER_ISCSI_HELPER" = "tgtadm" ]; then
+    if [ "$CINDER_TARGET_HELPER" = "tgtadm" ]; then
        if is_service_enabled c-vol; then
            # Delete any old stack.conf
            sudo rm -f /etc/tgt/conf.d/stack.conf
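The CINDER_ISCSI_HELPER deprecation fallback in this file can be exercised in isolation. In the sketch below, DevStack's `deprecated` helper is stubbed with a plain echo, and `resolve_target_helper` is our name for the extracted logic:

```shell
# Standalone sketch of the CINDER_ISCSI_HELPER -> CINDER_TARGET_HELPER
# fallback: the old variable is honored only when the new one is unset,
# and lioadm remains the final default.
deprecated() { echo "WARNING: $*" >&2; }  # stub of DevStack's helper

resolve_target_helper() {
    if [[ -n "$CINDER_ISCSI_HELPER" ]]; then
        if [[ -z "$CINDER_TARGET_HELPER" ]]; then
            deprecated 'Using CINDER_ISCSI_HELPER is deprecated, use CINDER_TARGET_HELPER instead'
            CINDER_TARGET_HELPER="$CINDER_ISCSI_HELPER"
        else
            deprecated 'Deprecated CINDER_ISCSI_HELPER is set, but is being overwritten by CINDER_TARGET_HELPER'
        fi
    fi
    CINDER_TARGET_HELPER=${CINDER_TARGET_HELPER:-lioadm}
}
```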


@@ -50,7 +50,7 @@ function configure_cinder_backend_lvm {
    iniset $CINDER_CONF $be_name volume_backend_name $be_name
    iniset $CINDER_CONF $be_name volume_driver "cinder.tests.fake_driver.FakeGateDriver"
    iniset $CINDER_CONF $be_name volume_group $VOLUME_GROUP_NAME-$be_name
-    iniset $CINDER_CONF $be_name target_helper "$CINDER_ISCSI_HELPER"
+    iniset $CINDER_CONF $be_name target_helper "$CINDER_TARGET_HELPER"
    iniset $CINDER_CONF $be_name lvm_type "$CINDER_LVM_TYPE"

    if [[ "$CINDER_VOLUME_CLEAR" == "non" ]]; then


@@ -50,7 +50,10 @@ function configure_cinder_backend_lvm {
    iniset $CINDER_CONF $be_name volume_backend_name $be_name
    iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.lvm.LVMVolumeDriver"
    iniset $CINDER_CONF $be_name volume_group $VOLUME_GROUP_NAME-$be_name
-    iniset $CINDER_CONF $be_name target_helper "$CINDER_ISCSI_HELPER"
+    iniset $CINDER_CONF $be_name target_helper "$CINDER_TARGET_HELPER"
+    iniset $CINDER_CONF $be_name target_protocol "$CINDER_TARGET_PROTOCOL"
+    iniset $CINDER_CONF $be_name target_port "$CINDER_TARGET_PORT"
+    iniset $CINDER_CONF $be_name target_prefix "$CINDER_TARGET_PREFIX"
    iniset $CINDER_CONF $be_name lvm_type "$CINDER_LVM_TYPE"
    iniset $CINDER_CONF $be_name volume_clear "$CINDER_VOLUME_CLEAR"
}

lib/lvm

@@ -130,7 +130,7 @@ function init_lvm_volume_group {
    local size=$2

    # Start the tgtd service on Fedora and SUSE if tgtadm is used
-    if is_fedora || is_suse && [[ "$CINDER_ISCSI_HELPER" = "tgtadm" ]]; then
+    if is_fedora || is_suse && [[ "$CINDER_TARGET_HELPER" = "tgtadm" ]]; then
        start_service tgtd
    fi
@@ -138,10 +138,14 @@ function init_lvm_volume_group {
    _create_lvm_volume_group $vg $size

    # Remove iscsi targets
-    if [ "$CINDER_ISCSI_HELPER" = "lioadm" ]; then
+    if [ "$CINDER_TARGET_HELPER" = "lioadm" ]; then
        sudo cinder-rtstool get-targets | sudo xargs -rn 1 cinder-rtstool delete
-    else
+    elif [ "$CINDER_TARGET_HELPER" = "tgtadm" ]; then
        sudo tgtadm --op show --mode target | awk '/Target/ {print $3}' | sudo xargs -r -n1 tgt-admin --delete
+    elif [ "$CINDER_TARGET_HELPER" = "nvmet" ]; then
+        # If we don't disconnect everything vgremove will block
+        sudo nvme disconnect-all
+        sudo nvmetcli clear
    fi
    _clean_lvm_volume_group $vg
}


@@ -97,6 +97,12 @@ NOVA_SERVICE_LISTEN_ADDRESS=${NOVA_SERVICE_LISTEN_ADDRESS:-$(ipv6_unquote $SERVI
METADATA_SERVICE_PORT=${METADATA_SERVICE_PORT:-8775}
NOVA_ENABLE_CACHE=${NOVA_ENABLE_CACHE:-True}
+if [[ $SERVICE_IP_VERSION == 6 ]]; then
+    NOVA_MY_IP="$HOST_IPV6"
+else
+    NOVA_MY_IP="$HOST_IP"
+fi
+
# Option to enable/disable config drive
# NOTE: Set ``FORCE_CONFIG_DRIVE="False"`` to turn OFF config drive
FORCE_CONFIG_DRIVE=${FORCE_CONFIG_DRIVE:-"False"}
@@ -219,6 +225,9 @@ function cleanup_nova {
        done
        sudo iscsiadm --mode node --op delete || true

+        # Disconnect all nvmeof connections
+        sudo nvme disconnect-all || true
+
        # Clean out the instances directory.
        sudo rm -rf $NOVA_INSTANCES_PATH/*
    fi
@@ -306,6 +315,7 @@ function configure_nova {
        fi
    fi

+    # Due to cinder bug #1966513 we ALWAYS need an initiator name for LVM
    # Ensure each compute host uses a unique iSCSI initiator
    echo InitiatorName=$(iscsi-iname) | sudo tee /etc/iscsi/initiatorname.iscsi
@@ -326,8 +336,28 @@ EOF
        # not work under FIPS.
        iniset -sudo /etc/iscsi/iscsid.conf DEFAULT "node.session.auth.chap_algs" "SHA3-256,SHA256"

-        # ensure that iscsid is started, even when disabled by default
-        restart_service iscsid
+        if [[ $CINDER_TARGET_HELPER != 'nvmet' ]]; then
+            # ensure that iscsid is started, even when disabled by default
+            restart_service iscsid
+        # For NVMe-oF we need different packages that may not be present
+        else
+            install_package nvme-cli
+            sudo modprobe nvme-fabrics
+
+            # Ensure NVMe is ready and create the Soft-RoCE device over the
+            # networking interface
+            if [[ $CINDER_TARGET_PROTOCOL == 'nvmet_rdma' ]]; then
+                sudo modprobe nvme-rdma
+                iface=${HOST_IP_IFACE:-`ip -br -$SERVICE_IP_VERSION a | grep $NOVA_MY_IP | awk '{print $1}'`}
+                if ! sudo rdma link | grep $iface ; then
+                    sudo rdma link add rxe_$iface type rxe netdev $iface
+                fi
+            elif [[ $CINDER_TARGET_PROTOCOL == 'nvmet_tcp' ]]; then
+                sudo modprobe nvme-tcp
+            else  # 'nvmet_fc'
+                sudo modprobe nvme-fc
+            fi
+        fi
    fi

    # Rebuild the config file from scratch
@@ -418,11 +448,7 @@ function create_nova_conf {
    iniset $NOVA_CONF filter_scheduler enabled_filters "$NOVA_FILTERS"
    iniset $NOVA_CONF scheduler workers "$API_WORKERS"
    iniset $NOVA_CONF neutron default_floating_pool "$PUBLIC_NETWORK_NAME"
-    if [[ $SERVICE_IP_VERSION == 6 ]]; then
-        iniset $NOVA_CONF DEFAULT my_ip "$HOST_IPV6"
-    else
-        iniset $NOVA_CONF DEFAULT my_ip "$HOST_IP"
-    fi
+    iniset $NOVA_CONF DEFAULT my_ip "$NOVA_MY_IP"
    iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
    iniset $NOVA_CONF DEFAULT osapi_compute_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
    iniset $NOVA_CONF DEFAULT metadata_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
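The interface lookup used on both the Cinder and Nova sides for the Soft-RoCE device (prefer an explicit `HOST_IP_IFACE`, otherwise parse `ip -br` output for the node's address) can be factored into a testable sketch; `pick_iface` is our name, not DevStack's:

```shell
# Sketch: resolve the interface for the Soft-RoCE device. Prefer an
# explicitly configured interface name; otherwise scan `ip -br address`
# style output for the line containing the node's IP and return the
# first column (the interface name).
pick_iface() {
    local ip_output=$1 addr=$2 explicit=$3
    if [[ -n "$explicit" ]]; then
        echo "$explicit"
    else
        echo "$ip_output" | grep "$addr" | awk '{print $1}'
    fi
}
```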