Use force=True for os-brick disconnect during delete

The 'force' parameter of os-brick's disconnect_volume() method allows
callers to ignore flushing errors and ensures that devices are removed
from the host.

We should use force=True when we are going to delete an instance to
avoid leaving leftover devices connected to the compute host, which
could later be reused to map volumes to an instance that should not
have access to those volumes.

We can use force=True even when disconnecting a volume that will not be
deleted on termination because os-brick will always attempt to flush
and disconnect gracefully before forcefully removing devices.
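The intended semantics can be sketched with a minimal stand-in connector
(the FakeConnector class and its FlushError are illustrative only, not
os-brick's actual implementation):

```python
class FlushError(Exception):
    pass


class FakeConnector:
    """Illustrative stand-in for an os-brick connector whose flush can fail."""

    def __init__(self, flush_fails=False):
        self.flush_fails = flush_fails
        self.device_removed = False

    def _flush(self):
        if self.flush_fails:
            raise FlushError("flush failed")

    def disconnect_volume(self, connection_info, device_info, force=False):
        # A graceful flush is always attempted first, even with force=True.
        try:
            self._flush()
        except FlushError:
            if not force:
                # Without force, a flush error aborts the disconnect and
                # can leave the device attached to the host.
                raise
        # With force=True (or a clean flush) the device is removed.
        self.device_removed = True


conn = FakeConnector(flush_fails=True)
try:
    conn.disconnect_volume({}, None)  # force defaults to False
except FlushError:
    pass
assert not conn.device_removed

conn.disconnect_volume({}, None, force=True)
assert conn.device_removed
```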

Closes-Bug: #2004555

Change-Id: I3629b84d3255a8fe9d8a7cea8c6131d7c40899e8
melanie witt 2023-02-15 22:37:40 +00:00 (committed by Dan Smith)
parent 105afb338b, commit db455548a1
39 changed files with 413 additions and 114 deletions

@ -284,7 +284,7 @@ Troubleshooting
Timeouts
~~~~~~~~
-Configure a :ref:`service user <user_token_timeout>` in case the user token
+Configure a :ref:`service user <service_user_token>` in case the user token
times out, e.g. during the snapshot and download of a large server image.
If RPC calls are timing out with a ``MessagingTimeout`` error in the logs,


@ -19,6 +19,7 @@ A list of config options based on different topics can be found below:
.. toctree::
:maxdepth: 1
+/admin/configuration/service-user-token
/admin/configuration/api
/admin/configuration/resize
/admin/configuration/cross-cell-resize


@ -0,0 +1,59 @@
.. _service_user_token:
===================
Service User Tokens
===================
.. note::
Configuration of service user tokens is **required** for every Nova service
for security reasons. See https://bugs.launchpad.net/nova/+bug/2004555 for
details.
Configure Nova to send service user tokens alongside regular user tokens when
making REST API calls to other services. The identity service (Keystone) will
authenticate a request using the service user token if the regular user token
has expired.
This is important when long-running operations such as live migration or
snapshot take long enough to exceed the expiry of the user token. Without the
service token, if a long-running operation exceeds the expiry of the user
token, post operations such as cleanup after a live migration could fail when
Nova calls other service APIs like block-storage (Cinder) or networking
(Neutron).
The service token is also used by services to validate whether the API caller
is a service. Some service APIs are restricted to service users only.
To set up service tokens, create a ``nova`` service user and ``service`` role
in the identity service (Keystone) and assign the ``service`` role to the
``nova`` service user.
Then, configure the :oslo.config:group:`service_user` section of the Nova
configuration file, for example:
.. code-block:: ini
[service_user]
send_service_user_token = true
auth_url = https://104.130.216.102/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = secretservice
...
And configure the other identity options as necessary for the service user,
much like you would configure nova to work with the image service (Glance) or
networking service (Neutron).
.. note::
Please note that the role assigned to the :oslo.config:group:`service_user`
needs to be in the configured
:oslo.config:option:`keystone_authtoken.service_token_roles` of other
services such as block-storage (Cinder), image (Glance), and networking
(Neutron).
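On the receiving side, each such service accepts the incoming service token
only if the token carries one of its configured roles. A minimal,
illustrative fragment for the block-storage service (the values shown are
assumptions, not verified defaults; Glance and Neutron are configured
analogously):

.. code-block:: ini

   # e.g. in cinder.conf
   [keystone_authtoken]
   service_token_roles = service
   service_token_roles_required = true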


@ -320,4 +320,4 @@ To make live-migration succeed, you have several options:
If live migrations routinely timeout or fail during cleanup operations due
to the user token timing out, consider configuring nova to use
-:ref:`service user tokens <user_token_timeout>`.
+:ref:`service user tokens <service_user_token>`.


@ -67,7 +67,7 @@ Create a snapshot of the instance
If snapshot operations routinely fail because the user token times out
while uploading a large disk image, consider configuring nova to use
-:ref:`service user tokens <user_token_timeout>`.
+:ref:`service user tokens <service_user_token>`.
#. Use the :command:`openstack image list` command to check the status
until the status is ``ACTIVE``:


@ -478,67 +478,3 @@ Ensure the ``compute`` endpoint in the identity service catalog is pointing
at ``/v2.1`` instead of ``/v2``. The former route supports microversions,
while the latter route is considered the legacy v2.0 compatibility-mode
route which renders all requests as if they were made on the legacy v2.0 API.
.. _user_token_timeout:
User token times out during long-running operations
---------------------------------------------------
Problem
~~~~~~~
Long-running operations such as live migration or snapshot can sometimes
overrun the expiry of the user token. In such cases, post operations such
as cleaning up after a live migration can fail when the nova-compute service
needs to cleanup resources in other services, such as in the block-storage
(cinder) or networking (neutron) services.
For example:
.. code-block:: console
2018-12-17 13:47:29.591 16987 WARNING nova.virt.libvirt.migration [req-7bc758de-b2e4-461b-a971-f79be6cd4703 313d1247d7b845da9c731eec53e50a26 2f693c782fa748c2baece8db95b4ba5b - default default] [instance: ead8ecc3-f473-4672-a67b-c44534c6042d] Live migration not completed after 2400 sec
2018-12-17 13:47:30.097 16987 WARNING nova.virt.libvirt.driver [req-7bc758de-b2e4-461b-a971-f79be6cd4703 313d1247d7b845da9c731eec53e50a26 2f693c782fa748c2baece8db95b4ba5b - default default] [instance: ead8ecc3-f473-4672-a67b-c44534c6042d] Migration operation was cancelled
2018-12-17 13:47:30.299 16987 ERROR nova.virt.libvirt.driver [req-7bc758de-b2e4-461b-a971-f79be6cd4703 313d1247d7b845da9c731eec53e50a26 2f693c782fa748c2baece8db95b4ba5b - default default] [instance: ead8ecc3-f473-4672-a67b-c44534c6042d] Live Migration failure: operation aborted: migration job: canceled by client: libvirtError: operation aborted: migration job: canceled by client
2018-12-17 13:47:30.685 16987 INFO nova.compute.manager [req-7bc758de-b2e4-461b-a971-f79be6cd4703 313d1247d7b845da9c731eec53e50a26 2f693c782fa748c2baece8db95b4ba5b - default default] [instance: ead8ecc3-f473-4672-a67b-c44534c6042d] Swapping old allocation on 3e32d595-bd1f-4136-a7f4-c6703d2fbe18 held by migration 17bec61d-544d-47e0-a1c1-37f9d7385286 for instance
2018-12-17 13:47:32.450 16987 ERROR nova.volume.cinder [req-7bc758de-b2e4-461b-a971-f79be6cd4703 313d1247d7b845da9c731eec53e50a26 2f693c782fa748c2baece8db95b4ba5b - default default] Delete attachment failed for attachment 58997d5b-24f0-4073-819e-97916fb1ee19. Error: The request you have made requires authentication. (HTTP 401) Code: 401: Unauthorized: The request you have made requires authentication. (HTTP 401)
Solution
~~~~~~~~
Configure nova to use service user tokens to supplement the regular user token
used to initiate the operation. The identity service (keystone) will then
authenticate a request using the service user token if the user token has
already expired.
To use, create a service user in the identity service similar as you would when
creating the ``nova`` service user.
Then configure the :oslo.config:group:`service_user` section of the nova
configuration file, for example:
.. code-block:: ini
[service_user]
send_service_user_token = True
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
password = secretservice
username = nova
auth_url = https://104.130.216.102/identity
...
And configure the other identity options as necessary for the service user,
much like you would configure nova to work with the image service (glance)
or networking service.
.. note::
Please note that the role of the :oslo.config:group:`service_user` you
configure needs to be a superset of
:oslo.config:option:`keystone_authtoken.service_token_roles` (The option
:oslo.config:option:`keystone_authtoken.service_token_roles` is configured
in cinder, glance and neutron).


@ -92,6 +92,26 @@ Install and configure components
Comment out or remove any other options in the ``[keystone_authtoken]``
section.
* In the ``[service_user]`` section, configure :ref:`service user
tokens <service_user_token>`:
.. path /etc/nova/nova.conf
.. code-block:: ini
[service_user]
send_service_user_token = true
auth_url = https://controller/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/nova/nova.conf


@ -84,6 +84,26 @@ Install and configure components
Comment out or remove any other options in the ``[keystone_authtoken]``
section.
* In the ``[service_user]`` section, configure :ref:`service user
tokens <service_user_token>`:
.. path /etc/nova/nova.conf
.. code-block:: ini
[service_user]
send_service_user_token = true
auth_url = https://controller/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/nova/nova.conf


@ -74,6 +74,26 @@ Install and configure components
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[service_user]`` section, configure :ref:`service user
tokens <service_user_token>`:
.. path /etc/nova/nova.conf
.. code-block:: ini
[service_user]
send_service_user_token = true
auth_url = https://controller/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/nova/nova.conf


@ -260,6 +260,26 @@ Install and configure components
Comment out or remove any other options in the ``[keystone_authtoken]``
section.
* In the ``[service_user]`` section, configure :ref:`service user
tokens <service_user_token>`:
.. path /etc/nova/nova.conf
.. code-block:: ini
[service_user]
send_service_user_token = true
auth_url = https://controller/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the
management interface IP address of the controller node:


@ -247,6 +247,26 @@ Install and configure components
Comment out or remove any other options in the ``[keystone_authtoken]``
section.
* In the ``[service_user]`` section, configure :ref:`service user
tokens <service_user_token>`:
.. path /etc/nova/nova.conf
.. code-block:: ini
[service_user]
send_service_user_token = true
auth_url = https://controller/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the
management interface IP address of the controller node:


@ -237,6 +237,26 @@ Install and configure components
Comment out or remove any other options in the ``[keystone_authtoken]``
section.
* In the ``[service_user]`` section, configure :ref:`service user
tokens <service_user_token>`:
.. path /etc/nova/nova.conf
.. code-block:: ini
[service_user]
send_service_user_token = true
auth_url = https://controller/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the
management interface IP address of the controller node:


@ -271,6 +271,15 @@ https://docs.openstack.org/latest/nova/admin/hw_machine_type.html"""))
return upgradecheck.Result(upgradecheck.Code.SUCCESS)
def _check_service_user_token(self):
if not CONF.service_user.send_service_user_token:
msg = (_("""
Service user token configuration is required for all Nova services.
For more details see the following:
https://docs.openstack.org/latest/nova/admin/configuration/service-user-token.html""")) # noqa
return upgradecheck.Result(upgradecheck.Code.FAILURE, msg)
return upgradecheck.Result(upgradecheck.Code.SUCCESS)
# The format of the check functions is to return an upgradecheck.Result
# object with the appropriate upgradecheck.Code and details set. If the
# check hits warnings or failures then those should be stored in the
@ -294,6 +303,8 @@ https://docs.openstack.org/latest/nova/admin/hw_machine_type.html"""))
(_('Older than N-1 computes'), _check_old_computes),
# Added in Wallaby
(_('hw_machine_type unset'), _check_machine_type_set),
# Added in Bobcat
(_('Service User Token Configuration'), _check_service_user_token),
)


@ -446,3 +446,19 @@ class TestCheckMachineTypeUnset(test.NoDBTestCase):
upgradecheck.Code.SUCCESS,
result.code
)
class TestUpgradeCheckServiceUserToken(test.NoDBTestCase):
def setUp(self):
super().setUp()
self.cmd = status.UpgradeCommands()
def test_service_user_token_not_configured(self):
result = self.cmd._check_service_user_token()
self.assertEqual(upgradecheck.Code.FAILURE, result.code)
def test_service_user_token_configured(self):
self.flags(send_service_user_token=True, group='service_user')
result = self.cmd._check_service_user_token()
self.assertEqual(upgradecheck.Code.SUCCESS, result.code)


@ -1129,7 +1129,7 @@ class VMOpsTestCase(test_base.HyperVBaseTestCase):
mock_unplug_vifs.assert_called_once_with(
mock_instance, mock.sentinel.fake_network_info)
mock_disconnect_volumes.assert_called_once_with(
-mock.sentinel.FAKE_BD_INFO)
+mock.sentinel.FAKE_BD_INFO, force=True)
mock_delete_disk_files.assert_called_once_with(
mock_instance.name)


@ -141,7 +141,13 @@ class VolumeOpsTestCase(test_base.HyperVBaseTestCase):
self._volumeops.disconnect_volumes(block_device_info)
fake_volume_driver.disconnect_volume.assert_called_once_with(
-block_device_mapping[0]['connection_info'])
+block_device_mapping[0]['connection_info'], force=False)
# Verify force=True
fake_volume_driver.disconnect_volume.reset_mock()
self._volumeops.disconnect_volumes(block_device_info, force=True)
fake_volume_driver.disconnect_volume.assert_called_once_with(
block_device_mapping[0]['connection_info'], force=True)
@mock.patch('time.sleep')
@mock.patch.object(volumeops.VolumeOps, '_get_volume_driver')
@ -181,7 +187,7 @@ class VolumeOpsTestCase(test_base.HyperVBaseTestCase):
if attach_failed:
fake_volume_driver.disconnect_volume.assert_called_once_with(
-fake_conn_info)
+fake_conn_info, force=False)
mock_sleep.assert_has_calls(
[mock.call(CONF.hyperv.volume_attach_retry_interval)] *
CONF.hyperv.volume_attach_retry_count)
@ -203,7 +209,13 @@ class VolumeOpsTestCase(test_base.HyperVBaseTestCase):
mock_get_volume_driver.assert_called_once_with(
mock.sentinel.conn_info)
fake_volume_driver.disconnect_volume.assert_called_once_with(
-mock.sentinel.conn_info)
+mock.sentinel.conn_info, force=False)
# Verify force=True
fake_volume_driver.disconnect_volume.reset_mock()
self._volumeops.disconnect_volume(mock.sentinel.conn_info, force=True)
fake_volume_driver.disconnect_volume.assert_called_once_with(
mock.sentinel.conn_info, force=True)
@mock.patch.object(volumeops.VolumeOps, '_get_volume_driver')
def test_detach_volume(self, mock_get_volume_driver):
@ -347,7 +359,13 @@ class BaseVolumeDriverTestCase(test_base.HyperVBaseTestCase):
self._base_vol_driver.disconnect_volume(conn_info)
self._conn.disconnect_volume.assert_called_once_with(
-conn_info['data'])
+conn_info['data'], force=False)
# Verify force=True
self._conn.disconnect_volume.reset_mock()
self._base_vol_driver.disconnect_volume(conn_info, force=True)
self._conn.disconnect_volume.assert_called_once_with(
conn_info['data'], force=True)
@mock.patch.object(volumeops.BaseVolumeDriver, '_get_disk_res_path')
def _test_get_disk_resource_path_by_conn_info(self,


@ -9722,7 +9722,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
drvr._disconnect_volume(
self.context, fake_connection_info, fake_instance_1)
mock_volume_driver.disconnect_volume.assert_called_once_with(
-fake_connection_info, fake_instance_1)
+fake_connection_info, fake_instance_1, force=False)
@mock.patch.object(libvirt_driver.LibvirtDriver, '_detach_encryptor')
@mock.patch('nova.objects.InstanceList.get_uuids_by_host')
@ -10096,7 +10096,12 @@ class LibvirtConnTestCase(test.NoDBTestCase,
device_name='vdc',
),
mock.call.detach_encryptor(**encryption),
-mock.call.disconnect_volume(connection_info, instance)])
+mock.call.disconnect_volume(
+    connection_info,
+    instance,
+    force=False,
+)
+])
get_device_conf_func = mock_detach_with_retry.mock_calls[0][1][2]
self.assertEqual(mock_guest.get_disk, get_device_conf_func.func)
self.assertEqual(('vdc',), get_device_conf_func.args)
@ -20474,16 +20479,64 @@ class LibvirtConnTestCase(test.NoDBTestCase,
self.context,
mock.sentinel.connection_info,
instance,
-destroy_secrets=False
+destroy_secrets=False,
+force=True
),
mock.call(
self.context,
mock.sentinel.connection_info,
instance,
-destroy_secrets=True
+destroy_secrets=True,
+force=True
)
])
@mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver')
@mock.patch(
'nova.virt.libvirt.driver.LibvirtDriver._should_disconnect_target',
new=mock.Mock(return_value=True))
@mock.patch('nova.virt.libvirt.driver.LibvirtDriver._detach_encryptor',
new=mock.Mock())
@mock.patch('nova.virt.libvirt.driver.LibvirtDriver._undefine_domain',
new=mock.Mock())
@mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_vpmems',
new=mock.Mock(return_value=None))
def test_cleanup_disconnect_volume(self, mock_vol_driver):
"""Verify that we call disconnect_volume() with force=True
cleanup() is called by destroy() when an instance is being deleted and
force=True should be passed down to os-brick's disconnect_volume()
call, which will ensure removal of devices regardless of errors.
We need to ensure that devices are removed when an instance is being
deleted to avoid leaving leftover devices that could later be
erroneously connected by external entities (example: multipathd) to
instances that should not have access to the volumes.
See https://bugs.launchpad.net/nova/+bug/2004555 for details.
"""
connection_info = mock.MagicMock()
block_device_info = {
'block_device_mapping': [
{
'connection_info': connection_info
}
]
}
instance = objects.Instance(self.context, **self.test_instance)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI())
drvr.cleanup(
self.context,
instance,
network_info={},
block_device_info=block_device_info,
destroy_vifs=False,
destroy_disks=False,
)
mock_vol_driver.return_value.disconnect_volume.assert_called_once_with(
connection_info, instance, force=True)
@mock.patch.object(libvirt_driver.LibvirtDriver, '_get_volume_encryption')
@mock.patch.object(libvirt_driver.LibvirtDriver, '_allow_native_luksv1')
def test_swap_volume_native_luks_blocked(self, mock_allow_native_luksv1,


@ -81,3 +81,23 @@ class LibvirtFibreChannelVolumeDriverTestCase(
self.assertEqual(requested_size, new_size)
libvirt_driver.connector.extend_volume.assert_called_once_with(
connection_info['data'])
def test_disconnect_volume(self):
device_path = '/dev/fake-dev'
connection_info = {'data': {'device_path': device_path}}
libvirt_driver = fibrechannel.LibvirtFibreChannelVolumeDriver(
self.fake_host)
libvirt_driver.connector.disconnect_volume = mock.MagicMock()
libvirt_driver.disconnect_volume(
connection_info, mock.sentinel.instance)
libvirt_driver.connector.disconnect_volume.assert_called_once_with(
connection_info['data'], connection_info['data'], force=False)
# Verify force=True
libvirt_driver.connector.disconnect_volume.reset_mock()
libvirt_driver.disconnect_volume(
connection_info, mock.sentinel.instance, force=True)
libvirt_driver.connector.disconnect_volume.assert_called_once_with(
connection_info['data'], connection_info['data'], force=True)


@ -57,10 +57,19 @@ class LibvirtISCSIVolumeDriverTestCase(
device=device_path))
libvirt_driver.disconnect_volume(connection_info,
mock.sentinel.instance)
libvirt_driver.connector.disconnect_volume.assert_called_once_with(
connection_info['data'], None, force=False)
msg = mock_LOG_warning.call_args_list[0]
self.assertIn('Ignoring VolumeDeviceNotFound', msg[0][0])
# Verify force=True
libvirt_driver.connector.disconnect_volume.reset_mock()
libvirt_driver.disconnect_volume(
connection_info, mock.sentinel.instance, force=True)
libvirt_driver.connector.disconnect_volume.assert_called_once_with(
connection_info['data'], None, force=True)
def test_extend_volume(self):
device_path = '/dev/fake-dev'
connection_info = {'data': {'device_path': device_path}}


@ -62,7 +62,13 @@ class LibvirtLightVolumeDriverTestCase(test_volume.LibvirtVolumeBaseTestCase):
connection_info = {'data': disk_info}
lightos_driver.disconnect_volume(connection_info, None)
lightos_driver.connector.disconnect_volume.assert_called_once_with(
-disk_info, None)
+disk_info, None, force=False)
# Verify force=True
lightos_driver.connector.disconnect_volume.reset_mock()
lightos_driver.disconnect_volume(connection_info, None, force=True)
lightos_driver.connector.disconnect_volume.assert_called_once_with(
disk_info, None, force=True)
@mock.patch('os_brick.initiator.connector.InitiatorConnector.factory',
new=mock.Mock(return_value=mock.Mock()))


@ -77,7 +77,13 @@ class LibvirtNVMEVolumeDriverTestCase(test_volume.LibvirtVolumeBaseTestCase):
connection_info = {'data': disk_info}
nvme_driver.disconnect_volume(connection_info, None)
nvme_driver.connector.disconnect_volume.assert_called_once_with(
-disk_info, None)
+disk_info, None, force=False)
# Verify force=True
nvme_driver.connector.disconnect_volume.reset_mock()
nvme_driver.disconnect_volume(connection_info, None, force=True)
nvme_driver.connector.disconnect_volume.assert_called_once_with(
disk_info, None, force=True)
@mock.patch('os_brick.initiator.connector.InitiatorConnector.factory',
new=mock.Mock(return_value=mock.Mock()))


@ -49,7 +49,13 @@ class LibvirtScaleIOVolumeDriverTestCase(
conn = {'data': mock.sentinel.conn_data}
sio.disconnect_volume(conn, mock.sentinel.instance)
sio.connector.disconnect_volume.assert_called_once_with(
-mock.sentinel.conn_data, None)
+mock.sentinel.conn_data, None, force=False)
# Verify force=True
sio.connector.disconnect_volume.reset_mock()
sio.disconnect_volume(conn, mock.sentinel.instance, force=True)
sio.connector.disconnect_volume.assert_called_once_with(
mock.sentinel.conn_data, None, force=True)
@mock.patch('os_brick.initiator.connector.InitiatorConnector.factory',
new=mock.Mock(return_value=mock.Mock()))


@ -53,9 +53,11 @@ class MockStorPoolConnector(object):
}
return {'type': 'block', 'path': test_attached[v]['path']}
-def disconnect_volume(self, connection_info, device_info):
+def disconnect_volume(self, connection_info, device_info, **kwargs):
self.inst.assertIn('client_id', connection_info)
self.inst.assertIn('volume', connection_info)
self.inst.assertIn('force', kwargs)
self.inst.assertEqual(self.inst.force, kwargs.get('force'))
v = connection_info['volume']
if v not in test_attached:
@ -86,6 +88,11 @@ class MockStorPoolInitiator(object):
class LibvirtStorPoolVolumeDriverTestCase(
test_volume.LibvirtVolumeBaseTestCase):
def setUp(self):
super().setUp()
# This is for testing the force flag of disconnect_volume()
self.force = False
def mock_storpool(f):
def _config_inner_inner1(inst, *args, **kwargs):
@mock.patch(
@ -175,3 +182,10 @@ class LibvirtStorPoolVolumeDriverTestCase(
libvirt_driver.disconnect_volume(ci_2, mock.sentinel.instance)
self.assertDictEqual({}, test_attached)
# Connect the volume again so we can detach it again
libvirt_driver.connect_volume(ci_2, mock.sentinel.instance)
# Verify force=True
self.force = True
libvirt_driver.disconnect_volume(
ci_2, mock.sentinel.instance, force=True)


@ -95,7 +95,13 @@ class LibvirtVZStorageTestCase(test_volume.LibvirtVolumeBaseTestCase):
conn = {'data': mock.sentinel.conn_data}
drv.disconnect_volume(conn, mock.sentinel.instance)
drv.connector.disconnect_volume.assert_called_once_with(
-mock.sentinel.conn_data, None)
+mock.sentinel.conn_data, None, force=False)
# Verify force=True
drv.connector.disconnect_volume.reset_mock()
drv.disconnect_volume(conn, mock.sentinel.instance, force=True)
drv.connector.disconnect_volume.assert_called_once_with(
mock.sentinel.conn_data, None, force=True)
def test_libvirt_vzstorage_driver_get_config(self):
libvirt_driver = vzstorage.LibvirtVZStorageVolumeDriver(self.fake_host)


@ -747,7 +747,7 @@ class VMOps(object):
# should be disconnected even if the VM doesn't exist anymore,
# so they are not leaked.
self.unplug_vifs(instance, network_info)
-self._volumeops.disconnect_volumes(block_device_info)
+self._volumeops.disconnect_volumes(block_device_info, force=True)
if destroy_disks:
self._delete_disk_files(instance_name)


@ -59,10 +59,10 @@ class VolumeOps(object):
for vol in volumes:
self.attach_volume(vol['connection_info'], instance_name)
-def disconnect_volumes(self, block_device_info):
+def disconnect_volumes(self, block_device_info, force=False):
mapping = driver.block_device_info_get_mapping(block_device_info)
for vol in mapping:
-self.disconnect_volume(vol['connection_info'])
+self.disconnect_volume(vol['connection_info'], force=force)
def attach_volume(self, connection_info, instance_name,
disk_bus=constants.CTRL_TYPE_SCSI):
@ -116,9 +116,9 @@ class VolumeOps(object):
volume_driver.set_disk_qos_specs(connection_info,
qos_specs)
-def disconnect_volume(self, connection_info):
+def disconnect_volume(self, connection_info, force=False):
volume_driver = self._get_volume_driver(connection_info)
-volume_driver.disconnect_volume(connection_info)
+volume_driver.disconnect_volume(connection_info, force=force)
def detach_volume(self, connection_info, instance_name):
LOG.debug("Detaching volume: %(connection_info)s "
@ -231,8 +231,8 @@ class BaseVolumeDriver(object):
def connect_volume(self, connection_info):
return self._connector.connect_volume(connection_info['data'])
-def disconnect_volume(self, connection_info):
-self._connector.disconnect_volume(connection_info['data'])
+def disconnect_volume(self, connection_info, force=False):
+self._connector.disconnect_volume(connection_info['data'], force=force)
def get_disk_resource_path(self, connection_info):
disk_paths = self._connector.get_volume_paths(connection_info['data'])


@ -1657,7 +1657,7 @@ class LibvirtDriver(driver.ComputeDriver):
try:
self._disconnect_volume(
context, connection_info, instance,
-destroy_secrets=destroy_secrets)
+destroy_secrets=destroy_secrets, force=True)
except Exception as exc:
with excutils.save_and_reraise_exception() as ctxt:
if cleanup_instance_disks:
@ -1974,7 +1974,7 @@ class LibvirtDriver(driver.ComputeDriver):
return (False if connection_count > 1 else True)
def _disconnect_volume(self, context, connection_info, instance,
-encryption=None, destroy_secrets=True):
+encryption=None, destroy_secrets=True, force=False):
self._detach_encryptor(
context,
connection_info,
@ -1986,7 +1986,8 @@ class LibvirtDriver(driver.ComputeDriver):
multiattach = connection_info.get('multiattach', False)
if self._should_disconnect_target(
context, instance, multiattach, vol_driver, volume_id):
-vol_driver.disconnect_volume(connection_info, instance)
+vol_driver.disconnect_volume(
+    connection_info, instance, force=force)
else:
LOG.info('Detected multiple connections on this host for '
'volume: %(volume)s, skipping target disconnect.',


@ -59,7 +59,7 @@ class LibvirtFibreChannelVolumeDriver(libvirt_volume.LibvirtBaseVolumeDriver):
connection_info['data']['multipath_id'] = \
device_info['multipath_id']
-def disconnect_volume(self, connection_info, instance):
+def disconnect_volume(self, connection_info, instance, force=False):
"""Detach the volume from instance_name."""
LOG.debug("calling os-brick to detach FC Volume", instance=instance)
@ -69,11 +69,12 @@ class LibvirtFibreChannelVolumeDriver(libvirt_volume.LibvirtBaseVolumeDriver):
# the 2nd param of disconnect_volume and be consistent
# with the rest of the connectors.
self.connector.disconnect_volume(connection_info['data'],
-connection_info['data'])
+connection_info['data'],
+force=force)
LOG.debug("Disconnected FC Volume", instance=instance)
super(LibvirtFibreChannelVolumeDriver,
-self).disconnect_volume(connection_info, instance)
+self).disconnect_volume(connection_info, instance, force=force)
def extend_volume(self, connection_info, instance, requested_size):
"""Extend the volume."""


@ -116,7 +116,7 @@ class LibvirtMountedFileSystemVolumeDriver(LibvirtBaseFileSystemVolumeDriver,
connection_info['data']['device_path'] = \
self._get_device_path(connection_info)
-def disconnect_volume(self, connection_info, instance):
+def disconnect_volume(self, connection_info, instance, force=False):
"""Disconnect the volume."""
vol_name = connection_info['data']['name']
mountpoint = self._get_mount_path(connection_info)


@ -66,19 +66,20 @@ class LibvirtISCSIVolumeDriver(libvirt_volume.LibvirtBaseVolumeDriver):
connection_info['data']['device_path'] = device_info['path']
-def disconnect_volume(self, connection_info, instance):
+def disconnect_volume(self, connection_info, instance, force=False):
"""Detach the volume from instance_name."""
LOG.debug("calling os-brick to detach iSCSI Volume", instance=instance)
try:
-self.connector.disconnect_volume(connection_info['data'], None)
+self.connector.disconnect_volume(
+    connection_info['data'], None, force=force)
except os_brick_exception.VolumeDeviceNotFound as exc:
LOG.warning('Ignoring VolumeDeviceNotFound: %s', exc)
return
LOG.debug("Disconnected iSCSI Volume", instance=instance)
super(LibvirtISCSIVolumeDriver,
-self).disconnect_volume(connection_info, instance)
+self).disconnect_volume(connection_info, instance, force=force)
def extend_volume(self, connection_info, instance, requested_size):
"""Extend the volume."""


@ -42,14 +42,15 @@ class LibvirtLightOSVolumeDriver(libvirt_volume.LibvirtVolumeDriver):
LOG.debug("Connecting NVMe volume with device_info %s", device_info)
connection_info['data']['device_path'] = device_info['path']
-def disconnect_volume(self, connection_info, instance):
+def disconnect_volume(self, connection_info, instance, force=False):
"""Detach the volume from the instance."""
LOG.debug("Disconnecting NVMe disk. instance:%s, volume_id:%s",
connection_info.get("instance", ""),
connection_info.get("volume_id", ""))
-self.connector.disconnect_volume(connection_info['data'], None)
+self.connector.disconnect_volume(
+    connection_info['data'], None, force=force)
super(LibvirtLightOSVolumeDriver, self).disconnect_volume(
-connection_info, instance)
+connection_info, instance, force=force)
def extend_volume(self, connection_info, instance, requested_size=None):
"""Extend the volume."""


@ -45,13 +45,13 @@ class LibvirtNVMEVolumeDriver(libvirt_volume.LibvirtVolumeDriver):
connection_info['data']['device_path'] = device_info['path']
-def disconnect_volume(self, connection_info, instance):
+def disconnect_volume(self, connection_info, instance, force=False):
"""Detach the volume from the instance."""
LOG.debug("Disconnecting NVMe disk", instance=instance)
self.connector.disconnect_volume(
connection_info['data'], None)
connection_info['data'], None, force=force)
super(LibvirtNVMEVolumeDriver,
self).disconnect_volume(connection_info, instance)
self).disconnect_volume(connection_info, instance, force=force)
def extend_volume(self, connection_info, instance, requested_size):
"""Extend the volume."""


@@ -189,7 +189,7 @@ class LibvirtQuobyteVolumeDriver(fs.LibvirtBaseFileSystemVolumeDriver):
                       instance=instance)
 
     @utils.synchronized('connect_qb_volume')
-    def disconnect_volume(self, connection_info, instance):
+    def disconnect_volume(self, connection_info, instance, force=False):
         """Disconnect the volume."""
         mount_path = self._get_mount_path(connection_info)


@@ -57,12 +57,13 @@ class LibvirtScaleIOVolumeDriver(libvirt_volume.LibvirtBaseVolumeDriver):
                   instance=instance)
         connection_info['data']['device_path'] = device_info['path']
 
-    def disconnect_volume(self, connection_info, instance):
-        self.connector.disconnect_volume(connection_info['data'], None)
+    def disconnect_volume(self, connection_info, instance, force=False):
+        self.connector.disconnect_volume(
+            connection_info['data'], None, force=force)
         LOG.debug("Disconnected volume", instance=instance)
 
         super(LibvirtScaleIOVolumeDriver, self).disconnect_volume(
-            connection_info, instance)
+            connection_info, instance, force=force)
 
     def extend_volume(self, connection_info, instance, requested_size):
         LOG.debug("calling os-brick to extend ScaleIO Volume",


@@ -52,7 +52,7 @@ class LibvirtSMBFSVolumeDriver(fs.LibvirtBaseFileSystemVolumeDriver):
         device_path = self._get_device_path(connection_info)
         connection_info['data']['device_path'] = device_path
 
-    def disconnect_volume(self, connection_info, instance):
+    def disconnect_volume(self, connection_info, instance, force=False):
         """Disconnect the volume."""
         smbfs_share = connection_info['data']['export']
         mount_path = self._get_mount_path(connection_info)


@@ -47,10 +47,11 @@ class LibvirtStorPoolVolumeDriver(libvirt_volume.LibvirtVolumeDriver):
                   device_info, instance=instance)
         connection_info['data']['device_path'] = device_info['path']
 
-    def disconnect_volume(self, connection_info, instance):
+    def disconnect_volume(self, connection_info, instance, force=False):
         LOG.debug("Detaching StorPool volume %s",
                   connection_info['data']['volume'], instance=instance)
-        self.connector.disconnect_volume(connection_info['data'], None)
+        self.connector.disconnect_volume(
+            connection_info['data'], None, force=force)
         LOG.debug("Detached StorPool volume", instance=instance)
 
     def extend_volume(self, connection_info, instance, requested_size):


@@ -135,7 +135,7 @@ class LibvirtBaseVolumeDriver(object):
         """Connect the volume."""
         pass
 
-    def disconnect_volume(self, connection_info, instance):
+    def disconnect_volume(self, connection_info, instance, force=False):
         """Disconnect the volume."""
         pass


@@ -126,9 +126,10 @@ class LibvirtVZStorageVolumeDriver(fs.LibvirtBaseFileSystemVolumeDriver):
         return _connect_volume(connection_info, instance)
 
-    def disconnect_volume(self, connection_info, instance):
+    def disconnect_volume(self, connection_info, instance, force=False):
         """Detach the volume from instance_name."""
         LOG.debug("calling os-brick to detach Vzstorage Volume",
                   instance=instance)
-        self.connector.disconnect_volume(connection_info['data'], None)
+        self.connector.disconnect_volume(
+            connection_info['data'], None, force=force)
         LOG.debug("Disconnected Vzstorage Volume", instance=instance)


@@ -0,0 +1,11 @@
+upgrade:
+  - |
+    Configuration of service user tokens is now **required** for all Nova
+    services to ensure security of block-storage volume data.
+
+    All Nova configuration files must configure the ``[service_user]``
+    section as described in the `documentation`__.
+
+    See https://bugs.launchpad.net/nova/+bug/2004555 for more details.
+
+    __ https://docs.openstack.org/nova/latest/admin/configuration/service-user-token.html
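The release note above requires a ``[service_user]`` section in each Nova
service's configuration file. A minimal sketch of what that section can look
like in ``nova.conf``, per the linked documentation; the Keystone endpoint,
domain names, and credentials below are deployment-specific placeholders:

```ini
[service_user]
send_service_user_token = true
auth_type = password
auth_url = https://keystone.example.com/identity
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = secretservice
```

With ``send_service_user_token = true``, Nova attaches a service token to its
REST calls to other services, so Keystone can still authenticate a request via
the service token if the regular user token has expired mid-operation.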