Follow up for unified limits: PCPU and documentation

This addresses comments from code review to add handling of PCPU during
the migration/copy of limits from the Nova database to Keystone. In
legacy quotas, there is no settable quota limit for PCPU, so the limit
for VCPU is used for PCPU. With unified limits, PCPU will have its own
quota limit, so the automated migration command will simply create a
dedicated limit for PCPU with the same value as the limit for VCPU.

On the docs side, this adds more detail about the token authorization
settings needed to use the nova-manage limits migrate_to_unified_limits
CLI command and documents more OSC limit commands like show and delete.

Related to blueprint unified-limits-nova-tool-and-docs

Change-Id: Ifdb1691d7b25d28216d26479418ea323476fee1a
melanie witt 2023-08-31 23:28:10 +00:00
parent 8f0817f078
commit d42fe462be
5 changed files with 173 additions and 18 deletions


@ -135,6 +135,12 @@ To list all default quotas for a project, run:
This lists default quotas for all services and not just nova.
To show details about a default limit, run:
.. code-block:: console
$ openstack registered limit show <registered-limit-id>
To create a default quota limit, run:
.. code-block:: console
@ -149,12 +155,18 @@ To create a default quota limit, run:
.. _Keystone tokens documentation: https://docs.openstack.org/keystone/latest/admin/tokens-overview.html#operation_create_system_token
To update a default value, run:
To update a default quota value, run:
.. code-block:: console
$ openstack registered limit set --default-limit <value> <registered-limit-id>
To delete a default quota limit, run:
.. code-block:: console
$ openstack registered limit delete <registered-limit-id> [<registered-limit-id> ...]
View and update quota values for a project
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -175,18 +187,36 @@ token and run:
$ openstack limit list
To show details about a quota limit, run:
.. code-block:: console
$ openstack limit show <limit-id>
To create a quota limit for a project, run:
.. code-block:: console
$ openstack limit create --service nova --project <project> --resource-limit <value> <resource-name>
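For example, to give a project a dedicated CPU (``class:PCPU``) limit of 8 (the value and project are illustrative):
.. code-block:: console
$ openstack limit create --service nova --project <project-id> --resource-limit 8 class:PCPU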
To update quotas for a project, run:
.. code-block:: console
$ openstack limit set --resource-limit <value> <limit-id>
To delete quotas for a project, run:
.. code-block:: console
$ openstack limit delete <limit-id> [<limit-id> ...]
Migration to unified limits quotas
----------------------------------
There is a `nova-manage limits migrate_to_unified_limits`_ command available
to help with moving from legacy Nova database quotas to Keystone unified limits
There is a `nova-manage limits migrate_to_unified_limits`_ command available to
help with moving from legacy Nova database quotas to Keystone unified limits
quotas. The command will read quota limits from the Nova database and call the
Keystone API to create the corresponding unified limits. Per-user quota limits
will **not** be copied into Keystone because per-user quotas are not supported


@ -1778,6 +1778,46 @@ This command is useful for operators to migrate from legacy quotas to unified
limits. Limits are migrated by reading them from the Nova database and creating
corresponding unified limits in Keystone using the Keystone API.
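A typical invocation might look like the following (the option spellings here are assumed; see the options documented below for the authoritative list):
.. code-block:: console
$ nova-manage limits migrate_to_unified_limits --project-id <project-id> --verbose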
The Nova configuration file used by ``nova-manage`` must have a ``[keystone]``
section that contains authentication settings in order for the Keystone API
calls to succeed. As an example:
.. code-block:: ini
[keystone]
region_name = RegionOne
user_domain_name = Default
auth_url = http://127.0.0.1/identity
auth_type = password
username = admin
password = <password>
system_scope = all
Under the default `Keystone policy configuration`_, access to create, update, and
delete operations in the `unified limits API`_ is restricted to callers with
`system-scoped authorization tokens`_. The ``system_scope = all`` setting
requests a system-scoped token. You will need to ensure that the user configured
under ``[keystone]`` has the necessary role and scope.
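One way to grant such authorization, assuming the ``admin`` user and an ``admin`` role as in the example configuration above (adjust names for your deployment), is:
.. code-block:: console
$ openstack role add --user admin --user-domain Default --system all admin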
.. warning::
The ``limits migrate_to_unified_limits`` command will create limits only for
resources that exist in the legacy quota system. Any resource that does not
have a unified limit in Keystone will use a quota limit of **0**.
For resource classes that are allocated by the placement service and have no
default limit set, you will need to create default limits manually. The most
common example is class:DISK_GB. All Nova API requests that need to allocate
DISK_GB will fail quota enforcement until a default limit for it is set in
Keystone.
See the :doc:`unified limits documentation
</admin/unified-limits>` about creating limits using the OpenStackClient.
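As an illustration, a default limit for ``class:DISK_GB`` could be created with a command like the following (the value shown is arbitrary):
.. code-block:: console
$ openstack registered limit create --service nova --default-limit 1000 class:DISK_GB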
.. _Keystone policy configuration: https://docs.openstack.org/keystone/latest/configuration/policy.html
.. _unified limits API: https://docs.openstack.org/api-ref/identity/v3/index.html#unified-limits
.. _system-scoped authorization tokens: https://docs.openstack.org/keystone/latest/admin/tokens-overview.html#system-scoped-tokens
.. versionadded:: 28.0.0 (2023.2 Bobcat)
.. rubric:: Options


@ -20,6 +20,11 @@ API`_.
Types of quota
--------------
Unified limit resource names for resources that are tracked as `resource
classes`_ in the placement service use the ``class:`` prefix followed by the
name of the resource class. For example: class:VCPU, class:PCPU,
class:MEMORY_MB, class:DISK_GB, class:VGPU.
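The names of the available resource classes can be listed with the placement plugin for the OpenStackClient (``osc-placement``), assuming it is installed:
.. code-block:: console
$ openstack resource class list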
.. list-table::
:header-rows: 1
:widths: 10 40
@ -27,7 +32,9 @@ Types of quota
* - Quota name
- Description
* - class:VCPU
- Number of instance cores (VCPUs) allowed per project.
- Number of shared CPU cores (VCPUs) allowed per project.
* - class:PCPU
- Number of dedicated CPU cores (PCPUs) allowed per project.
* - servers
- Number of instances allowed per project.
* - server_key_pairs
@ -42,9 +49,11 @@ Types of quota
- Number of servers per server group.
* - class:DISK_GB
- Gigabytes of instance disk allowed per project.
* - class:<any resource in the placement service>
- Any resource in the placement service that is allocated by Nova can have
a quota limit specified for it. Example: class:VGPU.
* - class:<any resource class in the placement service>
- Any resource class in the placement service that is allocated by Nova
can have a quota limit specified for it. Example: class:VGPU.
.. _resource classes: https://docs.openstack.org/os-resource-classes/latest
The following quotas were previously available but were removed in microversion
2.36 as they proxied information available from the networking service.
@ -125,6 +134,28 @@ For example:
| 17c4552c5aad4afca4813f37530fc897 | 8b22bf8a66fa4524a522b2a21865bbf2 | server_group_members | 10 | None | None |
+----------------------------------+----------------------------------+------------------------------------+---------------+-------------+-----------+
To show details about a default limit, run:
.. code-block:: console
$ openstack registered limit show <registered-limit-id>
For example:
.. code-block:: console
$ openstack registered limit show 8a658096236549788e61f4fcbd5a4a12
+---------------+----------------------------------+
| Field | Value |
+---------------+----------------------------------+
| default_limit | 20 |
| description | None |
| id | 8a658096236549788e61f4fcbd5a4a12 |
| region_id | None |
| resource_name | class:VCPU |
| service_id | 8b22bf8a66fa4524a522b2a21865bbf2 |
+---------------+----------------------------------+
To list the currently set quota values for your project, run:
.. code-block:: console
@ -141,3 +172,27 @@ For example:
+----------------------------------+----------------------------------+----------------------------------+---------------+----------------+-------------+-----------+
| 8b3364b2241e4090aaaa49355c7a5b56 | 5cd3281595a9497ba87209701cd9f3f2 | 8b22bf8a66fa4524a522b2a21865bbf2 | class:VCPU | 5 | None | None |
+----------------------------------+----------------------------------+----------------------------------+---------------+----------------+-------------+-----------+
To show details about a quota limit, run:
.. code-block:: console
$ openstack limit show <limit-id>
For example:
.. code-block:: console
$ openstack limit show 8b3364b2241e4090aaaa49355c7a5b56
+----------------+----------------------------------+
| Field | Value |
+----------------+----------------------------------+
| description | None |
| domain_id | None |
| id | 8b3364b2241e4090aaaa49355c7a5b56 |
| project_id | 5cd3281595a9497ba87209701cd9f3f2 |
| region_id | None |
| resource_limit | 5 |
| resource_name | class:VCPU |
| service_id | 8b22bf8a66fa4524a522b2a21865bbf2 |
+----------------+----------------------------------+


@ -3384,6 +3384,17 @@ class LimitsCommands():
zip(unified_to_legacy_names.values(),
unified_to_legacy_names.keys()))
# Handle the special case of PCPU. With legacy quotas, there is no
# dedicated quota limit for PCPUs, so they share the quota limit for
# VCPUs: 'cores'. With unified limits, class:PCPU has its own dedicated
# quota limit, so we will just mirror the limit for class:VCPU and
# create a limit with the same value for class:PCPU.
if 'cores' in legacy_defaults:
# Just make up a dummy legacy resource 'pcores' for this.
legacy_defaults['pcores'] = legacy_defaults['cores']
unified_to_legacy_names['class:PCPU'] = 'pcores'
legacy_to_unified_names['pcores'] = 'class:PCPU'
# For auth, a section for [keystone] is required in the config:
#
# [keystone]
@ -3448,6 +3459,11 @@ class LimitsCommands():
msg = f'Found project limits in the database: {legacy_projects} ...'
output(_(msg))
# Handle the special case of PCPU again for project limits.
if 'cores' in legacy_projects:
# Just make up a dummy legacy resource 'pcores' for this.
legacy_projects['pcores'] = legacy_projects['cores']
# Retrieve existing limits from Keystone.
project_limits = keystone_api.limits(
project_id=project_id, region_id=region_id)


@ -2443,9 +2443,10 @@ class TestNovaManageLimits(test.TestCase):
mock_sdk.return_value.create_limit.side_effect = (
test.TestingException('oops!'))
# Create a couple of project limits.
# Create a few project limits.
objects.Quotas.create_limit(self.ctxt, uuids.project, 'ram', 8192)
objects.Quotas.create_limit(self.ctxt, uuids.project, 'instances', 25)
objects.Quotas.create_limit(self.ctxt, uuids.project, 'cores', 22)
return_code = self.cli.migrate_to_unified_limits(
project_id=uuids.project, verbose=True)
@ -2458,10 +2459,16 @@ class TestNovaManageLimits(test.TestCase):
# cores, ram, metadata_items, injected_files,
# injected_file_content_bytes, injected_file_path_length, key_pairs,
# server_groups, and server_group_members.
#
# And there is 1 default limit value automatically generated for PCPU
# based on 'cores'.
self.assertEqual(
10, mock_sdk.return_value.create_registered_limit.call_count)
11, mock_sdk.return_value.create_registered_limit.call_count)
self.assertEqual(2, mock_sdk.return_value.create_limit.call_count)
# We expect that we attempted to create 4 project limits:
# class:MEMORY_MB, servers, and class:VCPU = 3 + special case
# class:PCPU = 4.
self.assertEqual(4, mock_sdk.return_value.create_limit.call_count)
def test_migrate_to_unified_limits_already_exists(self):
# Create a couple of unified limits to already exist.
@ -2478,15 +2485,18 @@ class TestNovaManageLimits(test.TestCase):
self.cli.migrate_to_unified_limits(
project_id=uuids.project, verbose=True)
# There are 10 default limit values in the config options, so because a
# limit for 'servers' already exists, we should have only created 9.
# There are 10 default limit values in the config options +
# 1 special case for PCPU which will be added based on VCPU = 11.
# Because a limit for 'servers' already exists, we should have only
# created 10.
mock_sdk = self.ul_api.mock_sdk_adapter
self.assertEqual(
9, mock_sdk.create_registered_limit.call_count)
10, mock_sdk.create_registered_limit.call_count)
# There already exists a project limit for 'class:VCPU', so we should
# have created only 1 project limit.
self.assertEqual(1, mock_sdk.create_limit.call_count)
# have created only 2 project limits. One for 'servers' and one for
# special case 'class:PCPU' generated from VCPU.
self.assertEqual(2, mock_sdk.create_limit.call_count)
def test_migrate_to_unified_limits(self):
# Set some defaults using the config options.
@ -2534,10 +2544,14 @@ class TestNovaManageLimits(test.TestCase):
self.cli.migrate_to_unified_limits(
project_id=uuids.project, verbose=True)
# There should be 10 registered (default) limits now.
# There are 10 default limit values in the config options +
# 1 special case for PCPU which will be added based on VCPU = 11.
#
# There should be 11 registered (default) limits now.
expected_registered_limits = {
'servers': 5,
'class:VCPU': 10,
'class:PCPU': 10,
'class:MEMORY_MB': 4096,
'server_metadata_items': 64,
'server_injected_files': 3,
@ -2549,7 +2563,7 @@ class TestNovaManageLimits(test.TestCase):
}
registered_limits = self.ul_api.registered_limits()
self.assertEqual(10, len(registered_limits))
self.assertEqual(11, len(registered_limits))
for rl in registered_limits:
self.assertEqual(
expected_registered_limits[rl.resource_name], rl.default_limit)
@ -2581,7 +2595,7 @@ class TestNovaManageLimits(test.TestCase):
region_registered_limits = self.ul_api.registered_limits(
region_id=uuids.region)
self.assertEqual(10, len(region_registered_limits))
self.assertEqual(11, len(region_registered_limits))
for rl in region_registered_limits:
self.assertEqual(
expected_registered_limits[rl.resource_name], rl.default_limit)