From 5751824ca34b906573618a3cd6c5f78aa4096b8a Mon Sep 17 00:00:00 2001
From: Takashi NATSUME
Date: Wed, 16 Jan 2019 14:52:35 +0900
Subject: [PATCH] Fix warnings in the document generation

* Fix warnings when executing 'tox -e docs'.
* Make the formatting of REST API expressions consistent.
* Remove 'json' from some code-blocks because their contents include
  comments and are therefore not valid JSON.
* Add the '-W' option to the sphinx-build command for the docs target.

Change-Id: Iaaa4e2444030622539a61055e114a649466be497
---
 .../approved/numa-aware-live-migration.rst    | 23 ++++----
 .../approved/show-server-numa-topology.rst    | 56 +++++++++----------
 ...r-extra-spec-image-property-validation.rst |  2 +-
 specs/stein/implemented/show-server-group.rst | 21 +++----
 tox.ini                                       |  2 +-
 5 files changed, 51 insertions(+), 53 deletions(-)

diff --git a/specs/stein/approved/numa-aware-live-migration.rst b/specs/stein/approved/numa-aware-live-migration.rst
index c7208d80a..8e31047d1 100644
--- a/specs/stein/approved/numa-aware-live-migration.rst
+++ b/specs/stein/approved/numa-aware-live-migration.rst
@@ -97,10 +97,10 @@ Resource claims
 
 Let's address the resource claims aspect first. An effort has begun to support
 NUMA resource providers in placement [3]_ and to standardize CPU resource
-tracking [8]_. However, placement can only track inventories and allocations of
+tracking [4]_. However, placement can only track inventories and allocations of
 quantities of resources. It does not track which specific resources are used.
 Specificity is needed for NUMA live migration. Consider an instance that uses
-4 dedicated CPUs in a future where the standard CPU resource tracking spec [8]_
+4 dedicated CPUs in a future where the standard CPU resource tracking spec [4]_
 has been implemented. During live migration, the scheduler claims those 4 CPUs
 in placement on the destination. However, we need to prevent other instances
 from using those specific CPUs. Therefore, in addition to claiming quantities
@@ -394,7 +394,7 @@ Primary assignee:
 Work Items
 ----------
 
-* Fail live migration of instances with NUMA topology [9]_ until this spec is
+* Fail live migration of instances with NUMA topology [5]_ until this spec is
   fully implemented.
 * Add NUMA Nova objects
 * Add claim context to live migration
@@ -411,13 +411,13 @@ Testing
 =======
 
 The libvirt/qemu driver used in the gate does not currently support NUMA
-features (though work is in progress [4]_). Therefore, testing NUMA aware
+features (though work is in progress [6]_). Therefore, testing NUMA aware
 live migration in the upstream gate would require nested virt. In addition,
 the only assertable outcome of a NUMA live migration test (if it ever becomes
 possible) would be that the live migration succeeded. Examining the instance
 XML to assert things about its NUMA affinity or CPU pin mapping is explicitly
 out of tempest's scope. For these reasons, NUMA aware live migration is best
-tested in third party CI [5]_ or other downstream test scenarios [6]_.
+tested in third party CI [7]_ or other downstream test scenarios [8]_.
 
 Documentation Impact
 ====================
@@ -432,12 +432,13 @@ References
 .. [1] https://bugs.launchpad.net/nova/+bug/1496135
 .. [2] https://bugs.launchpad.net/nova/+bug/1607996
 .. [3] https://review.openstack.org/#/c/552924/
-.. [4] https://review.openstack.org/#/c/533077/
-.. [5] https://github.com/openstack/intel-nfv-ci-tests
-.. [6] https://review.rdoproject.org/r/gitweb?p=openstack/whitebox-tempest-plugin.git
-.. [7] https://review.openstack.org/#/c/244489/
-.. [8] https://review.openstack.org/#/c/555081/
-.. [9] https://review.openstack.org/#/c/611088/
+.. [4] https://review.openstack.org/#/c/555081/
+.. [5] https://review.openstack.org/#/c/611088/
+.. [6] https://review.openstack.org/#/c/533077/
+.. [7] https://github.com/openstack/intel-nfv-ci-tests
+.. [8] https://review.rdoproject.org/r/gitweb?p=openstack/whitebox-tempest-plugin.git
+
+[9] https://review.openstack.org/#/c/244489/
 
 History
 =======
diff --git a/specs/stein/approved/show-server-numa-topology.rst b/specs/stein/approved/show-server-numa-topology.rst
index 1e3d2b318..b9689b36e 100644
--- a/specs/stein/approved/show-server-numa-topology.rst
+++ b/specs/stein/approved/show-server-numa-topology.rst
@@ -8,7 +8,7 @@ Show server numa topology
 =========================
 
-Add NUMA into new sub-resource``GET /servers/{server_id}/topology`` API.
+Add NUMA into new sub-resource ``GET /servers/{server_id}/topology`` API.
 
 https://blueprints.launchpad.net/nova/+spec/show-server-numa-topology
 
@@ -92,9 +92,7 @@ REST API impact
 API ``GET /servers/{server_id}/topology`` will show NUMA information with a
 new microversion.
 
-The returned information for NUMA topology:
-
-.. code-block:: json
+The returned information for NUMA topology::
 
   {
     # overall policy: TOPOLOGY % 'index
@@ -132,31 +130,33 @@ Security impact
 
 * Add new ``topology`` policy, admin only by default:
 
-TOPOLOGY = 'os_compute_api:servers:topology:%s'
+  .. code-block:: python
 
-server_topology_policies = [
-    policy.DocumentedRuleDefault(
-        BASE_POLICY_NAME,
-        base.RULE_ADMIN_API,
-        "Show the topology data for a server",
-        [
-            {
-                'method': 'GET',
-                'path': '/servers/{server_id}/topology'
-            }
-        ]),
-    policy.DocumentedRuleDefault(
-        # control host numa node and cpu pin information
-        TOPOLOGY % 'index:host_info',
-        base.RULE_ADMIN_API,
-        "List all servers with detailed information",
-        [
-            {
-                'method': 'GET',
-                'path': '/servers/{server_id}/topology'
-            }
-        ]),
-]
+     TOPOLOGY = 'os_compute_api:servers:topology:%s'
+
+     server_topology_policies = [
+         policy.DocumentedRuleDefault(
+             BASE_POLICY_NAME,
+             base.RULE_ADMIN_API,
+             "Show the topology data for a server",
+             [
+                 {
+                     'method': 'GET',
+                     'path': '/servers/{server_id}/topology'
+                 }
+             ]),
+         policy.DocumentedRuleDefault(
+             # control host numa node and cpu pin information
+             TOPOLOGY % 'index:host_info',
+             base.RULE_ADMIN_API,
+             "List all servers with detailed information",
+             [
+                 {
+                     'method': 'GET',
+                     'path': '/servers/{server_id}/topology'
+                 }
+             ]),
+     ]
 
 
 Notifications impact
diff --git a/specs/stein/implemented/flavor-extra-spec-image-property-validation.rst b/specs/stein/implemented/flavor-extra-spec-image-property-validation.rst
index be73e24b4..00bd7779a 100644
--- a/specs/stein/implemented/flavor-extra-spec-image-property-validation.rst
+++ b/specs/stein/implemented/flavor-extra-spec-image-property-validation.rst
@@ -51,7 +51,7 @@ Examples of validations to be added [1]_:
 * Validate the realtime mask.
 * Validate the number of serial ports.
 * Validate the cpu topology constraints.
-* Validate the ``quota:*``settings (that are not virt driver specific) in the
+* Validate the ``quota:*`` settings (that are not virt driver specific) in the
   flavor.
 
 Alternatives
diff --git a/specs/stein/implemented/show-server-group.rst b/specs/stein/implemented/show-server-group.rst
index 9acfe1c10..9ed2406c9 100644
--- a/specs/stein/implemented/show-server-group.rst
+++ b/specs/stein/implemented/show-server-group.rst
@@ -17,7 +17,7 @@ Problem description
 
 Currently you had to loop over all groups to find the group the server
 belongs to. This spec tries to address this by proposing showing the server
-group information in API `GET /servers/{server_id}`.
+group information in API ``GET /servers/{server_id}``.
 
 Use Cases
 ---------
@@ -41,11 +41,11 @@ needs another DB query.
 
 Alternatives
 ------------
 
-* One alternative is support the server groups filter by server UUID. Like
-  "GET /os-server-groups?server=".
+* One alternative is to support the server groups filter by server UUID, like
+  ``GET /os-server-groups?server=``.
 
 * Another alternative to support the server group query is following API:
-  "GET /servers/{server_id}/server_groups".
+  ``GET /servers/{server_id}/server_groups``.
 
 Data model impact
 -----------------
@@ -57,11 +57,9 @@ REST API impact
 ---------------
 
-Allows the `GET /servers/{server_id}` API to show server group's UUID.
-"PUT /servers/{server_id}" and REBUILD API "POST /servers/{server_id}/action"
-also response same information.
-
-.. highlight:: json
+Allows the ``GET /servers/{server_id}`` API to show the server group's UUID.
+``PUT /servers/{server_id}`` and the REBUILD API
+``POST /servers/{server_id}/action`` also return the same information.
 
 The returned information for server group::
 
   {
@@ -94,7 +92,7 @@ Performance Impact
 ------------------
 
 * Need another DB query retrieve the server group UUID. To reduce the
-  perfermance impact for batch API call, "GET /servers/detail" won't
+  performance impact for batch API calls, ``GET /servers/detail`` won't
   return server group information.
 
 Other deployer impact
@@ -147,7 +145,7 @@ Documentation Impact
 References
 ==========
 
-* Stein PTG discussion:https://etherpad.openstack.org/p/nova-ptg-stein
+* Stein PTG discussion: https://etherpad.openstack.org/p/nova-ptg-stein
 
 
 History
@@ -158,7 +156,6 @@ History
 
    * - Release Name
      - Version
-
    * - Stein
      - First Version
 
diff --git a/tox.ini b/tox.ini
index ac424562d..2edbb2dc2 100644
--- a/tox.ini
+++ b/tox.ini
@@ -14,7 +14,7 @@ whitelist_externals = find
 commands = {posargs}
 
 [testenv:docs]
-commands = sphinx-build -b html doc/source doc/build/html
+commands = sphinx-build -W -b html doc/source doc/build/html
 
 [testenv:pep8]
 deps =
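
A note on the 'json' code-block removals above: JSON has no comment syntax, so a commented example marked ``code-block:: json`` cannot be lexed as JSON, which is what triggers the Sphinx highlighting warning that ``-W`` would turn into a build failure. A minimal Python sketch, illustrative only and not part of the patch (the ``"nodes"`` key is a made-up stand-in for the real response body):

```python
import json

# A snippet with a comment, like the NUMA topology example in the spec,
# is rejected by a strict JSON parser.
commented = """
{
    # overall policy
    "nodes": []
}
"""

try:
    json.loads(commented)
    is_valid = True
except json.JSONDecodeError:
    is_valid = False

print("commented snippet is valid JSON:", is_valid)  # False

# The same structure parses once the comment is removed.
print(json.loads('{"nodes": []}'))  # {'nodes': []}
```

Dropping the ``json`` language (using a plain ``::`` literal block instead) keeps the commented examples readable without claiming they are parseable JSON.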