Fix sphinx-lint errors in docs and add CI

This change mainly fixes incorrect use of backticks, but it also
addresses some other minor issues such as unbalanced backticks,
incorrect spacing, and a missing trailing _ in links.

This change adds a tox target to run sphinx-lint and adds it to the
relevant tox envs to enforce it in CI. pre-commit is leveraged to
install and execute sphinx-lint, but this does not require you to
install the hooks locally into your working directory.
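
For example, assuming pre-commit itself is installed, the check can be
run locally without installing the git hooks by invoking the same
command the new tox env wraps:

    $ pre-commit run --all-files --show-diff-on-failure sphinx-lint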

Change-Id: Ib97b35c9014bc31876003cef4362c47a8a3a4e0e
Sean Mooney 2023-10-02 22:55:01 +01:00
parent c199becf52
commit 33a56781f4
31 changed files with 80 additions and 66 deletions


@ -75,3 +75,10 @@ repos:
| nova/virt/libvirt/host.py
| nova/virt/libvirt/utils.py
)
- repo: https://github.com/sphinx-contrib/sphinx-lint
rev: v0.6.8
hooks:
- id: sphinx-lint
args: [--enable=default-role]
files: ^doc/|releasenotes|api-guide
types: [rst]
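
As context for the documentation hunks below: the ``--enable=default-role``
argument makes sphinx-lint flag single-backtick (default role) markup, which
is why most of the changes replace single backticks with double backticks
for inline literals, e.g. (taken from one of the hunks that follow):

    - The part before the `://`
    + The part before the ``://``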


@ -559,7 +559,7 @@ URL (using ``rabbit://bob:s3kret@myhost:123/nova?sync=true#extra``):
- Meaning
- Part of example URL
* - ``scheme``
- The part before the `://`
- The part before the ``://``
- ``rabbit``
* - ``username``
- The username part of the credentials


@ -12,7 +12,7 @@ configuration file were used.
The 2023.1 and later compute node identification file must remain unchanged
during the lifecycle of the compute node. Changing the value or removing the
file will result in a failure to start and may require advanced techniques
for recovery. The file is read once at `nova-compute`` startup, at which point
for recovery. The file is read once at ``nova-compute`` startup, at which point
it is validated for formatting and the corresponding node is located or
created in the database.
@ -27,7 +27,7 @@ as the (UUID-based) identity and mapping of compute nodes to compute manager
service hosts is dynamic. In that case, no single node identity is maintained
by the compute host and thus no identity file is read or written. Thus none
of the sections below apply to hosts with :oslo.config:option:`compute_driver`
set to `ironic`.
set to ``ironic``.
Self-provisioning of the node identity
--------------------------------------


@ -928,7 +928,7 @@ cached images are stored.
have a shared file system.
You can automatically purge unused images after a specified period of time. To
configure this action, set these options in the :oslo.config:group`image_cache`
configure this action, set these options in the :oslo.config:group:`image_cache`
section in the ``nova.conf`` file:
* :oslo.config:option:`image_cache.remove_unused_base_images`


@ -142,7 +142,7 @@ If :oslo.config:option:`cpu_mode=none <libvirt.cpu_mode>`, libvirt does not
specify a CPU model. Instead, the hypervisor chooses the default model.
The ``none`` CPU model is the default for all non-KVM/QEMU hypervisors.
(:oslo.config:option:`libvirt.virt_type`\ !=``kvm``/``qemu``)
(:oslo.config:option:`libvirt.virt_type`\ != ``kvm`` / ``qemu``)
.. _cpu-models:


@ -764,7 +764,7 @@ tuned. That being said, we also provide a way to automatically change the
governors on the fly, as explained below.
.. important::
Some OS platforms don't support `cpufreq` resources in sysfs, so the
Some OS platforms don't support ``cpufreq`` resources in sysfs, so the
``governor`` strategy could be not available. Please verify if your OS
supports scaling govenors before modifying the configuration option.


@ -110,4 +110,4 @@ instances up and running.
If the instance task state is not None, evacuation will be possible. However,
depending on the ongoing operation, there may be clean up required in other
services which the instance was using, such as neutron, cinder, glance, or
the storage backend.
the storage backend.


@ -153,11 +153,11 @@ realtime guests but can also be enabled explicitly using the
``hw:locked_memory`` extra spec (or use ``hw_locked_memory`` image property).
``hw:locked_memory`` (also ``hw_locked_memory`` image property) accept
boolean values in string format like 'true' or 'false' value.
It will raise `FlavorImageLockedMemoryConflict` exception if both flavor and
It will raise ``FlavorImageLockedMemoryConflict`` exception if both flavor and
image property are specified but with different boolean values.
This will only be allowed if you have also set ``hw:mem_page_size``,
so we can ensure that the scheduler can actually account for this correctly
and prevent out of memory events. Otherwise, will raise `LockMemoryForbidden`
and prevent out of memory events. Otherwise, will raise ``LockMemoryForbidden``
exception.
.. code:: console


@ -230,7 +230,7 @@ taken against a VM after
2. ``force_complete``: The compute service will either pause the VM or trigger
post-copy depending on if post copy is enabled and available
(:oslo.config:option:`libvirt.live_migration_permit_post_copy` is set to
`True`). This is similar to using API
``True``). This is similar to using API
``POST /servers/{server_id}/migrations/{migration_id}/action (force_complete)``.
You can also read the


@ -173,26 +173,26 @@ Admins may also refresh an existing volume attachment using the following
include orchestrating the shutdown of an instance and refreshing volume
attachments among other things.
To begin the admin can use the `volume_attachment show` subcommand to dump
To begin the admin can use the ``volume_attachment show`` subcommand to dump
existing details of the attachment directly from the Nova database. This
includes the stashed `connection_info` not shared by the API.
includes the stashed ``connection_info`` not shared by the API.
.. code-block:: shell
$ nova-manage volume_attachment show 216f9481-4c9d-4530-b865-51cedfa4b8e7 8b9b3491-f083-4485-8374-258372f3db35 --json | jq .attachment_id
"d338fb38-cfd5-461f-8753-145dcbdb6c78"
If the stored `connection_info` or `attachment_id` are incorrect then the
If the stored ``connection_info`` or ``attachment_id`` are incorrect then the
admin may want to refresh the attachment to the compute host entirely by
recreating the Cinder volume attachment record(s) and pulling down fresh
`connection_info`. To do this we first need to ensure the instance is stopped:
``connection_info``. To do this we first need to ensure the instance is stopped:
.. code-block:: shell
$ openstack server stop 216f9481-4c9d-4530-b865-51cedfa4b8e7
Once stopped the host connector of the compute hosting the instance has to be
fetched using the `volume_attachment get_connector` subcommand:
fetched using the ``volume_attachment get_connector`` subcommand:
.. code-block:: shell
@ -204,7 +204,7 @@ fetched using the `volume_attachment get_connector` subcommand:
the host connector into the main refresh command. Unfortunately until then
it must remain a separate manual step.
We can then provide this connector to the `volume_attachment refresh`
We can then provide this connector to the ``volume_attachment refresh``
subcommand. This command will connect to the compute, disconnect any host
volume connections, delete the existing Cinder volume attachment,
recreate the volume attachment and finally update Nova's database.


@ -220,7 +220,7 @@ After power resumes and all hardware components restart:
stop at the plymouth stage. This is expected behavior. DO NOT reboot a
second time.
Instance state at this stage depends on whether you added an `/etc/fstab`
Instance state at this stage depends on whether you added an ``/etc/fstab``
entry for that volume. Images built with the cloud-init package remain in
a ``pending`` state, while others skip the missing volume and start. You
perform this step to ask Compute to reboot every instance so that the


@ -86,7 +86,7 @@ are made in order:
.. note::
The API sets the limit in the ``quota_classes`` table. Once a default limit
is set via the `default` quota class, that takes precedence over any
is set via the ``default`` quota class, that takes precedence over any
changes to that resource limit in the configuration options. In other
words, once you've changed things via the API, you either have to keep
those synchronized with the configuration values or remove the default


@ -269,7 +269,7 @@ Refer to :doc:`/admin/aggregates` for more information.
Filters host by disk allocation with a per-aggregate ``max_io_ops_per_host``
value. If the per-aggregate value is not found, the value falls back to the
global setting defined by the
`:oslo.config:option:`filter_scheduler.max_io_ops_per_host` config option.
:oslo.config:option:`filter_scheduler.max_io_ops_per_host` config option.
If the host is in more than one aggregate and more than one value is found, the
minimum value will be used.
@ -906,7 +906,7 @@ that are not and are not available using the
respectively.
Starting with the Stein release, if per-aggregate value with the key
`metrics_weight_multiplier` is found, this value would be chosen as the
``metrics_weight_multiplier`` is found, this value would be chosen as the
metrics weight multiplier. Otherwise, it will fall back to the
:oslo.config:option:`metrics.weight_multiplier`. If more than
one value is found for a host in aggregate metadata, the minimum value will
@ -1029,7 +1029,7 @@ value will be used.
Weighs hosts based on which cell they are in. "Local" cells are preferred when
moving an instance.
If per-aggregate value with the key `cross_cell_move_weight_multiplier` is
If per-aggregate value with the key ``cross_cell_move_weight_multiplier`` is
found, this value would be chosen as the cross-cell move weight multiplier.
Otherwise, it will fall back to the
:oslo.config:option:`filter_scheduler.cross_cell_move_weight_multiplier`. If
@ -1050,7 +1050,7 @@ hosts with different hypervisors.
For example, the ironic virt driver uses the ironic API micro-version as the hypervisor
version for a given node. The libvirt driver uses the libvirt version
i.e. Libvirt `7.1.123` becomes `700100123` vs Ironic `1.82` becomes `1`.
i.e. Libvirt ``7.1.123`` becomes ``700100123`` vs Ironic ``1.82`` becomes ``1``.
If you have a mixed virt driver deployment in the ironic vs non-ironic
case nothing special needs to be done. ironic nodes are scheduled using custom


@ -206,7 +206,7 @@ Problem
You can view the log output of running instances from either the
:guilabel:`Log` tab of the dashboard or the output of :command:`nova
console-log`. In some cases, the log output of a running Linux instance will be
empty or only display a single character (for example, the `?` character).
empty or only display a single character (for example, the ``?`` character).
This occurs when the Compute service attempts to retrieve the log output of the
instance via a serial console while the instance itself is not configured to


@ -51,7 +51,7 @@ same time.
As of OpenStack 2023.1 (Antelope), Nova supports the coexistence of N and
N-2 (Yoga) :program:`nova-compute` or :program:`nova-conductor` services in
the same deployment. The `nova-conductor`` service will fail to start when
the same deployment. The ``nova-conductor`` service will fail to start when
a ``nova-compute`` service that is older than the support envelope is
detected. This varies by release and the support envelope will be explained
in the release notes. Similarly, in a :doc:`deployment with multiple cells
@ -116,7 +116,7 @@ same time.
#. After maintenance window:
* Once all services are running the new code, double check in the DB that
there are no old orphaned service records using `nova service-list`.
there are no old orphaned service records using ``nova service-list``.
* Now that all services are upgraded, we need to send the SIG_HUP signal, so all
the services clear any cached service version data. When a new service


@ -255,7 +255,7 @@ overridden in the policy.yaml file but scope is not override-able.
read-only operation within project. For example: Get server.
#. PROJECT_MEMBER_OR_ADMIN: ``admin`` or ``member`` role on ``project`` scope. Such policy rules are default to most of the owner level APIs and align
with `member` role legacy admin can continue to access those APIs.
with ``member`` role legacy admin can continue to access those APIs.
#. PROJECT_READER_OR_ADMIN: ``admin`` or ``reader`` role on ``project`` scope. Such policy rules are default to most of the read only APIs so that legacy
admin can continue to access those APIs.


@ -40,7 +40,7 @@ Guidelines for when a feature doesn't need a spec.
When a blueprint does not require a spec it still needs to be
approved before the code which implements the blueprint is merged.
Specless blueprints are discussed and potentially approved during
the `Open Discussion` portion of the weekly `nova IRC meeting`_. See
the ``Open Discussion`` portion of the weekly `nova IRC meeting`_. See
`trivial specifications`_ for more details.
Project Priorities


@ -177,7 +177,7 @@ function:
As a reminder, the hooks are optional and you are not enforced to run them.
You can either not install pre-commit or skip the hooks once by using the
`--no-verify` flag on `git commit`.
``--no-verify`` flag on ``git commit``.
Using a remote debugger
=======================


@ -47,7 +47,7 @@ every stable release (e.g. ``pike``).
unimproved as we address content in ``latest``.
The ``api-ref`` and ``api-guide`` publish only from master to a single site on
`docs.openstack.org`. As such, they are effectively branchless.
``docs.openstack.org``. As such, they are effectively branchless.
Guidelines for consumable docs
==============================


@ -380,7 +380,7 @@ necessary to add changes to other places which describe your change:
env ``api-samples`` or run test with env var ``GENERATE_SAMPLES`` True.
* Update the `API Reference`_ documentation as appropriate. The source is
located under `api-ref/source/`.
located under ``api-ref/source/``.
* If the microversion changes servers related APIs, update the
``api-guide/source/server_concepts.rst`` accordingly.


@ -284,7 +284,7 @@ And now back on the physical host edit the guest config as root:
$ sudo virsh edit f29x86_64
The first thing is to change the `<cpu>` block to do passthrough of the host
The first thing is to change the ``<cpu>`` block to do passthrough of the host
CPU. In particular this exposes the "SVM" or "VMX" feature bits to the guest so
that "Nested KVM" can work. At the same time we want to define the NUMA
topology of the guest. To make things interesting we're going to give the guest
@ -547,7 +547,7 @@ Testing instance boot with no NUMA topology requested
For the sake of backwards compatibility, if the NUMA filter is enabled, but the
flavor/image does not have any NUMA settings requested, it should be assumed
that the guest will have a single NUMA node. The guest should be locked to a
single host NUMA node too. Boot a guest with the `m1.tiny` flavor to test this
single host NUMA node too. Boot a guest with the ``m1.tiny`` flavor to test this
condition:
.. code-block:: bash


@ -26,7 +26,7 @@ Starting a new instance.
# cd devstack && . openrc
# nova boot --flavor 1 --image cirros-0.3.2-x86_64-uec cirros1
Nova provides a command `nova get-serial-console` which will returns a
Nova provides a command ``nova get-serial-console`` which will returns a
URL with a valid token to connect to the serial console of VMs.
.. code-block:: bash


@ -32,4 +32,4 @@ itself. This final ``connect_volume`` lock also being held when detaching and
disconnecting a volume from the host by ``os-brick``.
.. image:: /_static/images/attach_volume.svg
:alt: Attach volume workflow
:alt: Attach volume workflow


@ -172,7 +172,7 @@ broken into ``compute:foobar:create``, ``compute:foobar:update``,
``compute:foobar:list``, ``compute:foobar:get``, and ``compute:foobar:delete``.
Breaking policies down this way allows us to set read-only policies for
readable operations or use another default role for creation and management of
`foobar` resources. The oslo.policy library has `examples`_ that show how to do
``foobar`` resources. The oslo.policy library has `examples`_ that show how to do
this using deprecated policy rules.
.. _examples: https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#oslo_policy.policy.DeprecatedRule


@ -77,7 +77,7 @@ As the above diagram illustrates, scheduling works like so:
from the ranked list that are in the same cell as the selected host, which
can be used by the cell conductor in the event that the build on the
selected host fails for some reason. The number of alternates is determined
by the configuration option `scheduler.max_attempts`.
by the configuration option ``scheduler.max_attempts``.
#. Scheduler creates two list structures for each requested instance: one for
the hosts (selected + alternates), and the other for their matching


@ -50,6 +50,6 @@ Implementation-Specific Drivers
A manager will generally load a driver for some of its tasks. The driver is responsible for specific implementation details. Anything running shell commands on a host, or dealing with other non-python code should probably be happening in a driver.
Drivers should not touch the database as the database management is done inside `nova-conductor`.
Drivers should not touch the database as the database management is done inside ``nova-conductor``.
It usually makes sense to define an Abstract Base Class for the specific driver (i.e. VolumeDriver), to define the methods that a different driver would need to implement.


@ -40,7 +40,7 @@ When we talk about block device mapping, we usually refer to one of two things
format as the 'API BDMs' from now on.
2.2 The virt driver format - this is the format defined by the classes in
:mod: `nova.virt.block_device`. This format is used and expected by the code
:mod:`nova.virt.block_device`. This format is used and expected by the code
in the various virt drivers. These classes, in addition to exposing a
different format (mimicking the Python dict interface), also provide a place
to bundle some functionality common to certain types of block devices (for
@ -66,8 +66,8 @@ mirrored that of the EC2 API. During the Havana release of Nova, block device
handling code, and in turn the block device mapping structure, had work done on
improving the generality and usefulness. These improvements included exposing
additional details and features in the API. In order to facilitate this, a new
extension was added to the v2 API called `BlockDeviceMappingV2Boot` [2]_, that
added an additional `block_device_mapping_v2` field to the instance boot API
extension was added to the v2 API called ``BlockDeviceMappingV2Boot`` [2]_, that
added an additional ``block_device_mapping_v2`` field to the instance boot API
request.
Block device mapping v1 (aka legacy)
@ -82,14 +82,14 @@ this page), and would accept only:
* Type field - used only to distinguish between volumes and Cinder volume
snapshots
* Optional size field
* Optional `delete_on_termination` flag
* Optional ``delete_on_termination`` flag
While all of Nova internal code only uses and stores the new data structure, we
still need to handle API requests that use the legacy format. This is handled
by the Nova API service on every request. As we will see later, since block
device mapping information can also be stored in the image metadata in Glance,
this is another place where we need to handle the v1 format. The code to handle
legacy conversions is part of the :mod: `nova.block_device` module.
legacy conversions is part of the :mod:`nova.block_device` module.
Intermezzo - problem with device names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -110,7 +110,7 @@ for the features mentioned above (and preferably only then).
Another use for specifying the device name was to allow the "boot from volume"
functionality, by specifying a device name that matches the root device name
for the instance (usually `/dev/vda`).
for the instance (usually ``/dev/vda``).
Currently (mid Liberty) users are discouraged from specifying device names
for all calls requiring or allowing block device mapping, except when trying to
@ -131,19 +131,19 @@ fields (in addition to the ones that were already there):
* source_type - this can have one of the following values:
* `image`
* `volume`
* `snapshot`
* `blank`
* ``image``
* ``volume``
* ``snapshot``
* ``blank``
* destination_type - this can have one of the following values:
* `local`
* `volume`
* ``local``
* ``volume``
* guest_format - Tells Nova how/if to format the device prior to attaching,
should be only used with blank local images. Denotes a swap disk if the value
is `swap`.
is ``swap``.
* device_name - See the previous section for a more in depth explanation of
this - currently best left empty (not specified that is), unless the user
@ -153,8 +153,8 @@ fields (in addition to the ones that were already there):
get changed by the driver.
* disk_bus and device_type - low level details that some hypervisors (currently
only libvirt) may support. Some example disk_bus values can be: `ide`, `usb`,
`virtio`, `scsi`, while device_type may be `disk`, `cdrom`, `floppy`, `lun`.
only libvirt) may support. Some example disk_bus values can be: ``ide``, ``usb``,
``virtio``, ``scsi``, while device_type may be ``disk``, ``cdrom``, ``floppy``, ``lun``.
This is not an exhaustive list as it depends on the virtualization driver,
and may change as more support is added. Leaving these empty is the most
common thing to do.
@ -185,28 +185,28 @@ Combination of the ``source_type`` and ``destination_type`` will define the
kind of block device the entry is referring to. The following
combinations are supported:
* `image` -> `local` - this is only currently reserved for the entry
* ``image`` -> ``local`` - this is only currently reserved for the entry
referring to the Glance image that the instance is being booted with
(it should also be marked as a boot device). It is also worth noting
that an API request that specifies this, also has to provide the
same Glance uuid as the `image_ref` parameter to the boot request
same Glance uuid as the ``image_ref`` parameter to the boot request
(this is done for backwards compatibility and may be changed in the
future). This functionality might be extended to specify additional
Glance images to be attached to an instance after boot (similar to
kernel/ramdisk images) but this functionality is not supported by
any of the current drivers.
* `volume` -> `volume` - this is just a Cinder volume to be attached to the
* ``volume`` -> ``volume`` - this is just a Cinder volume to be attached to the
instance. It can be marked as a boot device.
* `snapshot` -> `volume` - this works exactly as passing `type=snap` does.
* ``snapshot`` -> ``volume`` - this works exactly as passing ``type=snap`` does.
It would create a volume from a Cinder volume snapshot and attach that
volume to the instance. Can be marked bootable.
* `image` -> `volume` - As one would imagine, this would download a Glance
* ``image`` -> ``volume`` - As one would imagine, this would download a Glance
image to a cinder volume and attach it to an instance. Can also be marked
as bootable. This is really only a shortcut for creating a volume out of
an image before booting an instance with the newly created volume.
* `blank` -> `volume` - Creates a blank Cinder volume and attaches it. This
* ``blank`` -> ``volume`` - Creates a blank Cinder volume and attaches it. This
will also require the volume size to be set.
* `blank` -> `local` - Depending on the guest_format field (see below),
* ``blank`` -> ``local`` - Depending on the guest_format field (see below),
this will either mean an ephemeral blank disk on hypervisor local
storage, or a swap disk (instances can have only one of those).
@ -216,13 +216,13 @@ will do basic validation to make sure that the requested block device
mapping is valid before accepting a boot request.
.. [1] In addition to the BlockDeviceMapping Nova object, we also have the
BlockDeviceDict class in :mod: `nova.block_device` module. This class
BlockDeviceDict class in :mod:`nova.block_device` module. This class
handles transforming and validating the API BDM format.
.. [2] This work predates API microversions and thus the only way to add it was
by means of an API extension.
.. [3] This is a feature that the EC2 API offers as well and has been in Nova
for a long time, although it has been broken in several releases. More info
can be found on `this bug <https://launchpad.net/bugs/1370250>`
can be found on `this bug <https://launchpad.net/bugs/1370250>`_
FAQs


@ -451,9 +451,9 @@ certificate as ``cert_client.pem``.
Upload the generated certificates to the key manager
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In order interact with the key manager, the user needs to have a `creator` role.
In order interact with the key manager, the user needs to have a ``creator`` role.
To list all users with a `creator` role, run the following command as an admin:
To list all users with a ``creator`` role, run the following command as an admin:
.. code-block:: console
@ -467,7 +467,7 @@ To list all users with a `creator` role, run the following command as an admin:
| creator | project_a_creator@Default | | project_a@Default | | False |
+---------+-----------------------------+-------+-------------------+--------+-----------+
To give the `demo` user a `creator` role in the `demo` project, run the
To give the ``demo`` user a ``creator`` role in the ``demo`` project, run the
following command as an admin:
.. code-block:: console


@ -25,7 +25,7 @@ Flavor ID
Name
Name for the new flavor. This property is required.
Historically, names were given a format `XX.SIZE_NAME`. These are typically
Historically, names were given a format ``XX.SIZE_NAME``. These are typically
not required, though some third party tools may rely on it.
VCPUs


@ -304,7 +304,7 @@ encryption.
.. note::
A bootable encrypted volume can also be created by adding the
`--type ENCRYPTED_VOLUME_TYPE` parameter to the volume create command.
``--type ENCRYPTED_VOLUME_TYPE`` parameter to the volume create command.
For example:
.. code-block:: console


@ -131,6 +131,13 @@ deps =
commands =
pre-commit run --all-files --show-diff-on-failure codespell
[testenv:sphinx-lint]
description =
Run sphinx lint checks.
deps = pre-commit
commands =
pre-commit run --all-files --show-diff-on-failure sphinx-lint
[testenv:fast8]
description =
Run style checks on the changes made since HEAD~. For a full run including docs, use 'pep8'
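
With the ``[testenv:sphinx-lint]`` environment above in place, the docs
lint can also be run locally via tox, e.g.:

    $ tox -e sphinx-lint

which runs the same pre-commit hook without requiring the hooks to be
installed into the local git checkout.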