[codespell] Fix spelling issues in IPA

This fixes several spelling issues identified by codespell. In some
cases, I may have manually modified a line to make the output clearer
or to correct grammatical issues that were obvious in the codespell
output.

Later changes in this chain will provide the codespell config used to
generate this, as well as add this commit's SHA, once landed, to a
.git-blame-ignore-revs file to ensure it will not pollute git histories
for modern clients.

Related-Bug: 2047654
Change-Id: I240cf8484865c9b748ceb51f3c7b9fd973cb5ada
Jay Faulkner 2023-12-28 10:39:16 -08:00
parent 03b6b0a4ab
commit 36e5993a04
29 changed files with 63 additions and 63 deletions

@ -147,7 +147,7 @@ to be used.
.. NOTE::
The Ironic Developers chose to limit the items available for service steps
-such that the risk of data distruction is generally minimized.
+such that the risk of data destruction is generally minimized.
That being said, it could be reasonable to reconfigure RAID devices through
local hardware managers *or* to write the base OS image as part of a
service operation. As such, caution should be taken, and if additional data
@ -160,7 +160,7 @@ Cleaning safeguards
-------------------
The stock hardware manager contains a number of safeguards to prevent
-unsafe conditions from occuring.
+unsafe conditions from occurring.
Devices Skip List
~~~~~~~~~~~~~~~~~

@ -245,7 +245,7 @@ We also support determining the checksum by length.
The field can be utilized to designate:
-* A URL to retreive a checksum from.
+* A URL to retrieve a checksum from.
* MD5 (Disabled by default, see ``[DEFAULT]md5_enabled`` in the agent
configuration file.)
* SHA-2 based SHA256

@ -169,7 +169,8 @@ Cleaning halted with ProtectedDeviceError
The IPA service has halted cleaning as one of the block devices within or
attached to the bare metal node contains a class of filesystem which **MAY**
-cause irreparable harm to a potentially running cluster if accidently removed.
+cause irreparable harm to a potentially running cluster if accidentally
+removed.
These filesystems *may* be used for only local storage and as a result be
safe to erase. However if a shared block device is in use, such as a device

@ -39,7 +39,7 @@ Upon success, it returns following data in response::
If successful, this synchronous command will:
-1. Write the salted and crypted ``rescue_password`` to
+1. Write the salted and encrypted ``rescue_password`` to
``/etc/ipa-rescue-config/ipa-rescue-password`` in the chroot or filesystem
that ironic-python-agent is running in.

@ -22,7 +22,7 @@ LOG = log.getLogger()
# All the helper methods should be kept outside of the HardwareManager
# so they'll never get accidentally called by dispatch_to_managers()
def _initialize_hardware():
"""Example method for initalizing hardware."""
"""Example method for initializing hardware."""
# Perform any operations here that are required to initialize your
# hardware.
LOG.debug('Loading drivers, settling udevs, and generally initalizing')
@ -64,7 +64,7 @@ class ExampleDeviceHardwareManager(hardware.HardwareManager):
"""Declare level of hardware support provided.
Since this example covers a case of supporting a specific device,
-this method is where you would do anything needed to initalize that
+this method is where you would do anything needed to initialize that
device, including loading drivers, and then detect if one exists.
In some cases, if you expect the hardware to be available on any node

@ -350,7 +350,7 @@ class ClockSyncError(RESTError):
class HeartbeatConnectionError(IronicAPIError):
"""Transitory connection failure occured attempting to contact the API."""
"""Transitory connection failure occurred attempting to contact the API."""
message = ("Error attempting to heartbeat - Possible transitory network "
"failure or blocking port may be present.")

@ -137,7 +137,7 @@ def _mount_partition(partition, path):
# NOTE(TheJulia): It seems in some cases,
# the python os.path.ismount can return False
# even *if* it is actually mounted. This appears
-# to be becasue it tries to rely on inode on device
+# to be because it tries to rely on inode on device
# logic, yet the rules are sometimes different inside
# ramdisks. So lets check the error first.
if 'already mounted' not in e:
@ -436,13 +436,12 @@ def _try_preserve_efi_assets(device, path,
efi_partition_mount_point):
"""Attempt to preserve UEFI boot assets.
-:param device: The device upon which wich to try to preserve
-assets.
+:param device: The device upon which to try to preserve assets.
:param path: The path in which the filesystem is already mounted
which we should examine to preserve assets from.
:param efi_system_part_uuid: The partition ID representing the
created EFI system partition.
-:param efi_partition: The partitions upon wich to write the preserved
+:param efi_partition: The partitions upon which to write the preserved
assets to.
:param efi_partition_mount_point: The folder at which to mount
the assets for the process of
@ -450,7 +449,7 @@ def _try_preserve_efi_assets(device, path,
:returns: True if assets have been preserved, otherwise False.
None is the result of this method if a failure has
-occured.
+occurred.
"""
efi_assets_folder = efi_partition_mount_point + '/EFI'
if os.path.exists(efi_assets_folder):
@ -459,7 +458,7 @@ def _try_preserve_efi_assets(device, path,
# True from _preserve_efi_assets to correspond with success or
# failure in this action.
# NOTE(TheJulia): Still makes sense to invoke grub-install as
-# fragmentation of grub has occured.
+# fragmentation of grub has occurred.
if (os.path.exists(os.path.join(path, 'usr/sbin/grub2-install'))
or os.path.exists(os.path.join(path, 'usr/sbin/grub-install'))):
_configure_grub(device, path)
@ -592,7 +591,7 @@ def _preserve_efi_assets(path, efi_assets_folder, efi_partition,
:param efi_assets_folder: The folder where we can find the
UEFI assets required for booting.
:param efi_partition: The partition upon which to write the
-perserved assets to.
+preserved assets to.
:param efi_partition_mount_point: The folder at which to mount
the assets for the process of
preservation.
@ -629,7 +628,7 @@ def _preserve_efi_assets(path, efi_assets_folder, efi_partition,
# NOTE(TheJulia): By saving the default, this file should be created.
# this appears to what diskimage-builder does.
# if the file is just a file, then we'll need to copy it. If it is
-# anything else like a link, we're good. This behaivor is inconsistent
+# anything else like a link, we're good. This behavior is inconsistent
# depending on packager install scripts for grub.
if os.path.isfile(grub2_env_file):
LOG.debug('Detected grub environment file %s, will attempt '
@ -685,7 +684,7 @@ class ImageExtension(base.BaseAgentExtension):
Used only for booting ppc64* partition images locally. In this
scenario the bootloader will be installed here.
:param target_boot_mode: bios, uefi. Only taken into account
-for softraid, when no efi partition is explicitely provided
+for softraid, when no efi partition is explicitly provided
(happens for whole disk images)
:raises: CommandExecutionError if the installation of the
bootloader fails.

@ -415,7 +415,7 @@ def _get_algorithm_by_length(checksum):
:param checksum: The requested checksum.
:returns: A hashlib object based upon the checksum
-or ValueError if the algorthm could not be
+or ValueError if the algorithm could not be
identified.
"""
# NOTE(TheJulia): This is all based on SHA-2 lengths.
@ -462,7 +462,7 @@ class ImageDownload(object):
def __init__(self, image_info, time_obj=None):
"""Initialize an instance of the ImageDownload class.
-Trys each URL in image_info successively until a URL returns a
+Tries each URL in image_info successively until a URL returns a
successful request code. Once the object is initialized, the user may
retrieve chunks of the image through the standard python iterator
interface until either the image is fully downloaded, or an error is
@ -500,7 +500,7 @@ class ImageDownload(object):
image_info)
retrieved_checksum = True
if not algo:
-# Override algorithm not suppied as os_hash_algo
+# Override algorithm not supplied as os_hash_algo
self._hash_algo = _get_algorithm_by_length(
self._expected_hash_value)
elif checksum:
@ -510,7 +510,7 @@ class ImageDownload(object):
if not new_algo:
# Realistically, this should never happen, but for
-# compatability...
+# compatibility...
# TODO(TheJulia): Remove for a 2024 release.
self._hash_algo = hashlib.new('md5') # nosec
else:
@ -676,7 +676,7 @@ def _download_image(image_info):
totaltime = time.time() - starttime
LOG.info("Image downloaded from %(image_location)s "
"in %(totaltime)s seconds. Transferred %(size)s bytes. "
"Server originaly reported: %(reported)s.",
"Server originally reported: %(reported)s.",
{'image_location': image_location,
'totaltime': totaltime,
'size': image_download.bytes_transferred,
@ -854,7 +854,7 @@ class StandbyExtension(base.BaseAgentExtension):
totaltime = time.time() - starttime
LOG.info("Image streamed onto device %(device)s in %(totaltime)s "
"seconds for %(size)s bytes. Server originaly reported "
"seconds for %(size)s bytes. Server originally reported "
"%(reported)s.",
{'device': device, 'totaltime': totaltime,
'size': image_download.bytes_transferred,
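For context, the checksum-by-length fallback touched in this file can be sketched roughly as follows. This is a minimal sketch: the helper name and SHA-2 length mapping mirror the `_get_algorithm_by_length` docstring above, but the details are assumptions, not the exact IPA implementation.

```python
import hashlib

# Sketch (assumed, not the exact IPA code): infer a SHA-2 algorithm
# from the hex-digest length of a supplied checksum string.
SHA2_BY_LENGTH = {64: 'sha256', 96: 'sha384', 128: 'sha512'}


def get_algorithm_by_length(checksum):
    """Return a new hashlib object guessed from the checksum's length.

    :raises: ValueError if the algorithm could not be identified.
    """
    name = SHA2_BY_LENGTH.get(len(checksum))
    if name is None:
        raise ValueError(
            'Unable to identify checksum algorithm from length %d'
            % len(checksum))
    return hashlib.new(name)
```

A 64-character hex digest maps to SHA-256, 128 characters to SHA-512; anything else is rejected rather than silently guessed.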

@ -703,7 +703,7 @@ def list_all_block_devices(block_type='disk',
def save_api_client(client=None, timeout=None, interval=None):
"""Preserves access to the API client for potential later re-use."""
"""Preserves access to the API client for potential later reuse."""
global API_CLIENT, API_LOOKUP_TIMEOUT, API_LOOKUP_INTERVAL
if client and timeout and interval and not API_CLIENT:
@ -713,7 +713,7 @@ def save_api_client(client=None, timeout=None, interval=None):
def update_cached_node():
"""Attmepts to update the node cache via the API"""
"""Attempts to update the node cache via the API"""
cached_node = get_cached_node()
if API_CLIENT:
LOG.info('Agent is requesting to perform an explicit node cache '
@ -1165,7 +1165,7 @@ class HardwareManager(object, metaclass=abc.ABCMeta):
'interface': the name of the driver interface that should execute
the step.
'step': the HardwareManager function to call.
-'priority': the order steps will be run in if excuted upon
+'priority': the order steps will be run in if executed upon
similar to automated cleaning or deployment.
In service steps, the order comes from the user request,
but this similarity is kept for consistency should we
@ -2296,7 +2296,7 @@ class GenericHardwareManager(HardwareManager):
:return: IPv6 address of lan channel or ::/0 in case none of them is
configured properly. May return None value if it cannot
-interract with system tools or critical error occurs.
+interact with system tools or critical error occurs.
"""
null_address_re = re.compile(r'^::(/\d{1,3})*$')
@ -2313,9 +2313,9 @@ class GenericHardwareManager(HardwareManager):
# dynamic_addr and static_addr commands is a valid yaml.
try:
out = yaml.safe_load(out.strip())
-except yaml.YAMLError as excpt:
+except yaml.YAMLError as ex:
LOG.warning('Cannot process output of "%(cmd)s" '
-'command: %(e)s', {'cmd': cmd, 'e': excpt})
+'command: %(e)s', {'cmd': cmd, 'e': ex})
return
for addr_dict in out.values():
@ -2498,8 +2498,8 @@ class GenericHardwareManager(HardwareManager):
'reboot_requested': False,
'abortable': True
},
-# NOTE(TheJulia): Burnin disk is explicilty not carried in this
-# list because it would be distructive to data on a disk.
+# NOTE(TheJulia): Burnin disk is explicitly not carried in this
+# list because it would be destructive to data on a disk.
# If someone needs to do that, the machine should be
# unprovisioned.
{
@ -3420,16 +3420,16 @@ def get_multipath_status():
def safety_check_block_device(node, device):
"""Performs safety checking of a block device before destroying.
-In order to guard against distruction of file systems such as
+In order to guard against destruction of file systems such as
shared-disk file systems
(https://en.wikipedia.org/wiki/Clustered_file_system#SHARED-DISK)
or similar filesystems where multiple distinct computers may have
unlocked concurrent IO access to the entire block device or
SAN Logical Unit Number, we need to evaluate, and block cleaning
-from occuring on these filesystems *unless* we have been explicitly
+from occurring on these filesystems *unless* we have been explicitly
configured to do so.
-This is because cleaning is an intentionally distructive operation,
+This is because cleaning is an intentionally destructive operation,
and once started against such a device, given the complexities of
shared disk clustered filesystems where concurrent access is a design
element, in all likelihood the entire cluster can be negatively
@ -3487,7 +3487,7 @@ def safety_check_block_device(node, device):
def _check_for_special_partitions_filesystems(device, ids, fs_types):
"""Compare supplied IDs, Types to known items, and raise if found.
-:param device: The block device in use, specificially for logging.
+:param device: The block device in use, specifically for logging.
:param ids: A list above IDs found to check.
:param fs_types: A list of FS types found to check.
:raises: ProtectedDeviceError should a partition label or metadata
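The shared-disk safeguard described in this hunk can be sketched as below. This is an assumed, simplified version: the guarded filesystem types and metadata labels are illustrative examples, not the project's actual lists, and the helper is a stand-in for `_check_for_special_partitions_filesystems`.

```python
# Sketch (assumed markers, not the exact IPA lists): refuse to clean a
# block device when a clustered/shared-disk filesystem is detected.
GUARDED_FS_TYPES = {'gfs2', 'ocfs2'}   # illustrative shared-disk types
GUARDED_IDS = {'IBM GPFS'}             # illustrative metadata labels


class ProtectedDeviceError(Exception):
    """Raised instead of destroying a potentially shared device."""


def check_for_special_partitions_filesystems(device, ids, fs_types):
    """Raise ProtectedDeviceError if any supplied ID or FS type is guarded."""
    for fs_type in fs_types:
        if fs_type in GUARDED_FS_TYPES:
            raise ProtectedDeviceError(
                'Refusing to erase %s: %s filesystem found'
                % (device, fs_type))
    for part_id in ids:
        if part_id in GUARDED_IDS:
            raise ProtectedDeviceError(
                'Refusing to erase %s: %s metadata found'
                % (device, part_id))
```

Raising rather than returning a status forces the caller to abort cleaning unless explicitly configured to proceed.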

@ -101,7 +101,7 @@ class IronicInspection(threading.Thread):
self.max_delay)
except Exception as e:
# General failure such as requests ConnectionError
-LOG.error('Error occured attempting to connect to '
+LOG.error('Error occurred attempting to connect to '
'connect to the introspection service. '
'Error: %(err)s',
{'err': e})

@ -231,10 +231,10 @@ class APIClient(object):
msg = ('Unhandled error looking up node with addresses {} at '
'{}: {}'.format(params['addresses'], self.api_url, err))
# No matter what we do at this point, IPA is going to exit.
-# This is because we don't know why the exception occured and
+# This is because we don't know why the exception occurred and
# we likely should not try to retry as such.
# We will attempt to provide as much detail to the logs as
-# possible as to what occured, although depending on the logging
+# possible as to what occurred, although depending on the logging
# subsystem, additional errors can occur, thus the additional
# handling below.
try:
@ -242,7 +242,7 @@ class APIClient(object):
return False
except Exception as exc_err:
LOG.error(msg)
-exc_msg = ('Unexpected exception occured while trying to '
+exc_msg = ('Unexpected exception occurred while trying to '
'log additional detail. Error: {}'.format(exc_err))
LOG.error(exc_msg)
raise errors.LookupNodeError(msg)

@ -562,7 +562,7 @@ def _try_build_fat32_config_drive(partition, confdrive_file):
# config drive, nor could we recover the state.
LOG.error('We were unable to make a new filesystem for the '
'configuration drive. Error: %s', e)
-msg = ('A failure occured while attempting to format, copy, and '
+msg = ('A failure occurred while attempting to format, copy, and '
're-create the configuration drive in a structure which '
'is compatible with the underlying hardware and Operating '
'System. Due to the nature of configuration drive, it could '

@ -128,7 +128,7 @@ def calc_raid_partition_sectors(psize, start):
the unit of measure, compatible with parted.
:param psize: size of the raid partition
-:param start: start sector of the raid partion in integer format
+:param start: start sector of the raid partition in integer format
:return: start and end sector in parted compatible format, end sector
as integer
"""

@ -389,7 +389,7 @@ Boot0002 VENDMAGIC FvFile(9f3c6294-bf9b-4208-9808-be45dfc34b51)
mock_efi_bl.return_value = ['EFI/BOOT/BOOTX64.EFI']
# NOTE(TheJulia): This test string was derived from a lenovo SR650
# which does do some weird things with additional entries.
-# most notabley
+# most notably
stdout_msg = """
BootCurrent: 0000
Timeout: 1 seconds

@ -293,7 +293,7 @@ class TestServiceExtension(base.IronicAgentTest):
ports=self.ports, service_version=self.version)
async_result.join()
# NOTE(TheJulia): This remains CLEAN_VERSION_MISMATCH for backwards
-# compatability with base.py and API consumers.
+# compatibility with base.py and API consumers.
self.assertEqual('CLEAN_VERSION_MISMATCH',
async_result.command_status)

@ -523,7 +523,7 @@ class TestStandbyExtension(base.IronicAgentTest):
'checksum': 'fake-checksum'}),
mock.call.info('Image downloaded from %(image_location)s in '
'%(totaltime)s seconds. Transferred %(size)s '
-'bytes. Server originaly reported: %(reported)s.',
+'bytes. Server originally reported: %(reported)s.',
{'image_location': mock.ANY,
'totaltime': mock.ANY,
'size': 11,
@ -1401,7 +1401,7 @@ class TestStandbyExtension(base.IronicAgentTest):
'checksum': 'fake-checksum'}),
mock.call.info('Image streamed onto device %(device)s in '
'%(totaltime)s seconds for %(size)s bytes. '
-'Server originaly reported %(reported)s.',
+'Server originally reported %(reported)s.',
{'device': '/dev/foo',
'totaltime': mock.ANY,
'size': 11,

@ -165,7 +165,7 @@ BLK_INCOMPLETE_DEVICE_TEMPLATE_SMALL = """
# NOTE(dszumski): We include some partitions here to verify that
# they are filtered out when not requested. It is assumed that
# ROTA has been set to 0 on some software RAID devices for testing
-# purposes. In practice is appears to inherit from the underyling
+# purposes. In practice is appears to inherit from the underlying
# devices, so in this example it would normally be 1.
RAID_BLK_DEVICE_TEMPLATE = ("""
{

@ -464,7 +464,7 @@ class TestBurnin(base.IronicAgentTest):
# we are the first node to enter, so no other host
# initially until the second one appears after some
-# interations
+# iterations
mock_coordinator.get_members.side_effect = \
[Members(), Members([b'host1']), Members([b'host1']),
Members([b'host1']), Members([b'host1', b'host2'])]

@ -3951,7 +3951,7 @@ class TestGenericHardwareManager(base.IronicAgentTest):
[device1, device2, device3],
[device1, device2, device3]]
-# pre-creation validation fails as insufficent number of devices found
+# pre-creation validation fails as insufficient number of devices found
error_regex = ("Software RAID configuration is not possible for "
"RAID level 6 with only 3 block devices found.")

@ -228,7 +228,7 @@ class TestInjectOne(base.IronicAgentTest):
with open(self.path, 'rb') as fp:
self.assertEqual(b'content', fp.read())
self.assertEqual(0o602, stat.S_IMODE(os.stat(self.path).st_mode))
-# Exising directories do not change their permissions
+# Existing directories do not change their permissions
self.assertNotEqual(0o703, stat.S_IMODE(os.stat(self.dirpath).st_mode))
mock_find_and_mount.assert_called_once_with(fl['path'], None,

@ -1501,7 +1501,7 @@ class TestConfigDriveTestRecovery(base.IronicAgentTest):
mock_mkfs.side_effect = processutils.ProcessExecutionError('boom')
self.assertRaisesRegex(
exception.InstanceDeployFailure,
-'A failure occured while attempting to format.*',
+'A failure occurred while attempting to format.*',
partition_utils._try_build_fat32_config_drive,
self.fake_dev,
self.configdrive_file)

@ -224,7 +224,7 @@ def _find_mount_point(device):
def _check_vmedia_device(vmedia_device_file):
"""Check if a virtual media device appears valid.
-Explicitly ignores partitions, actual disks, and other itmes that
+Explicitly ignores partitions, actual disks, and other items that
seem unlikely to be virtual media based items being provided
into the running operating system via a BMC.
@ -622,7 +622,7 @@ def _parse_capabilities_str(cap_str):
"""Extract capabilities from string.
:param cap_str: string meant to meet key1:value1,key2:value2 format
-:return: a dictionnary
+:return: a dictionary
"""
LOG.debug("Parsing capability string %s", cap_str)
capabilities = {}
@ -689,7 +689,7 @@ def get_node_boot_mode(node):
'instance_info/capabilities' of node. Otherwise it directly look for boot
mode hints into
-:param node: dictionnary.
+:param node: dictionary.
:returns: 'bios' or 'uefi'
"""
instance_info = node.get('instance_info', {})
@ -697,7 +697,7 @@ def get_node_boot_mode(node):
node_caps = parse_capabilities(node.get('properties', {}))
if _is_secure_boot(instance_info_caps, node_caps):
-LOG.debug('Deploy boot mode is implicitely uefi for because secure '
+LOG.debug('Deploy boot mode is implicitly uefi because secure '
'boot is activated.')
return 'uefi'
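The secure-boot-implies-UEFI rule in this hunk can be sketched as below. This is a simplified, assumed version of `get_node_boot_mode`: the real helper also consults instance and node property hints, and real node capabilities may arrive as a `key1:value1,key2:value2` string rather than a dict.

```python
# Simplified sketch (assumption, not the full IPA helper): when secure
# boot is requested via capabilities, the boot mode is implicitly uefi,
# since secure boot cannot function under legacy BIOS boot.
def get_node_boot_mode(node):
    caps = node.get('instance_info', {}).get('capabilities', {})
    if str(caps.get('secure_boot', '')).lower() == 'true':
        return 'uefi'
    return caps.get('boot_mode', 'bios')
```

Checking secure boot before any explicit `boot_mode` hint keeps a contradictory request (secure boot plus bios) from producing an unbootable configuration.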
@ -770,7 +770,7 @@ def determine_time_method():
:returns: "ntpdate" if ntpdate has been found, "chrony" if chrony
was located, and None if neither are located. If both tools
are present, "chrony" will supercede "ntpdate".
are present, "chrony" will supersede "ntpdate".
"""
try:
execute('chronyd', '-h')

@ -3,7 +3,7 @@ fixes:
- |
Adds an additional check if the ``smartctl`` utility is present from the
``smartmontools`` package, which performs an ATA disk specific check that
-should prevent ATA Secure Erase from being performed if a pass-thru
+should prevent ATA Secure Erase from being performed if a pass-through
device is detected that requires a non-ATA command signling sequence.
Devices such as these can be `smart` disk interfaces such as
RAID controllers and USB disk adapters, which can cause failures

@ -7,9 +7,9 @@ features:
useful to work around well-intentioned hardware which is auto-populating
all possible device into the UEFI nvram firmware in order to try and help
ensure the machine boots. Except, this can also mean any
-explict configuration attempt will fail. Operators needing this bypass
+explicit configuration attempt will fail. Operators needing this bypass
can use the ``ipa-ignore-bootloader-failure`` configuration option on the
PXE command line or utilize the ``ignore_bootloader_failure`` option
for the Ramdisk configuration.
-In a future version of ironic, this setting may be able to be overriden
+In a future version of ironic, this setting may be able to be overridden
by ironic node level configuration.

@ -3,7 +3,7 @@ other:
- |
Adds an explicit capture of connectivity failures in the heartbeat
process to provide a more verbose error message in line with what is
-occuring as opposed to just indicating that an error occured. This
+occurring as opposed to just indicating that an error occurred. This
new exception is called ``HeartbeatConnectionError`` and is likely only
going to be visible if there is a local connectivity failure such as a
router failure, switchport in a blocking state, or connection centered

@ -12,7 +12,7 @@ deprecations:
as of January 2019, CoreOS:
1) Current CoreOS images require 2GB of RAM to operate.
-As a result of the RAM requirement, it is problematic for continious
+As a result of the RAM requirement, it is problematic for continuous
integration testing to occur with the CoreOS based Ironic-Python-Agent
image in OpenStack testing infrastructure.

@ -5,6 +5,6 @@ fixes:
root volume uuid detection method via the ``findfs`` utility, which is
also already packaged in most distributions with ``lsblk``.
-This fallback was necesary as the ``lsblk`` command in ``TinyCore`` Linux,
+This fallback was necessary as the ``lsblk`` command in ``TinyCore`` Linux,
upon which TinyIPA is built, does not return data as expected for
volume UUID values.

@ -7,7 +7,7 @@ fixes:
to handle and account for multipaths based upon the MPIO data available.
This requires the ``multipath`` and ``multipathd`` utility to be present
in the ramdisk. These are supplied by the ``device-mapper-multipath`` or
-``multipath-tools`` packages, and are not requried for the agent's use.
+``multipath-tools`` packages, and are not required for the agent's use.
- |
Fixes non-ideal behavior when performing cleaning where Active/Active
MPIO devices would ultimately be cleaned once per IO path, instead of

@ -3,7 +3,7 @@ fixes:
- |
Fixes, or at least lessens the case where a running Ironic agent can stack
up numerous lookup requests against an Ironic deployment when a node is
-locked. In particular, this is beause the lookup also drives generation of
+locked. In particular, this is because the lookup also drives generation of
the agent token, which requires the conductor to allocate a worker, and
generate the token, and return the result to the API client.
Ironic's retry logic will now wait up to ``60`` seconds, and if an HTTP