Merge "conf: Use new-style choice values"

This commit is contained in:
Zuul 2018-10-05 18:32:55 +00:00 committed by Gerrit Code Review
commit 3d01793766
9 changed files with 155 additions and 195 deletions


@@ -24,13 +24,15 @@ Options under this group are used to define Nova API.
auth_opts = [
cfg.StrOpt("auth_strategy",
default="keystone",
choices=("keystone", "noauth2"),
choices=[
("keystone", "Use keystone for authentication."),
("noauth2", "Designed for testing only, as it does no actual "
"credential checking. 'noauth2' provides administrative "
"credentials only if 'admin' is specified as the username."),
],
deprecated_group="DEFAULT",
help="""
This determines the strategy to use for authentication: keystone or noauth2.
'noauth2' is designed for testing only, as it does no actual credential
checking. 'noauth2' provides administrative credentials only if 'admin' is
specified as the username.
Determine the strategy to use for authentication.
"""),
cfg.BoolOpt("use_forwarded_for",
default=False,
@@ -71,40 +73,33 @@ Possible values:
* Any string that represents zero or more versions, separated by spaces.
"""),
cfg.ListOpt('vendordata_providers',
item_type=cfg.types.String(choices=['StaticJSON', 'DynamicJSON']),
item_type=cfg.types.String(choices=[
('StaticJSON', 'Loads a JSON file from the path configured by '
'``[DEFAULT] vendordata_jsonfile_path`` and uses this as the '
'source for ``vendor_data.json`` and ``vendor_data2.json``.'),
('DynamicJSON', 'Builds a JSON file using values defined in '
'``vendordata_dynamic_targets``, which is documented separately '
'and uses this as the source for ``vendor_data2.json``.'),
]),
default=['StaticJSON'],
deprecated_group="DEFAULT",
help="""
A list of vendordata providers.
vendordata providers are how deployers can provide metadata via configdrive
and metadata that is specific to their deployment. There are currently two
supported providers: StaticJSON and DynamicJSON.
StaticJSON reads a JSON file configured by the flag vendordata_jsonfile_path
and places the JSON from that file into vendor_data.json and
vendor_data2.json.
DynamicJSON is configured via the vendordata_dynamic_targets flag, which is
documented separately. For each of the endpoints specified in that flag, a
section is added to the vendor_data2.json.
and metadata that is specific to their deployment.
For more information on the requirements for implementing a vendordata
dynamic endpoint, please see the vendordata.rst file in the nova developer
reference.
Possible values:
* A list of vendordata providers, with StaticJSON and DynamicJSON being
current options.
Related options:
* vendordata_dynamic_targets
* vendordata_dynamic_ssl_certfile
* vendordata_dynamic_connect_timeout
* vendordata_dynamic_read_timeout
* vendordata_dynamic_failure_fatal
* ``vendordata_dynamic_targets``
* ``vendordata_dynamic_ssl_certfile``
* ``vendordata_dynamic_connect_timeout``
* ``vendordata_dynamic_read_timeout``
* ``vendordata_dynamic_failure_fatal``
"""),
cfg.ListOpt('vendordata_dynamic_targets',
default=[],
@@ -267,8 +262,22 @@ False. If you have many cells, especially if you confine tenants to a
small subset of those cells, this should be True.
"""),
cfg.StrOpt("instance_list_cells_batch_strategy",
choices=("fixed", "distributed"),
default="distributed",
choices=[
("distributed", "``distributed`` will attempt to divide the "
"limit requested by the user by the number of cells in the "
"system. This requires counting the cells in the system "
"initially, which will not be refreshed until service restart "
"or SIGHUP. The actual batch size will be increased by 10% "
"over the result of ($limit / $num_cells)."),
("fixed", "``fixed`` will simply request fixed-size batches from "
"each cell, as defined by ``instance_list_cells_batch_fixed_"
"size``. If the limit is smaller than the batch size, the limit "
"will be used instead. If you do not wish batching to be used "
"at all, setting the fixed size equal to the ``max_limit`` "
"value will cause only one request per cell database to be "
"issued."),
],
help="""
This controls the method by which the API queries cell databases in
smaller batches during large instance list operations. If batching is
@@ -281,24 +290,6 @@ processing the results from the database which will not be returned to
the user. Any strategy will yield a batch size of at least 100 records,
to avoid a user causing many tiny database queries in their request.
``distributed`` (the default) will attempt to divide the limit
requested by the user by the number of cells in the system. This
requires counting the cells in the system initially, which will not be
refreshed until service restart or SIGHUP. The actual batch size will
be increased by 10% over the result of ($limit / $num_cells).
``fixed`` will simply request fixed-size batches from each cell, as
defined by ``instance_list_cells_batch_fixed_size``. If the limit is
smaller than the batch size, the limit will be used instead. If you do
not wish batching to be used at all, setting the fixed size equal to
the ``max_limit`` value will cause only one request per cell database
to be issued.
Possible values:
* distributed (default)
* fixed
Related options:
* instance_list_cells_batch_fixed_size


@@ -198,7 +198,10 @@ Related options:
"""),
cfg.StrOpt('preallocate_images',
default='none',
choices=('none', 'space'),
choices=[
('none', 'No storage provisioning is done up front'),
('space', 'Storage is fully allocated at instance start')
],
help="""
The image preallocation mode to use.
@@ -207,11 +210,6 @@ when the instance is initially provisioned. This ensures immediate feedback is
given if enough space isn't available. In addition, it should significantly
improve performance on writes to new blocks and may even improve I/O
performance to prewritten blocks due to reduced fragmentation.
Possible values:
* "none" => no storage provisioning is done up front
* "space" => storage is fully allocated at instance start
"""),
cfg.BoolOpt('use_cow_images',
default=True,
@@ -282,7 +280,12 @@ Unused unresized base images younger than this will not be removed.
"""),
cfg.StrOpt('pointer_model',
default='usbtablet',
choices=[None, 'ps2mouse', 'usbtablet'],
choices=[
('ps2mouse', 'Uses relative movement. Mouse connected by PS2'),
('usbtablet', 'Uses absolute movement. Tablet connected by USB'),
(None, 'Uses default behavior provided by drivers (mouse on PS2 '
'for libvirt x86)'),
],
help="""
Generic property to specify the pointer type.
@@ -292,13 +295,6 @@ example to provide a graphic tablet for absolute cursor movement.
If set, the 'hw_pointer_model' image property takes precedence over
this configuration option.
Possible values:
* None: Uses default behavior provided by drivers (mouse on PS2 for
libvirt x86)
* ps2mouse: Uses relative movement. Mouse connected by PS2
* usbtablet: Uses absolute movement. Tablet connected by USB
Related options:
* usbtablet must be configured with VNC enabled or SPICE enabled and SPICE
@@ -1058,25 +1054,23 @@ Possible values:
running_deleted_opts = [
cfg.StrOpt("running_deleted_instance_action",
default="reap",
choices=('noop', 'log', 'shutdown', 'reap'),
choices=[
('reap', 'Powers down the instances and deletes them'),
('log', 'Logs warning message about deletion of the resource'),
('shutdown', 'Powers down instances and marks them as '
'non-bootable which can be later used for debugging/analysis'),
('noop', 'Takes no action'),
],
help="""
The compute service periodically checks for instances that have been
deleted in the database but remain running on the compute node. The
above option enables action to be taken when such instances are
identified.
Possible values:
* reap: Powers down the instances and deletes them (default)
* log: Logs warning message about deletion of the resource
* shutdown: Powers down instances and marks them as non-bootable which can be later used for debugging/analysis
* noop: Takes no action
Related options:
* running_deleted_instance_poll_interval
* running_deleted_instance_timeout
* ``running_deleted_instance_poll_interval``
* ``running_deleted_instance_timeout``
"""),
cfg.IntOpt("running_deleted_instance_poll_interval",
default=1800,
@@ -1136,7 +1130,14 @@ Related options:
db_opts = [
cfg.StrOpt('osapi_compute_unique_server_name_scope',
default='',
choices=['', 'project', 'global'],
choices=[
('', 'An empty value means that no uniqueness check is done and '
'duplicate names are possible'),
('project', 'The instance name check is done only for instances '
'within the same project'),
('global', 'The instance name check is done for all instances '
'regardless of the project'),
],
help="""
Sets the scope of the check for unique instance names.
@@ -1146,15 +1147,6 @@ duplicate name will result in an ''InstanceExists'' error. The uniqueness is
case-insensitive. Setting this option can increase the usability for end
users as they don't have to distinguish among instances with the same name
by their IDs.
Possible values:
* '': An empty value means that no uniqueness check is done and duplicate
names are possible.
* "project": The instance name check is done only for instances within the
same project.
* "global": The instance name check is done for all instances regardless of
the project.
"""),
cfg.BoolOpt('enable_new_services',
default=True,


@@ -17,22 +17,20 @@ from oslo_config import cfg
config_drive_opts = [
cfg.StrOpt('config_drive_format',
default='iso9660',
choices=('iso9660', 'vfat'),
choices=[
('iso9660', 'A file system image standard that is widely '
'supported across operating systems.'),
('vfat', 'Provided for legacy reasons and to enable live '
'migration with the libvirt driver and non-shared storage')],
help="""
Configuration drive format
Configuration drive format that will contain metadata attached to the
instance when it boots.
Possible values:
* iso9660: A file system image standard that is widely supported across
operating systems. NOTE: Mind the libvirt bug
(https://bugs.launchpad.net/nova/+bug/1246201) - If your hypervisor
driver is libvirt, and you want live migrate to work without shared storage,
then use VFAT.
* vfat: For legacy reasons, you can configure the configuration drive to
use VFAT format instead of ISO 9660.
Due to a `libvirt bug <https://bugs.launchpad.net/nova/+bug/1246201>`_, you
should use ``vfat`` if you wish to live migrate and are not using shared
storage.
Related options:


@@ -451,21 +451,17 @@ Related options:
* live_migration_permit_post_copy
"""),
cfg.StrOpt('snapshot_image_format',
choices=('raw', 'qcow2', 'vmdk', 'vdi'),
help="""
choices=[
('raw', 'RAW disk format'),
('qcow2', 'KVM default disk format'),
('vmdk', 'VMware default disk format'),
('vdi', 'VirtualBox default disk format'),
],
help="""
Determine the snapshot image format when sending to the image service.
If set, this decides what format is used when sending the snapshot to the
image service.
If not set, defaults to same type as source image.
Possible values:
* ``raw``: RAW disk format
* ``qcow2``: KVM default disk format
* ``vmdk``: VMware default disk format
* ``vdi``: VirtualBox default disk format
* If not set, defaults to same type as source image.
image service. If not set, defaults to same type as source image.
"""),
cfg.StrOpt('disk_prefix',
help="""
@@ -491,22 +487,20 @@ Related options:
' soft reboot request is made. We fall back to hard reboot'
' if instance does not shut down within this window.'),
cfg.StrOpt('cpu_mode',
choices=('host-model', 'host-passthrough', 'custom', 'none'),
help="""
choices=[
('host-model', 'Clones the host CPU feature flags'),
('host-passthrough', 'Use the host CPU model exactly'),
('custom', 'Use the CPU model in ``[libvirt]cpu_model``'),
('none', "Don't set a specific CPU model. For instances with "
"``[libvirt] virt_type`` as KVM/QEMU, the default CPU model from "
"QEMU will be used, which provides a basic set of CPU features "
"that are compatible with most hosts"),
],
help="""
Is used to set the CPU mode an instance should have.
If virt_type="kvm|qemu", it will default to "host-model", otherwise it will
default to "none".
Possible values:
* ``host-model``: Clones the host CPU feature flags
* ``host-passthrough``: Use the host CPU model exactly
* ``custom``: Use a named CPU model
* ``none``: Don't set a specific CPU model. For instances with
``virt_type`` as KVM/QEMU, the default CPU model from QEMU will be used,
which provides a basic set of CPU features that are compatible with most
hosts.
If ``virt_type="kvm|qemu"``, it will default to ``host-model``, otherwise it
will default to ``none``.
Related options:
@@ -875,18 +869,16 @@ libvirt_imagecache_opts = [
libvirt_lvm_opts = [
cfg.StrOpt('volume_clear',
default='zero',
choices=('none', 'zero', 'shred'),
help="""
default='zero',
choices=[
('zero', 'Overwrite volumes with zeroes'),
('shred', 'Overwrite volume repeatedly'),
('none', 'Do not wipe deleted volumes'),
],
help="""
Method used to wipe ephemeral disks when they are deleted. Only takes effect
if LVM is set as backing storage.
Possible values:
* none - do not wipe deleted volumes
* zero - overwrite volumes with zeroes
* shred - overwrite volume repeatedly
Related options:
* images_type - must be set to ``lvm``


@@ -29,7 +29,15 @@ at https://docs.openstack.org/nova/latest/reference/notifications.html
ALL_OPTS = [
cfg.StrOpt(
'notify_on_state_change',
choices=(None, 'vm_state', 'vm_and_task_state'),
choices=[
(None, 'no notifications'),
('vm_state', 'notifications are sent with VM state transition '
'information in the ``old_state`` and ``state`` fields. The '
'``old_task_state`` and ``new_task_state`` fields will be set to '
'the current task_state of the instance'),
('vm_and_task_state', 'notifications are sent with VM and task '
'state transition information'),
],
deprecated_group='DEFAULT',
help="""
If set, send compute.instance.update notifications on
@@ -38,16 +46,6 @@ instance state changes.
Please refer to
https://docs.openstack.org/nova/latest/reference/notifications.html for
additional information on notifications.
Possible values:
* None - no notifications
* "vm_state" - notifications are sent with VM state transition information in
the ``old_state`` and ``state`` fields. The ``old_task_state`` and
``new_task_state`` fields will be set to the current task_state of the
instance.
* "vm_and_task_state" - notifications are sent with VM and task state
transition information.
"""),
cfg.StrOpt(
@@ -59,8 +57,14 @@ Possible values:
help="Default notification level for outgoing notifications."),
cfg.StrOpt(
'notification_format',
choices=['unversioned', 'versioned', 'both'],
default='both',
choices=[
('both', 'Both the legacy unversioned and the new versioned '
'notifications are emitted'),
('versioned', 'Only the new versioned notifications are emitted'),
('unversioned', 'Only the legacy unversioned notifications are '
'emitted'),
],
deprecated_group='DEFAULT',
help="""
Specifies which notification format shall be used by nova.
@@ -73,13 +77,6 @@ will be removed.
Note that notifications can be completely disabled by setting ``driver=noop``
in the ``[oslo_messaging_notifications]`` group.
Possible values:
* unversioned: Only the legacy unversioned notifications are emitted.
* versioned: Only the new versioned notifications are emitted.
* both: Both the legacy unversioned and the new versioned notifications are
emitted. (Default)
The list of versioned notifications is visible in
https://docs.openstack.org/nova/latest/reference/notifications.html
"""),


@@ -288,20 +288,18 @@ issues. Note that quotas are not updated on a periodic task, they will update
on a new reservation if max_age has passed since the last reservation.
"""),
cfg.StrOpt('driver',
default='nova.quota.DbQuotaDriver',
choices=('nova.quota.DbQuotaDriver',
'nova.quota.NoopQuotaDriver'),
help="""
default='nova.quota.DbQuotaDriver',
choices=[
('nova.quota.DbQuotaDriver', 'Stores quota limit information '
'in the database and relies on the ``quota_*`` configuration '
'options for default quota limit values. Counts quota usage '
'on-demand.'),
('nova.quota.NoopQuotaDriver', 'Ignores quota and treats all '
'resources as unlimited.'),
],
help="""
Provides abstraction for quota checks. Users can configure a specific
driver to use for quota checks.
Possible values:
* nova.quota.DbQuotaDriver: Stores quota limit information
in the database and relies on the quota_* configuration options for default
quota limit values. Counts quota usage on-demand.
* nova.quota.NoopQuotaDriver: Ignores quota and treats all resources as
unlimited.
"""),
cfg.BoolOpt('recheck_quota',
default=True,


@@ -18,7 +18,10 @@ from oslo_config import cfg
SERVICEGROUP_OPTS = [
cfg.StrOpt('servicegroup_driver',
default='db',
choices=['db', 'mc'],
choices=[
('db', 'Database ServiceGroup driver'),
('mc', 'Memcache ServiceGroup driver'),
],
help="""
This option specifies the driver to be used for the servicegroup service.
@@ -30,14 +33,9 @@ client driver automatically updates the compute worker status. There are
multiple backend implementations for this service: Database ServiceGroup driver
and Memcache ServiceGroup driver.
Possible Values:
* db : Database ServiceGroup driver
* mc : Memcache ServiceGroup driver
Related Options:
* service_down_time (maximum time since last check-in for up service)
* ``service_down_time`` (maximum time since last check-in for up service)
"""),
]


@@ -225,9 +225,10 @@ Related options:
"""),
cfg.ListOpt(
'auth_schemes',
item_type=types.String(
choices=['none', 'vencrypt']
),
item_type=types.String(choices=(
('none', 'Allow connection without authentication'),
('vencrypt', 'Use VeNCrypt authentication scheme'),
)),
default=['none'],
help="""
The authentication schemes to use with the compute node.
@@ -237,11 +238,6 @@ the proxy and the compute host. If multiple schemes are enabled, the first
matching scheme will be used, thus the strongest schemes should be listed
first.
Possible values:
* ``none``: allow connection without authentication
* ``vencrypt``: use VeNCrypt authentication scheme
Related options:
* ``[vnc]vencrypt_client_key``, ``[vnc]vencrypt_client_cert``: must also be set


@@ -162,7 +162,11 @@ session, which allows you to make concurrent XenAPI connections.
xenapi_vm_utils_opts = [
cfg.StrOpt('cache_images',
default='all',
choices=('all', 'some', 'none'),
choices=[
('all', 'Will cache all images'),
('some', 'Will only cache images that have the image_property '
'``cache_in_nova=True``'),
('none', 'Turns off caching entirely')],
help="""
Cache glance images locally.
@@ -170,13 +174,6 @@ The value for this option must be chosen from the choices listed
here. Configuring a value other than these will default to 'all'.
Note: There is nothing that deletes these images.
Possible values:
* `all`: will cache all images.
* `some`: will only cache images that have the
image_property `cache_in_nova=True`.
* `none`: turns off caching entirely.
"""),
cfg.IntOpt('image_compression_level',
min=1,
@@ -438,27 +435,28 @@ GlanceStore.
"""),
cfg.StrOpt('image_handler',
default='direct_vhd',
choices=('direct_vhd', 'vdi_local_dev', 'vdi_remote_stream'),
choices=[
('direct_vhd', 'This plugin directly processes the VHD files in '
'XenServer SR (Storage Repository). So this plugin only works '
'when the host\'s SR type is file system based e.g. ext, nfs.'),
('vdi_local_dev', 'This plugin implements an image handler which '
'attaches the instance\'s VDI as a local disk to the VM where '
'the OpenStack Compute service runs. It uploads the raw disk '
'to glance when creating image; when booting an instance from a '
'glance image, it downloads the image and streams it into the '
'disk which is attached to the compute VM.'),
('vdi_remote_stream', 'This plugin implements an image handler '
'which works as a proxy between glance and XenServer. The VHD '
'streams to XenServer via a remote import API supplied by XAPI '
'for image download; and for image upload, the VHD streams from '
'XenServer via a remote export API supplied by XAPI. This '
'plugin works for all SR types supported by XenServer.'),
],
help="""
The plugin used to handle image uploads and downloads.
Provide a short name representing an image driver required to
handle the image between compute host and glance.
Description for the allowed values:
* ``direct_vhd``: This plugin directly processes the VHD files in XenServer
SR (Storage Repository). So this plugin only works when the host's SR
type is file system based e.g. ext, nfs.
* ``vdi_local_dev``: This plugin implements an image handler which attaches
the instance's VDI as a local disk to the VM where the OpenStack Compute
service runs. It uploads the raw disk to glance when creating image;
When booting an instance from a glance image, it downloads the image and
streams it into the disk which is attached to the compute VM.
* ``vdi_remote_stream``: This plugin implements an image handler which works
as a proxy between glance and XenServer. The VHD streams to XenServer via
a remote import API supplied by XAPI for image download; and for image
upload, the VHD streams from XenServer via a remote export API supplied
by XAPI. This plugin works for all SR types supported by XenServer.
"""),
]