options:
  debug:
    type: boolean
    default: False
    description: Enable debug level logging.
  log-headers:
    type: boolean
    default: False
    description: Enable logging of all request headers.
  openstack-origin:
    type: string
    default: bobcat
    description: |
      Repository from which to install. May be one of the following:
      distro, ppa:somecustom/ppa, a deb url sources entry,
      or a supported Ubuntu Cloud Archive e.g.
      .
      cloud:<series>-<openstack-release>
      cloud:<series>-<openstack-release>/updates
      cloud:<series>-<openstack-release>/staging
      cloud:<series>-<openstack-release>/proposed
      .
      See https://wiki.ubuntu.com/OpenStack/CloudArchive for info on which
      cloud archives are available and supported.
      .
      NOTE: updating this setting to a source that is known to provide a
      later version of OpenStack will trigger a software upgrade unless
      action-managed-upgrade is set to True.
  action-managed-upgrade:
    type: boolean
    default: False
    description: |
      If True enables openstack upgrades for this charm via juju actions.
      You will still need to set openstack-origin to the new repository but
      instead of an upgrade running automatically across all units, it will
      wait for you to execute the openstack-upgrade action for this charm on
      each unit. If False it will revert to existing behavior of upgrading
      all units on config change.
  harden:
    type: string
    default:
    description: |
      Apply system hardening. Supports a space-delimited list of modules to
      run. Supported modules currently include os, ssh, apache and mysql.
  # General Swift Proxy config
  region:
    type: string
    default: RegionOne
    description: OpenStack region that this swift-proxy supports.
  bind-port:
    type: int
    default: 8080
    description: TCP port to listen on.
  workers:
    type: int
    default: 0
    description: |
      Number of TCP workers to launch (0 for the number of system cores).
  operator-roles:
    type: string
    default: "Member,Admin"
    description: Comma-separated list of Swift operator roles.
  auth-type:
    type: string
    default: tempauth
    description: |
      Auth method to use: tempauth, swauth or keystone. Note that swauth is
      not supported for OpenStack Train and later.
  swauth-admin-key:
    type: string
    default:
    description: |
      The secret key to use to authenticate as a swauth admin. Note that
      swauth is not supported for OpenStack Train and later.
  delay-auth-decision:
    type: boolean
    default: true
    description: Delay authentication to downstream WSGI services.
  node-timeout:
    type: int
    default: 60
    description: |
      How long the proxy server will wait on responses from the
      account/container/object servers.
  recoverable-node-timeout:
    type: int
    default: 30
    description: |
      How long the proxy server will wait for an initial response and to read
      a chunk of data from the object servers while serving GET / HEAD
      requests. Timeouts from these requests can be recovered from, so
      setting this to something lower than node-timeout would provide quicker
      error recovery while allowing a longer timeout for non-recoverable
      requests (PUTs).
  # Swift ring management config
  partition-power:
    type: int
    default: 16
    description: |
      This value needs to be set according to the parameters of the cluster
      being deployed. In order to achieve an optimal distribution of objects
      within your cluster without over-consuming system resources, it is
      important that this value not be too low or too high, but it must be
      high enough to account for future expansion of your cluster, since it
      cannot be changed once the rings have been built. A rough calculation
      for this value should be no less than log2(total_disks * 100).
  replicas:
    type: int
    default: 3
    description: Minimum replicas for each item stored in the cluster.
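  # A worked sizing sketch for partition-power (the disk counts below are
  # illustrative, not recommendations): for a cluster expected to grow to
  # roughly 60 disks, log2(60 * 100) = log2(6000) ~= 12.6, so a value of 13
  # would satisfy the rule of thumb above; the default of 16 leaves headroom
  # for about 2^16 / 100 = 655 disks.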
  replicas-account:
    type: int
    default:
    description: |
      Minimum replicas for each account stored in the cluster.
      .
      NOTE: use only when you want to overwrite the global 'replicas' option.
  replicas-container:
    type: int
    default:
    description: |
      Minimum replicas for each container stored in the cluster.
      .
      NOTE: use only when you want to overwrite the global 'replicas' option.
  min-hours:
    type: int
    default: 0
    description: |
      This is the Swift ring builder min_part_hours parameter. This setting
      represents the amount of time in hours that Swift will wait between
      subsequent ring re-balances in order to avoid large i/o loads as data
      is re-balanced when new devices are added to the cluster. Once your
      cluster has been built, you can set this to a higher value e.g. 1
      (upstream default). Note that changing this value will result in an
      attempt to re-balance and, if successful, rings will be redistributed.
  disable-ring-balance:
    type: boolean
    default: False
    description: |
      This provides similar support to min-hours but without having to modify
      the builders. If True, any changes to the builders will not result in a
      ring re-balance and sync until this value is set back to False.
  zone-assignment:
    type: string
    default: "manual"
    description: |
      Which policy to use when assigning new storage nodes to zones.
      .
      manual - Allow swift-storage services to request zone membership.
      auto - Assign new swift-storage units to zones automatically.
      .
      The configured replica minimum must be met by an equal number of
      storage zones before the storage ring will be initially balanced.
      Deployment requirements differ based on the zone-assignment policy
      configured, see this charm's README for details.
  # Manual Keystone config
  keystone-auth-host:
    type: string
    default:
    description: Keystone authentication host
  keystone-auth-port:
    default: 35357
    type: int
    description: Keystone authentication port
  keystone-auth-protocol:
    default: http
    type: string
    description: Keystone authentication protocol
  keystone-admin-tenant-name:
    default: service
    type: string
    description: Keystone admin tenant name
  keystone-admin-user:
    type: string
    default:
    description: Keystone admin username
  keystone-admin-password:
    type: string
    default:
    description: Keystone admin password
  # HA config
  swift-hash:
    type: string
    default:
    description: Hash to use across all swift-proxy servers - don't lose it.
  dns-ha:
    type: boolean
    default: False
    description: |
      Use DNS HA with MAAS 2.0. Note that if this is set, do not set the vip
      option below.
  vip:
    type: string
    default:
    description: |
      Virtual IP(s) to use to front API services in HA configuration.
      .
      If multiple networks are being used, a VIP should be provided for each
      network, separated by spaces.
  ha-bindiface:
    type: string
    default: eth0
    description: |
      Default network interface on which the HA cluster will bind for
      communication with the other members of the HA cluster.
  ha-mcastport:
    type: int
    default: 5414
    description: |
      Default multicast port number that will be used to communicate between
      HA cluster nodes.
  haproxy-server-timeout:
    type: int
    default:
    description: |
      Server timeout configuration in ms for haproxy, used in HA
      configurations. If not provided, default value of 90000ms is used.
  haproxy-client-timeout:
    type: int
    default:
    description: |
      Client timeout configuration in ms for haproxy, used in HA
      configurations. If not provided, default value of 90000ms is used.
  haproxy-queue-timeout:
    type: int
    default:
    description: |
      Queue timeout configuration in ms for haproxy, used in HA
      configurations. If not provided, default value of 9000ms is used.
  haproxy-connect-timeout:
    type: int
    default:
    description: |
      Connect timeout configuration in ms for haproxy, used in HA
      configurations. If not provided, default value of 9000ms is used.
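  # A hedged sketch of enabling the HA options above (the VIP and application
  # names are illustrative; Juju 3.x CLI syntax and the usual OpenStack charm
  # pattern of relating to the hacluster subordinate over the 'ha' endpoint
  # are assumed):
  #   juju config swift-proxy vip=10.20.30.40
  #   juju deploy hacluster swift-proxy-hacluster
  #   juju integrate swift-proxy:ha swift-proxy-hacluster:ha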
  # Network config (by default all access is over 'private-address')
  os-admin-network:
    type: string
    default:
    description: |
      The IP address and netmask of the OpenStack Admin network (e.g.
      192.168.0.0/24)
      .
      This network will be used for admin endpoints.
  os-internal-network:
    type: string
    default:
    description: |
      The IP address and netmask of the OpenStack Internal network (e.g.
      192.168.0.0/24)
      .
      This network will be used for internal endpoints.
  os-public-network:
    type: string
    default:
    description: |
      The IP address and netmask of the OpenStack Public network (e.g.
      192.168.0.0/24)
      .
      This network will be used for public endpoints.
  os-public-hostname:
    type: string
    default:
    description: |
      The hostname or address of the public endpoints created for swift-proxy
      in the keystone identity provider.
      .
      This value will be used for public endpoints. For example, an
      os-public-hostname set to 'files.example.com' will create the following
      public endpoint for the swift-proxy:
      .
      https://files.example.com:80/swift/v1
  os-internal-hostname:
    type: string
    default:
    description: |
      The hostname or address of the internal endpoints created for
      swift-proxy in the keystone identity provider.
      .
      This value will be used for internal endpoints. For example, an
      os-internal-hostname set to 'files.internal.example.com' will create
      the following internal endpoint for the swift-proxy:
      .
      https://files.internal.example.com:80/swift/v1
  os-admin-hostname:
    type: string
    default:
    description: |
      The hostname or address of the admin endpoints created for swift-proxy
      in the keystone identity provider.
      .
      This value will be used for admin endpoints. For example, an
      os-admin-hostname set to 'files.admin.example.com' will create the
      following admin endpoint for the swift-proxy:
      .
      https://files.admin.example.com:80/swift/v1
  prefer-ipv6:
    type: boolean
    default: False
    description: |
      If True enables IPv6 support. The charm will expect network interfaces
      to be configured with an IPv6 address. If set to False (default) IPv4
      is expected.
      .
      NOTE: these charms do not currently support IPv6 privacy extension. In
      order for this charm to function correctly, the privacy extension must
      be disabled and a non-temporary address must be configured/available on
      your network interface.
  ssl_cert:
    type: string
    default:
    description: |
      Base64 encoded SSL certificate to install and use for API ports.
      .
      juju config swift-proxy ssl_cert="$(cat cert | base64)" \
        ssl_key="$(cat key | base64)"
      .
      Setting this value (and ssl_key) will enable reverse proxying, point
      Swift's entry in the Keystone catalog to use https, and override any
      certificate and key issued by Keystone (if it is configured to do so).
  ssl_key:
    type: string
    default:
    description: |
      Base64 encoded SSL key to use with certificate specified as ssl_cert.
  ssl_ca:
    type: string
    default:
    description: |
      Base64-encoded SSL CA to use with the certificate and key provided -
      only required if you are providing a privately signed ssl_cert and
      ssl_key.
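  # A minimal sketch extending the ssl_cert example above to a privately
  # signed certificate, where ssl_ca is also required (the file paths are
  # illustrative):
  #   juju config swift-proxy \
  #     ssl_cert="$(base64 -w0 /etc/ssl/swift/cert.pem)" \
  #     ssl_key="$(base64 -w0 /etc/ssl/swift/key.pem)" \
  #     ssl_ca="$(base64 -w0 /etc/ssl/swift/ca.pem)"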
  # Monitoring config
  nagios_context:
    type: string
    default: "juju"
    description: |
      Used by the nrpe-external-master subordinate charm. A string that will
      be prepended to instance name to set the host name in nagios. So for
      instance the hostname would be something like 'juju-myservice-0'. If
      you are running multiple environments with the same services in them
      this allows you to differentiate between them.
  nagios_servicegroups:
    type: string
    default: ""
    description: |
      A comma-separated list of nagios servicegroups. If left empty, the
      nagios_context will be used as the servicegroup.
  rabbit-user:
    type: string
    default: swift
    description: Username used to access rabbitmq queue.
  rabbit-vhost:
    type: string
    default: openstack
    description: Rabbitmq vhost name.
  statsd-host:
    default: ''
    type: string
    description: |
      Enable statsd metrics to be sent to the specified host. If this value
      is empty, statsd logging will be disabled.
  statsd-port:
    default: 3125
    type: int
    description: |
      Destination port on the provided statsd host to send samples to. Only
      takes effect if statsd-host is set.
  statsd-sample-rate:
    default: 1.0
    type: float
    description: |
      Sample rate determines what percentage of the metric points a client
      should send to the server. Only takes effect if statsd-host is set.
  static-large-object-segments:
    default: 0
    type: int
    description: |
      Enable Static Large Objects (SLO) support. This allows the user to
      upload several object segments concurrently, after which a manifest is
      uploaded that describes how to concatenate them, enabling a single
      large object to be downloaded.
      .
      This option sets the maximum number of object segments allowed per
      large object, allowing control over the maximum large object size. The
      default minimum segment size is 1MB, while the maximum segment size
      corresponds to the largest object swift is configured to support (5GB
      by default).
      .
      Ex. Setting this to 1000 would allow up to 1000 5GB object segments to
      be uploaded for a maximum large object size of 5TB.
  enable-multi-region:
    type: boolean
    default: False
    description: |
      Enables the Swift Global Cluster feature as described at
      https://docs.openstack.org/swift/latest/overview_global_cluster.html
      Should be used in conjunction with the 'read-affinity',
      'write-affinity' and 'write-affinity-node-count' options.
  read-affinity:
    type: string
    default:
    description: |
      Which backend servers to prefer on reads. Format is r<N> for region N
      or r<N>z<M> for region N, zone M. The value after the equals sign is
      the priority; lower numbers are higher priority.
      .
      For example, to first read from region 1 zone 1, then region 1 zone 2,
      then anything in region 2, then everything else -
      read_affinity = r1z1=100, r1z2=200, r2=300
      .
      Default is empty, meaning no preference.
      .
      NOTE: use only when 'enable-multi-region=True'
  write-affinity:
    type: string
    default:
    description: |
      This setting lets you trade data distribution for throughput. It makes
      the proxy server prefer local back-end servers for object PUT requests
      over non-local ones. Note that only object PUT requests are affected by
      the write_affinity setting; POST, GET, HEAD, DELETE, OPTIONS, and
      account/container PUT requests are not affected. The format is r<N> for
      region N. If this is set, then when handling an object PUT request,
      some number (see the write_affinity_node_count setting) of local
      backend servers will be tried before any non-local ones.
      .
      For example, to try to write to regions 1 and 2 before writing to any
      other nodes - write_affinity = r1, r2
      .
      NOTE: use only when 'enable-multi-region=True'
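  # A hedged Global Cluster sketch combining the affinity options (region and
  # zone numbers are illustrative; 'write-affinity-node-count' is described
  # next): prefer region 1 for reads and writes before spilling over to
  # region 2:
  #   juju config swift-proxy enable-multi-region=True \
  #     read-affinity="r1z1=100, r1z2=200, r2=300" \
  #     write-affinity="r1, r2" \
  #     write-affinity-node-count="2 * replicas"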
  write-affinity-node-count:
    type: string
    default:
    description: |
      This setting is only useful in conjunction with write_affinity; it
      governs how many local object servers will be tried before falling back
      to non-local ones.
      .
      For example, assuming 3 replicas and 'write-affinity: r1', then
      'write-affinity-node-count: 2 * replicas' will make object PUTs try
      storing the object's replicas on up to 6 disks.
      .
      NOTE: use only when 'enable-multi-region=True'
  use-policyd-override:
    type: boolean
    default: False
    description: |
      If True then use the resource file named 'policyd-override' to install
      override YAML files in the service's policy.d directory. The resource
      file should be a ZIP file containing at least one yaml file with a
      .yaml or .yml extension. If False then remove the overrides.
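  # A hedged sketch of applying a policy override via the 'policyd-override'
  # resource named above (the file names are illustrative):
  #   zip overrides.zip my-override.yaml
  #   juju attach-resource swift-proxy policyd-override=overrides.zip
  #   juju config swift-proxy use-policyd-override=True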