Make cinder volume optional
The purpose of this patch is primarily to unblock the stable/newton gate
and secondarily to fix the performance issues seen with large clusters,
where Magnum creates a volume per node.

In the swarm_atomic and k8s_atomic drivers, container images are stored
in a dedicated cinder volume per cluster node. This architecture has
proven to be a scalability bottleneck.

Make the use of cinder volumes for container images an opt-in option.
If docker-volume-size is not specified, no cinder volumes will be
created. Previously, if docker-volume-size wasn't specified, the default
value was 25. To use cinder volumes for container storage, the user
interacts with magnum as before (the valid values are integers starting
from 1).

Backport: I3394c62a43bbf950b7cf0b86a71b1d9b0481d68f

Conflicts:
* magnum/drivers/common/swarm_fedora_template_def.py
  Edit magnum/drivers/swarm_fedora_atomic_v1/template_def.py instead.
* magnum/tests/unit/conductor/handlers/test_k8s_cluster_conductor.py
  Remove invalid unit test for docker_volume_size.
  Fix unit test which references the driver class.

Additionally, remove the use of cinder volumes in functional tests.

2nd Backport: Ia3b14603c5fc516b00c862c8b9257e0fd23d4b9e
Remove service from manager class for tempest.

3rd Backport: I67f79efd2049c05d36ea56691b664417ed358fd8

Closes-Bug: #1638006
Related-Bug: #1422831
Change-Id: I219f02dc1861bd4b6b9c59ecc6af448d09004f18
This commit is contained in: parent 6c6aa74ccd, commit 73212dbe39
@@ -204,11 +204,11 @@ This is a mandatory parameter and there is no default value.
   is 'None'.
 
 --docker-volume-size \<docker-volume-size\>
-  The size in GB for the local storage on each server for the Docker
-  daemon to cache the images and host the containers. Cinder volumes
-  provide the storage. The default is 25 GB. For the 'devicemapper'
-  storage driver, the minimum value is 3GB. For the 'overlay' storage
-  driver, the minimum value is 1GB.
+  If specified, container images will be stored in a cinder volume of the
+  specified size in GB. Each cluster node will have a volume attached of
+  the above size. If not specified, images will be stored in the compute
+  instance's local disk. For the 'devicemapper' storage driver, the minimum
+  value is 3GB. For the 'overlay' storage driver, the minimum value is 1GB.
 
 --docker-storage-driver \<docker-storage-driver\>
   The name of a driver to manage the storage for the images and the
@@ -353,8 +353,8 @@ Network
   needed.
 
 Storage
-  Cinder provides the block storage that is used for both hosting the
-  containers as well as persistent storage for the containers.
+  Cinder provides the block storage that can be used to host the
+  containers and as persistent storage for the containers.
 
 Security
   Barbican provides the storage of secrets such as certificates used
@@ -857,14 +857,8 @@ Volume driver (volume-driver)
 Storage driver (docker-storage-driver)
   Specified in the ClusterTemplate to select the Docker storage driver. The
   supported storage drivers are 'devicemapper' and 'overlay', with
-  'devicemapper' being the default. You may get better performance with
-  the overlay driver depending on your use patterns, with the requirement
-  that SELinux must be disabled inside the containers, although it still runs
-  in enforcing mode on the cluster servers. Magnum will create a Cinder volume
-  for each node, mount it on the node and configure it as a logical
-  volume named 'docker'. The Docker daemon will run the selected device
-  driver to manage this logical volume and host the container writable
-  layer there. Refer to the `Storage`_ section for more details.
+  'devicemapper' being the default. Refer to the `Storage`_ section for more
+  details.
 
 Image (image-id)
   Specified in the ClusterTemplate to indicate the image to boot the servers.
@@ -1002,15 +996,8 @@ Volume driver (volume-driver)
 Storage driver (docker-storage-driver)
   Specified in the ClusterTemplate to select the Docker storage driver. The
   supported storage driver are 'devicemapper' and 'overlay', with
-  'devicemapper' being the default. You may get better performance with
-  the 'overlay' driver depending on your use patterns, with the requirement
-  that SELinux must be disabled inside the containers, although it still runs
-  in enforcing mode on the cluster servers. Magnum will create a Cinder volume
-  for each node and attach it as a device. Then depending on the driver,
-  additional configuration is performed to make the volume available to
-  the particular driver. For instance, 'devicemapper' uses LVM; therefore
-  Magnum will create physical volume and logical volume using the attached
-  device. Refer to the `Storage`_ section for more details.
+  'devicemapper' being the default. Refer to the `Storage`_ section for more
+  details.
 
 Image (image-id)
   Specified in the ClusterTemplate to indicate the image to boot the servers
@@ -1985,25 +1972,32 @@ configured in the Docker daemon through a number of storage options.
 When the container is removed, the storage allocated to the particular
 container is also deleted.
 
-To manage this space in a flexible manner independent of the Nova
-instance flavor, Magnum creates a separate Cinder block volume for each
-node in the cluster, mounts it to the node and configures it to be used as
-ephemeral storage. Users can specify the size of the Cinder volume with
-the ClusterTemplate attribute 'docker-volume-size'. The default size is 5GB.
-Currently the block size is fixed at cluster creation time, but future
-lifecycle operations may allow modifying the block size during the
-life of the cluster.
+Magnum can manage the containers' filesystem in two ways, storing them
+on the local disk of the compute instances or in a separate Cinder block
+volume for each node in the cluster. In the latter case, Magnum mounts
+it to the node and configures it to be used as ephemeral storage. Users
+can specify the size of the Cinder volume with the ClusterTemplate
+attribute 'docker-volume-size'. Currently the block size is fixed at
+cluster creation time, but future lifecycle operations may allow
+modifying the block size during the life of the cluster.
 
-To use the Cinder block storage, there is a number of Docker
-storage drivers available. Only 'devicemapper' is supported as the
-storage driver but other drivers such as 'OverlayFS' are being
-considered. There are important trade-off between the choices
-for the storage drivers that should be considered. For instance,
-'OperlayFS' may offer better performance, but it may not support
-the filesystem metadata needed to use SELinux, which is required
-to support strong isolation between containers running in the same
-cluster. Using the 'devicemapper' driver does allow the use of SELinux.
+Both local disk and the Cinder block storage can be used with a number
+of Docker storage drivers available.
+
+* 'devicemapper': When used with a dedicated Cinder volume it is
+  configured using direct-lvm and offers very good performance. If it's
+  used with the compute instance's local disk, performance is poor
+  because the disk is configured as a loopback device; therefore it's
+  not recommended for production environments. Using the 'devicemapper'
+  driver does allow the use of SELinux.
+
+* 'overlay': When used with a dedicated Cinder volume offers as good
+  or better performance than devicemapper. If used on the local disk of
+  the compute instance (especially with high IOPS drives) you can get
+  significant performance gains. However, for kernel versions less than
+  4.9, SELinux must be disabled inside the containers resulting in worse
+  container isolation, although it still runs in enforcing mode on the
+  cluster compute instances.
 
 Persistent storage
 ------------------
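The sizing rules documented above (no volume when the size is unset; a 3GB minimum for 'devicemapper' and 1GB for 'overlay') can be sketched as a small validation helper. This is an illustration only; the function name and the ValueError convention are assumptions, not Magnum API:

```python
# Illustrative only: mirrors the documented rules for docker-volume-size.
# None means no Cinder volume (images go on the instance's local disk).
MIN_SIZE_GB = {"devicemapper": 3, "overlay": 1}

def uses_cinder_volume(docker_volume_size, storage_driver="devicemapper"):
    """Return True if a Cinder volume would be created, raising on sizes
    below the documented minimum for the chosen storage driver."""
    if docker_volume_size is None:
        return False  # opt-in: no size given, use the local disk
    minimum = MIN_SIZE_GB[storage_driver]
    if docker_volume_size < minimum:
        raise ValueError(
            "%s requires at least %d GB" % (storage_driver, minimum))
    return True
```

For example, `uses_cinder_volume(None)` is the opt-out path introduced by this patch, while `uses_cinder_volume(25)` matches the old default behaviour.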
@@ -80,17 +80,19 @@ class K8sFedoraTemplateDefinition(k8s_template_def.K8sTemplateDefinition):
 
     def get_env_files(self, cluster_template):
         env_files = []
-        if cluster_template.master_lb_enabled:
-            env_files.append(
-                template_def.COMMON_ENV_PATH + 'with_master_lb.yaml')
-        else:
-            env_files.append(
-                template_def.COMMON_ENV_PATH + 'no_master_lb.yaml')
-        if cluster_template.floating_ip_enabled:
-            env_files.append(
-                template_def.COMMON_ENV_PATH + 'enable_floating_ip.yaml')
-        else:
-            env_files.append(
-                template_def.COMMON_ENV_PATH + 'disable_floating_ip.yaml')
 
-        return env_files
+        if cluster_template.docker_volume_size is None:
+            env_files.append('no_volume.yaml')
+        else:
+            env_files.append('with_volume.yaml')
+
+        if cluster_template.master_lb_enabled:
+            env_files.append('with_master_lb.yaml')
+        else:
+            env_files.append('no_master_lb.yaml')
+        if cluster_template.floating_ip_enabled:
+            env_files.append('enable_floating_ip.yaml')
+        else:
+            env_files.append('disable_floating_ip.yaml')
+
+        return [template_def.COMMON_ENV_PATH + ef for ef in env_files]
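The env-file selection shown in this diff can be exercised in isolation. Below is a simplified standalone sketch of the same logic; StubTemplate is a stand-in for the real ClusterTemplate object, not Magnum code:

```python
# Simplified restatement of get_env_files() from the diff above.
COMMON_ENV_PATH = '../../common/templates/environments/'

class StubTemplate:
    """Minimal stand-in exposing only the attributes the logic reads."""
    def __init__(self, docker_volume_size, master_lb_enabled,
                 floating_ip_enabled):
        self.docker_volume_size = docker_volume_size
        self.master_lb_enabled = master_lb_enabled
        self.floating_ip_enabled = floating_ip_enabled

def get_env_files(t):
    env_files = []
    # New in this patch: volume env file chosen by docker_volume_size.
    if t.docker_volume_size is None:
        env_files.append('no_volume.yaml')
    else:
        env_files.append('with_volume.yaml')
    if t.master_lb_enabled:
        env_files.append('with_master_lb.yaml')
    else:
        env_files.append('no_master_lb.yaml')
    if t.floating_ip_enabled:
        env_files.append('enable_floating_ip.yaml')
    else:
        env_files.append('disable_floating_ip.yaml')
    return [COMMON_ENV_PATH + ef for ef in env_files]
```

With `docker_volume_size=None` the returned list starts with `no_volume.yaml`, which is what turns the Cinder resources into no-ops.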
@@ -0,0 +1,4 @@
+# Environment file to NOT use a cinder volume to store containers
+resource_registry:
+  "Magnum::Optional::Cinder::Volume": "OS::Heat::None"
+  "Magnum::Optional::Cinder::VolumeAttachment": "OS::Heat::None"
@@ -0,0 +1,4 @@
+# Environment file to use a cinder volume to store containers
+resource_registry:
+  "Magnum::Optional::Cinder::Volume": "OS::Cinder::Volume"
+  "Magnum::Optional::Cinder::VolumeAttachment": "OS::Cinder::VolumeAttachment"
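These two environment files work because Heat resolves custom resource types through the resource_registry before the stack is created; mapping a type to OS::Heat::None makes it a no-op. A toy Python model of that lookup (this is not Heat's actual implementation, just the idea):

```python
# Toy model of Heat's resource_registry substitution: the templates
# declare Magnum::Optional::Cinder::* types, and the environment file
# selected by get_env_files() decides what they resolve to.
NO_VOLUME = {
    "Magnum::Optional::Cinder::Volume": "OS::Heat::None",
    "Magnum::Optional::Cinder::VolumeAttachment": "OS::Heat::None",
}
WITH_VOLUME = {
    "Magnum::Optional::Cinder::Volume": "OS::Cinder::Volume",
    "Magnum::Optional::Cinder::VolumeAttachment":
        "OS::Cinder::VolumeAttachment",
}

def resolve(resource_type, registry):
    # Unmapped types fall through unchanged, as they do in Heat.
    return registry.get(resource_type, resource_type)
```

So the same template is reused for both cases; only the environment file changes what `Magnum::Optional::Cinder::Volume` actually creates.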
@@ -2,30 +2,32 @@
 
 . /etc/sysconfig/heat-params
 
-if [ "$ENABLE_CINDER" == "False" ]; then
-    # FIXME(yuanying): Use ephemeral disk for docker storage
-    # Currently Ironic doesn't support cinder volumes,
-    # so we must use preserved ephemeral disk instead of a cinder volume.
-    device_path=$(readlink -f /dev/disk/by-label/ephemeral0)
-else
-    attempts=60
-    while [ ${attempts} -gt 0 ]; do
-        device_name=$(ls /dev/disk/by-id | grep ${DOCKER_VOLUME:0:20}$)
-        if [ -n "${device_name}" ]; then
-            break
-        fi
-        echo "waiting for disk device"
-        sleep 0.5
-        udevadm trigger
-        let attempts--
-    done
-
-    if [ -z "${device_name}" ]; then
-        echo "ERROR: disk device does not exist" >&2
-        exit 1
-    fi
-
-    device_path=/dev/disk/by-id/${device_name}
-fi
+if [ -n "$DOCKER_VOLUME_SIZE" ] && [ "$DOCKER_VOLUME_SIZE" -gt 0 ]; then
+    if [ "$ENABLE_CINDER" == "False" ]; then
+        # FIXME(yuanying): Use ephemeral disk for docker storage
+        # Currently Ironic doesn't support cinder volumes,
+        # so we must use preserved ephemeral disk instead of a cinder volume.
+        device_path=$(readlink -f /dev/disk/by-label/ephemeral0)
+    else
+        attempts=60
+        while [ ${attempts} -gt 0 ]; do
+            device_name=$(ls /dev/disk/by-id | grep ${DOCKER_VOLUME:0:20}$)
+            if [ -n "${device_name}" ]; then
+                break
+            fi
+            echo "waiting for disk device"
+            sleep 0.5
+            udevadm trigger
+            let attempts--
+        done
+
+        if [ -z "${device_name}" ]; then
+            echo "ERROR: disk device does not exist" >&2
+            exit 1
+        fi
+
+        device_path=/dev/disk/by-id/${device_name}
+    fi
+fi
 
 $configure_docker_storage_driver
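The udev wait loop above polls up to 60 times, sleeping 0.5 s between attempts and failing loudly if the device never appears. The same retry pattern in Python, with the device probe injected so it can be exercised without real hardware (the names here are illustrative, not Magnum code):

```python
import time

def wait_for_device(probe, attempts=60, delay=0.5):
    """Poll `probe` until it returns a non-empty device name or the
    attempts run out. Mirrors the shell loop above: sleep between
    tries, raise if nothing appears."""
    while attempts > 0:
        name = probe()
        if name:
            return name
        time.sleep(delay)
        attempts -= 1
    raise RuntimeError("disk device does not exist")
```

Injecting the probe (here, a callable instead of `ls /dev/disk/by-id | grep ...`) is what makes the timeout behaviour easy to test.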
@@ -15,9 +15,11 @@ configure_overlay () {
 
     rm -rf /var/lib/docker/*
 
-    mkfs.xfs -f ${device_path}
-    echo "${device_path} /var/lib/docker xfs defaults 0 0" >> /etc/fstab
-    mount -a
+    if [ -n "$DOCKER_VOLUME_SIZE" ] && [ "$DOCKER_VOLUME_SIZE" -gt 0 ]; then
+        mkfs.xfs -f ${device_path}
+        echo "${device_path} /var/lib/docker xfs defaults 0 0" >> /etc/fstab
+        mount -a
+    fi
 
     echo "STORAGE_DRIVER=overlay" > /etc/sysconfig/docker-storage-setup
 
@@ -31,8 +33,10 @@ configure_overlay () {
 configure_devicemapper () {
     clear_docker_storage_congiguration
 
-    pvcreate -f ${device_path}
-    vgcreate docker ${device_path}
+    if [ -n "$DOCKER_VOLUME_SIZE" ] && [ "$DOCKER_VOLUME_SIZE" -gt 0 ]; then
+        pvcreate -f ${device_path}
+        vgcreate docker ${device_path}
 
-    echo "VG=docker" > /etc/sysconfig/docker-storage-setup
+        echo "VG=docker" > /etc/sysconfig/docker-storage-setup
+    fi
 }
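Both fragments now gate their work on the same shell guard, `[ -n "$DOCKER_VOLUME_SIZE" ] && [ "$DOCKER_VOLUME_SIZE" -gt 0 ]`. A Python equivalent of that predicate, showing why the new Heat default of 0 results in no dedicated volume being configured (the helper name is illustrative):

```python
def dedicated_volume_requested(docker_volume_size):
    """Python equivalent of the shell guard used in the fragments.
    The value arrives as a string from /etc/sysconfig/heat-params:
    empty or "0" (the new template default) means no dedicated volume,
    so mkfs/pvcreate/vgcreate are skipped entirely."""
    return bool(docker_volume_size) and int(docker_volume_size) > 0
```

This is the mechanism that makes the feature opt-in end to end: the Heat parameter defaults to 0, and every fragment treats 0 as "leave the local disk alone".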
@@ -13,6 +13,7 @@ write_files:
         KUBE_ALLOW_PRIV="$KUBE_ALLOW_PRIV"
         ENABLE_CINDER="$ENABLE_CINDER"
         DOCKER_VOLUME="$DOCKER_VOLUME"
+        DOCKER_VOLUME_SIZE="$DOCKER_VOLUME_SIZE"
         DOCKER_STORAGE_DRIVER="$DOCKER_STORAGE_DRIVER"
         NETWORK_DRIVER="$NETWORK_DRIVER"
         FLANNEL_NETWORK_CIDR="$FLANNEL_NETWORK_CIDR"
@@ -13,6 +13,7 @@ write_files:
         ETCD_SERVER_IP="$ETCD_SERVER_IP"
         ENABLE_CINDER="$ENABLE_CINDER"
         DOCKER_VOLUME="$DOCKER_VOLUME"
+        DOCKER_VOLUME_SIZE="$DOCKER_VOLUME_SIZE"
         DOCKER_STORAGE_DRIVER="$DOCKER_STORAGE_DRIVER"
         NETWORK_DRIVER="$NETWORK_DRIVER"
         REGISTRY_ENABLED="$REGISTRY_ENABLED"
@@ -92,7 +92,7 @@ parameters:
     description: >
       size of a cinder volume to allocate to docker for container/image
      storage
-    default: 25
+    default: 0
 
   docker_storage_driver:
     type: string
@@ -230,6 +230,7 @@ resources:
           "$KUBE_NODE_IP": {get_attr: [kube_master_eth0, fixed_ips, 0, ip_address]}
           "$KUBE_ALLOW_PRIV": {get_param: kube_allow_priv}
           "$DOCKER_VOLUME": {get_resource: docker_volume}
+          "$DOCKER_VOLUME_SIZE": {get_param: docker_volume_size}
           "$DOCKER_STORAGE_DRIVER": {get_param: docker_storage_driver}
           "$NETWORK_DRIVER": {get_param: network_driver}
           "$FLANNEL_NETWORK_CIDR": {get_param: flannel_network_cidr}
@@ -442,12 +443,12 @@ resources:
   #
 
   docker_volume:
-    type: OS::Cinder::Volume
+    type: Magnum::Optional::Cinder::Volume
     properties:
       size: {get_param: docker_volume_size}
 
   docker_volume_attach:
-    type: OS::Cinder::VolumeAttachment
+    type: Magnum::Optional::Cinder::VolumeAttachment
     properties:
       instance_uuid: {get_resource: kube_master}
       volume_id: {get_resource: docker_volume}
@@ -227,6 +227,7 @@ resources:
           $KUBE_NODE_IP: {get_attr: [kube_minion_eth0, fixed_ips, 0, ip_address]}
           $ETCD_SERVER_IP: {get_param: etcd_server_ip}
           $DOCKER_VOLUME: {get_resource: docker_volume}
+          $DOCKER_VOLUME_SIZE: {get_param: docker_volume_size}
           $DOCKER_STORAGE_DRIVER: {get_param: docker_storage_driver}
           $NETWORK_DRIVER: {get_param: network_driver}
           $REGISTRY_ENABLED: {get_param: registry_enabled}
@@ -410,12 +411,12 @@ resources:
   #
 
   docker_volume:
-    type: OS::Cinder::Volume
+    type: Magnum::Optional::Cinder::Volume
     properties:
       size: {get_param: docker_volume_size}
 
   docker_volume_attach:
-    type: OS::Cinder::VolumeAttachment
+    type: Magnum::Optional::Cinder::VolumeAttachment
     properties:
       instance_uuid: {get_resource: kube-minion}
       volume_id: {get_resource: docker_volume}
@@ -100,7 +100,7 @@ parameters:
     description: >
       size of a cinder volume to allocate to docker for container/image
       storage
-    default: 25
+    default: 0
 
   docker_storage_driver:
     type: string
@@ -430,6 +430,7 @@ resources:
       master_flavor: {get_param: master_flavor}
       external_network: {get_param: external_network}
      kube_allow_priv: {get_param: kube_allow_priv}
+      docker_volume_size: {get_param: docker_volume_size}
       docker_storage_driver: {get_param: docker_storage_driver}
       wait_condition_timeout: {get_param: wait_condition_timeout}
       network_driver: {get_param: network_driver}
@@ -486,6 +487,7 @@ resources:
       etcd_server_ip: {get_attr: [etcd_address_switch, private_ip]}
       external_network: {get_param: external_network}
       kube_allow_priv: {get_param: kube_allow_priv}
+      docker_volume_size: {get_param: docker_volume_size}
       docker_storage_driver: {get_param: docker_storage_driver}
       wait_condition_timeout: {get_param: wait_condition_timeout}
       registry_enabled: {get_param: registry_enabled}
@@ -35,6 +35,12 @@ parameters:
     constraints:
       - allowed_values: ["true", "false"]
 
+  docker_volume_size:
+    type: number
+    description: >
+      size of a cinder volume to allocate to docker for container/image
+      storage
+
   docker_storage_driver:
     type: string
     description: docker storage driver name
@@ -222,6 +228,7 @@ resources:
           "$KUBE_API_PORT": {get_param: kubernetes_port}
           "$KUBE_ALLOW_PRIV": {get_param: kube_allow_priv}
           "$DOCKER_VOLUME": 'None'
+          "$DOCKER_VOLUME_SIZE": {get_param: docker_volume_size}
           "$DOCKER_STORAGE_DRIVER": {get_param: docker_storage_driver}
           "$NETWORK_DRIVER": {get_param: network_driver}
           "$FLANNEL_NETWORK_CIDR": {get_param: flannel_network_cidr}
@@ -30,6 +30,12 @@ parameters:
     constraints:
       - allowed_values: ["true", "false"]
 
+  docker_volume_size:
+    type: number
+    description: >
+      size of a cinder volume to allocate to docker for container/image
+      storage
+
   docker_storage_driver:
     type: string
     description: docker storage driver name
@@ -219,6 +225,7 @@ resources:
           $KUBE_API_PORT: {get_param: kubernetes_port}
           $ETCD_SERVER_IP: {get_param: etcd_server_ip}
           $DOCKER_VOLUME: 'None'
+          $DOCKER_VOLUME_SIZE: {get_param: docker_volume_size}
           $DOCKER_STORAGE_DRIVER: {get_param: docker_storage_driver}
           $NETWORK_DRIVER: {get_param: network_driver}
           $REGISTRY_ENABLED: {get_param: registry_enabled}
@@ -118,10 +118,19 @@ class AtomicSwarmTemplateDefinition(template_def.BaseTemplateDefinition):
                                **kwargs)
 
     def get_env_files(self, cluster_template):
-        if cluster_template.master_lb_enabled:
-            return [template_def.COMMON_ENV_PATH + 'with_master_lb.yaml']
-        else:
-            return [template_def.COMMON_ENV_PATH + 'no_master_lb.yaml']
+        env_files = []
+
+        if cluster_template.docker_volume_size is None:
+            env_files.append('no_volume.yaml')
+        else:
+            env_files.append('with_volume.yaml')
+
+        if cluster_template.master_lb_enabled:
+            env_files.append('with_master_lb.yaml')
+        else:
+            env_files.append('no_master_lb.yaml')
+
+        return [template_def.COMMON_ENV_PATH + ef for ef in env_files]
 
     @property
     def driver_module_path(self):
@@ -118,7 +118,7 @@ parameters:
     description: >
       size of a cinder volume to allocate to docker for container/image
       storage
-    default: 25
+    default: 0
 
   docker_storage_driver:
     type: string
@@ -10,6 +10,7 @@ write_files:
         WAIT_CURL="$WAIT_CURL"
         ETCD_DISCOVERY_URL="$ETCD_DISCOVERY_URL"
         DOCKER_VOLUME="$DOCKER_VOLUME"
+        DOCKER_VOLUME_SIZE="$DOCKER_VOLUME_SIZE"
         DOCKER_STORAGE_DRIVER="$DOCKER_STORAGE_DRIVER"
         HTTP_PROXY="$HTTP_PROXY"
         HTTPS_PROXY="$HTTPS_PROXY"
@@ -9,6 +9,7 @@ write_files:
         WAIT_HANDLE_TOKEN="$WAIT_HANDLE_TOKEN"
         WAIT_CURL="$WAIT_CURL"
         DOCKER_VOLUME="$DOCKER_VOLUME"
+        DOCKER_VOLUME_SIZE="$DOCKER_VOLUME_SIZE"
         DOCKER_STORAGE_DRIVER="$DOCKER_STORAGE_DRIVER"
         HTTP_PROXY="$HTTP_PROXY"
         HTTPS_PROXY="$HTTPS_PROXY"
@@ -204,6 +204,7 @@ resources:
           "$WAIT_HANDLE_TOKEN": {get_attr: [master_wait_handle, token]}
           "$WAIT_CURL": {get_attr: [master_wait_handle, curl_cli]}
           "$DOCKER_VOLUME": {get_resource: docker_volume}
+          "$DOCKER_VOLUME_SIZE": {get_param: docker_volume_size}
           "$DOCKER_STORAGE_DRIVER": {get_param: docker_storage_driver}
           "$ETCD_DISCOVERY_URL": {get_param: discovery_url}
           "$HTTP_PROXY": {get_param: http_proxy}
@@ -437,12 +438,12 @@ resources:
   #
 
   docker_volume:
-    type: OS::Cinder::Volume
+    type: Magnum::Optional::Cinder::Volume
     properties:
       size: {get_param: docker_volume_size}
 
   docker_volume_attach:
-    type: OS::Cinder::VolumeAttachment
+    type: Magnum::Optional::Cinder::VolumeAttachment
     properties:
       instance_uuid: {get_resource: swarm_master}
       volume_id: {get_resource: docker_volume}
@@ -189,6 +189,7 @@ resources:
           "$WAIT_HANDLE_TOKEN": {get_attr: [node_wait_handle, token]}
           "$WAIT_CURL": {get_attr: [node_wait_handle, curl_cli]}
           "$DOCKER_VOLUME": {get_resource: docker_volume}
+          "$DOCKER_VOLUME_SIZE": {get_param: docker_volume_size}
           "$DOCKER_STORAGE_DRIVER": {get_param: docker_storage_driver}
           "$HTTP_PROXY": {get_param: http_proxy}
           "$HTTPS_PROXY": {get_param: https_proxy}
@@ -385,12 +386,12 @@ resources:
   #
 
   docker_volume:
-    type: OS::Cinder::Volume
+    type: Magnum::Optional::Cinder::Volume
     properties:
       size: {get_param: docker_volume_size}
 
   docker_volume_attach:
-    type: OS::Cinder::VolumeAttachment
+    type: Magnum::Optional::Cinder::VolumeAttachment
     properties:
       instance_uuid: {get_resource: swarm_node}
       volume_id: {get_resource: docker_volume}
@@ -107,7 +107,6 @@ def baymodel_data(**kwargs):
         "tls_disabled": False,
         "network_driver": None,
         "volume_driver": None,
-        "docker_volume_size": 3,
         "labels": {},
         "public": False,
         "fixed_network": "192.168.0.0/24",
@@ -222,7 +221,7 @@ def valid_swarm_baymodel(is_public=False):
         dns_nameserver=config.Config.dns_nameserver,
         master_flavor_id=config.Config.master_flavor_id,
         keypair_id=config.Config.keypair_id, coe="swarm",
-        docker_volume_size=3, cluster_distro=None,
+        cluster_distro=None,
         external_network_id=config.Config.nic_id,
         http_proxy=None, https_proxy=None, no_proxy=None,
         network_driver=None, volume_driver=None, labels={},
@@ -350,7 +349,6 @@ def cluster_template_data(**kwargs):
         "tls_disabled": False,
         "network_driver": None,
         "volume_driver": None,
-        "docker_volume_size": 3,
         "labels": {},
         "public": False,
         "fixed_network": "192.168.0.0/24",
@@ -503,7 +501,7 @@ def valid_swarm_cluster_template(is_public=False):
         dns_nameserver=config.Config.dns_nameserver,
         master_flavor_id=master_flavor_id,
         keypair_id=config.Config.keypair_id,
-        coe="swarm", docker_volume_size=3,
+        coe="swarm",
         cluster_distro=None,
         external_network_id=config.Config.nic_id,
         http_proxy=None, https_proxy=None,
@@ -28,7 +28,7 @@ class Manager(clients.Manager):
         if not credentials:
             credentials = common_creds.get_configured_credentials(
                 'identity_admin')
-        super(Manager, self).__init__(credentials, 'container-infra')
+        super(Manager, self).__init__(credentials)
         self.auth_provider.orig_base_url = self.auth_provider.base_url
         self.auth_provider.base_url = self.bypassed_base_url
         auth = self.auth_provider
@@ -136,7 +136,6 @@ class BaseMagnumClient(base.BaseMagnumTest):
         # Plan is to support other kinds of ClusterTemplate
         # creation.
         coe = kwargs.pop('coe', 'kubernetes')
-        docker_volume_size = kwargs.pop('docker_volume_size', 3)
         network_driver = kwargs.pop('network_driver', 'flannel')
         volume_driver = kwargs.pop('volume_driver', 'cinder')
         labels = kwargs.pop('labels', {"K1": "V1", "K2": "V2"})
@@ -151,7 +150,6 @@ class BaseMagnumClient(base.BaseMagnumTest):
             image_id=cls.image_id,
             flavor_id=cls.flavor_id,
             master_flavor_id=cls.master_flavor_id,
-            docker_volume_size=docker_volume_size,
             network_driver=network_driver,
             volume_driver=volume_driver,
             dns_nameserver=cls.dns_nameserver,
@@ -543,7 +543,6 @@ class TestPost(api_base.FunctionalTest):
         self._create_baymodel_raises_app_error(coe='osomatsu')
 
     def test_create_baymodel_with_invalid_docker_volume_size(self):
-        self._create_baymodel_raises_app_error(docker_volume_size=0)
         self._create_baymodel_raises_app_error(docker_volume_size=-1)
         self._create_baymodel_raises_app_error(
             docker_volume_size=1,
@@ -576,7 +576,6 @@ class TestPost(api_base.FunctionalTest):
         self._create_model_raises_app_error(coe='osomatsu')
 
     def test_create_cluster_template_with_invalid_docker_volume_size(self):
-        self._create_model_raises_app_error(docker_volume_size=0)
         self._create_model_raises_app_error(docker_volume_size=-1)
         self._create_model_raises_app_error(
             docker_volume_size=1,
@@ -183,7 +183,8 @@ class TestClusterConductorWithK8s(base.TestCase):
 
         self.assertEqual(expected, definition)
         self.assertEqual(
-            ['../../common/templates/environments/no_master_lb.yaml',
+            ['../../common/templates/environments/with_volume.yaml',
+             '../../common/templates/environments/no_master_lb.yaml',
              '../../common/templates/environments/disable_floating_ip.yaml'],
             env_files)
 
@@ -255,7 +256,72 @@ class TestClusterConductorWithK8s(base.TestCase):
 
         self.assertEqual(expected, definition)
         self.assertEqual(
-            ['../../common/templates/environments/no_master_lb.yaml',
+            ['../../common/templates/environments/with_volume.yaml',
+             '../../common/templates/environments/no_master_lb.yaml',
              '../../common/templates/environments/disable_floating_ip.yaml'],
             env_files)
 
+    @patch('requests.get')
+    @patch('magnum.objects.ClusterTemplate.get_by_uuid')
+    def test_extract_template_definition_only_required(
+            self,
+            mock_objects_cluster_template_get_by_uuid,
+            mock_get):
+
+        not_required = ['image_id', 'flavor_id', 'dns_nameserver',
+                        'docker_volume_size', 'fixed_network', 'http_proxy',
+                        'https_proxy', 'no_proxy', 'network_driver',
+                        'master_flavor_id', 'docker_storage_driver',
+                        'volume_driver']
+        for key in not_required:
+            self.cluster_template_dict[key] = None
+        self.cluster_dict['discovery_url'] = 'https://discovery.etcd.io/test'
+
+        cluster_template = objects.ClusterTemplate(
+            self.context, **self.cluster_template_dict)
+        mock_objects_cluster_template_get_by_uuid.return_value = \
+            cluster_template
+        expected_result = str('{"action":"get","node":{"key":"test","value":'
+                              '"1","modifiedIndex":10,"createdIndex":10}}')
+        mock_resp = mock.MagicMock()
+        mock_resp.text = expected_result
+        mock_get.return_value = mock_resp
+        cluster = objects.Cluster(self.context, **self.cluster_dict)
+
+        (template_path,
+         definition,
+         env_files) = cluster_conductor._extract_template_definition(
+            self.context, cluster)
+
+        expected = {
+            'auth_url': 'http://192.168.10.10:5000/v3',
+            'cluster_uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
+            'discovery_url': 'https://discovery.etcd.io/test',
+            'external_network': 'external_network_id',
+            'flannel_backend': 'vxlan',
+            'flannel_network_cidr': '10.101.0.0/16',
+            'flannel_network_subnetlen': '26',
+            'insecure_registry_url': '10.0.0.1:5000',
+            'kube_version': 'fake-version',
+            'magnum_url': 'http://127.0.0.1:9511/v1',
+            'number_of_masters': 1,
+            'number_of_minions': 1,
+            'region_name': 'RegionOne',
+            'registry_enabled': False,
+            'ssh_key_name': 'keypair_id',
+            'tenant_name': 'fake_tenant',
+            'tls_disabled': False,
+            'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
+            'trustee_domain_id': 'trustee_domain_id',
+            'trustee_password': 'fake_trustee_password',
+            'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
+            'trustee_username': 'fake_trustee',
+            'username': 'fake_user'
+        }
+        self.assertEqual(expected, definition)
+        self.assertEqual(
+            ['../../common/templates/environments/no_volume.yaml',
+             '../../common/templates/environments/no_master_lb.yaml',
+             '../../common/templates/environments/disable_floating_ip.yaml'],
+            env_files)
+
@@ -408,17 +474,6 @@ class TestClusterConductorWithK8s(base.TestCase):
             mock_get,
             missing_attr='flavor_id')
 
-    @patch('requests.get')
-    @patch('magnum.objects.ClusterTemplate.get_by_uuid')
-    def test_extract_template_definition_without_docker_volume_size(
-            self,
-            mock_objects_cluster_template_get_by_uuid,
-            mock_get):
-        self._test_extract_template_definition(
-            mock_objects_cluster_template_get_by_uuid,
-            mock_get,
-            missing_attr='docker_volume_size')
-
     @patch('requests.get')
     @patch('magnum.objects.ClusterTemplate.get_by_uuid')
     def test_extract_template_definition_without_docker_storage_driver(
@@ -537,7 +592,8 @@ class TestClusterConductorWithK8s(base.TestCase):
         }
         self.assertEqual(expected, definition)
         self.assertEqual(
-            ['../../common/templates/environments/no_master_lb.yaml',
+            ['../../common/templates/environments/with_volume.yaml',
+             '../../common/templates/environments/no_master_lb.yaml',
              '../../common/templates/environments/disable_floating_ip.yaml'],
             env_files)
         reqget.assert_called_once_with('http://etcd/test?size=1')
@@ -136,7 +136,8 @@ class TestClusterConductorWithSwarm(base.TestCase):
         }
         self.assertEqual(expected, definition)
         self.assertEqual(
-            ['../../common/templates/environments/no_master_lb.yaml'],
+            ['../../common/templates/environments/with_volume.yaml',
+             '../../common/templates/environments/no_master_lb.yaml'],
             env_files)
 
     @patch('requests.get')
@@ -203,7 +204,8 @@ class TestClusterConductorWithSwarm(base.TestCase):
         }
         self.assertEqual(expected, definition)
         self.assertEqual(
-            ['../../common/templates/environments/no_master_lb.yaml'],
+            ['../../common/templates/environments/with_volume.yaml',
+             '../../common/templates/environments/no_master_lb.yaml'],
             env_files)
 
     @patch('requests.get')
@@ -262,7 +264,8 @@ class TestClusterConductorWithSwarm(base.TestCase):
         }
         self.assertEqual(expected, definition)
         self.assertEqual(
-            ['../../common/templates/environments/no_master_lb.yaml'],
+            ['../../common/templates/environments/no_volume.yaml',
+             '../../common/templates/environments/no_master_lb.yaml'],
             env_files)
 
     @patch('requests.get')
@@ -323,7 +326,8 @@ class TestClusterConductorWithSwarm(base.TestCase):
         }
         self.assertEqual(expected, definition)
         self.assertEqual(
-            ['../../common/templates/environments/with_master_lb.yaml'],
+            ['../../common/templates/environments/with_volume.yaml',
+             '../../common/templates/environments/with_master_lb.yaml'],
             env_files)
 
     @patch('requests.get')
@@ -385,7 +389,8 @@ class TestClusterConductorWithSwarm(base.TestCase):
         }
         self.assertEqual(expected, definition)
         self.assertEqual(
-            ['../../common/templates/environments/with_master_lb.yaml'],
+            ['../../common/templates/environments/with_volume.yaml',
+             '../../common/templates/environments/with_master_lb.yaml'],
             env_files)
 
     @patch('magnum.conductor.utils.retrieve_cluster_template')