Add support for specifying pod resource limits

We currently allow users to specify pod resource requests and limits
for cpu, ram, and ephemeral storage.  But if a user specifies one of
these, the value is used for both the request and the limit.

This updates the specification to allow the use of separate request
and limit values.

It also normalizes related behavior across all 3 pod drivers,
including adding resource reporting to the openshift drivers.
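
For example, a label can now request a modest reservation while allowing
a higher burst limit. A minimal sketch (the label name and values here
are illustrative, not part of this change):

    labels:
      - name: example-pod
        type: pod
        cpu: 2              # request
        memory: 1024        # request, in MiB
        cpu-limit: 4        # separate limit
        memory-limit: 2048  # separate limit, in MiB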

Change-Id: I49f918b01f83d6fd0fd07f61c3e9a975aa8e59fb
James E. Blair 2023-02-11 12:11:50 -08:00
parent 9bf44b4a4c
commit 669552f6f9
16 changed files with 810 additions and 39 deletions


@ -144,6 +144,33 @@ Selecting the kubernetes driver adds the following options to the
:attr:`providers.[kubernetes].pools.labels.storage` for all labels of
this pool that do not set their own value.
.. attr:: default-label-cpu-limit
:type: int
Only used by the
:value:`providers.[kubernetes].pools.labels.type.pod` label type;
specifies a default value for
:attr:`providers.[kubernetes].pools.labels.cpu-limit` for all labels of
this pool that do not set their own value.
.. attr:: default-label-memory-limit
:type: int
Only used by the
:value:`providers.[kubernetes].pools.labels.type.pod` label type;
specifies a default value in MiB for
:attr:`providers.[kubernetes].pools.labels.memory-limit` for all labels of
this pool that do not set their own value.
.. attr:: default-label-storage-limit
:type: int
Only used by the
:value:`providers.[kubernetes].pools.labels.type.pod` label type;
specifies a default value in MB for
:attr:`providers.[kubernetes].pools.labels.storage-limit` for all labels of
this pool that do not set their own value.
.. attr:: labels
:type: list
@ -226,22 +253,50 @@ Selecting the kubernetes driver adds the following options to the
:type: int
Only used by the
:value:`providers.[kubernetes].pools.labels.type.pod` label type;
specifies the number of cpu to request for the pod.
:value:`providers.[kubernetes].pools.labels.type.pod`
label type; specifies the number of cpu to request for the
pod. If no limit is specified, this will also be used as
the limit.
.. attr:: memory
:type: int
Only used by the
:value:`providers.[kubernetes].pools.labels.type.pod` label type;
specifies the amount of memory in MiB to request for the pod.
:value:`providers.[kubernetes].pools.labels.type.pod`
label type; specifies the amount of memory in MiB to
request for the pod. If no limit is specified, this will
also be used as the limit.
.. attr:: storage
:type: int
Only used by the
:value:`providers.[kubernetes].pools.labels.type.pod` label type;
specifies the amount of ephemeral-storage in MB to request for the pod.
:value:`providers.[kubernetes].pools.labels.type.pod`
label type; specifies the amount of ephemeral-storage in
MB to request for the pod. If no limit is specified, this
will also be used as the limit.
.. attr:: cpu-limit
:type: int
Only used by the
:value:`providers.[kubernetes].pools.labels.type.pod`
label type; specifies the cpu limit for the pod.
.. attr:: memory-limit
:type: int
Only used by the
:value:`providers.[kubernetes].pools.labels.type.pod`
label type; specifies the memory limit in MiB for the pod.
.. attr:: storage-limit
:type: int
Only used by the
:value:`providers.[kubernetes].pools.labels.type.pod`
label type; specifies the ephemeral-storage limit in
MB for the pod.
.. attr:: env
:type: list

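Putting the kubernetes options documented above together, a pool can set
defaults and override them per label. A hypothetical sketch (provider and
label names are illustrative):

    providers:
      - name: kube-example
        driver: kubernetes
        context: example-context
        pools:
          - name: main
            default-label-cpu: 2
            default-label-cpu-limit: 8
            default-label-memory: 1024
            default-label-memory-limit: 4096
            labels:
              - name: pod-small
                type: pod          # inherits all pool defaults
              - name: pod-large
                type: pod
                cpu: 4             # overrides the request default
                cpu-limit: 16      # overrides the limit default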

@ -81,6 +81,48 @@ Selecting the openshift pods driver adds the following options to the
A dictionary of key-value pairs that will be stored with the node data
in ZooKeeper. The keys and values can be any arbitrary string.
.. attr:: default-label-cpu
:type: int
Specifies a default value for
:attr:`providers.[openshiftpods].pools.labels.cpu` for all
labels of this pool that do not set their own value.
.. attr:: default-label-memory
:type: int
Specifies a default value in MiB for
:attr:`providers.[openshiftpods].pools.labels.memory` for all
labels of this pool that do not set their own value.
.. attr:: default-label-storage
:type: int
Specifies a default value in MB for
:attr:`providers.[openshiftpods].pools.labels.storage` for all
labels of this pool that do not set their own value.
.. attr:: default-label-cpu-limit
:type: int
Specifies a default value for
:attr:`providers.[openshiftpods].pools.labels.cpu-limit` for all
labels of this pool that do not set their own value.
.. attr:: default-label-memory-limit
:type: int
Specifies a default value in MiB for
:attr:`providers.[openshiftpods].pools.labels.memory-limit` for
all labels of this pool that do not set their own value.
.. attr:: default-label-storage-limit
:type: int
Specifies a default value in MB for
:attr:`providers.[openshiftpods].pools.labels.storage-limit` for
all labels of this pool that do not set their own value.
.. attr:: labels
:type: list
@ -135,12 +177,37 @@ Selecting the openshift pods driver adds the following options to the
.. attr:: cpu
:type: int
The number of cpu to request for the pod.
Specifies the number of cpu to request for the pod. If no
limit is specified, this will also be used as the limit.
.. attr:: memory
:type: int
The amount of memory in MB to request for the pod.
Specifies the amount of memory in MiB to request for the
pod. If no limit is specified, this will also be used as
the limit.
.. attr:: storage
:type: int
Specifies the amount of ephemeral-storage in MB to request
for the pod. If no limit is specified, this will also be
used as the limit.
.. attr:: cpu-limit
:type: int
Specifies the cpu limit for the pod.
.. attr:: memory-limit
:type: int
Specifies the memory limit in MiB for the pod.
.. attr:: storage-limit
:type: int
Specifies the ephemeral-storage limit in MB for the pod.
.. attr:: python-path
:type: str

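As the openshiftpods documentation above notes, a request doubles as the
limit when no explicit limit is given. In this hypothetical pool, pod-mem
ends up with a memory request and limit of 2048 MiB:

    pools:
      - name: main
        default-label-cpu: 1
        labels:
          - name: pod-mem
            cpu-limit: 4    # cpu request is 1 (pool default), limit is 4
            memory: 2048    # no memory-limit given, so 2048 MiB is both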

@ -98,6 +98,60 @@ Selecting the openshift driver adds the following options to the
A dictionary of key-value pairs that will be stored with the node data
in ZooKeeper. The keys and values can be any arbitrary string.
.. attr:: default-label-cpu
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod` label type;
specifies a default value for
:attr:`providers.[openshift].pools.labels.cpu` for all labels of
this pool that do not set their own value.
.. attr:: default-label-memory
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod` label type;
specifies a default value in MiB for
:attr:`providers.[openshift].pools.labels.memory` for all labels of
this pool that do not set their own value.
.. attr:: default-label-storage
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod` label type;
specifies a default value in MB for
:attr:`providers.[openshift].pools.labels.storage` for all labels of
this pool that do not set their own value.
.. attr:: default-label-cpu-limit
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod` label type;
specifies a default value for
:attr:`providers.[openshift].pools.labels.cpu-limit` for all labels of
this pool that do not set their own value.
.. attr:: default-label-memory-limit
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod` label type;
specifies a default value in MiB for
:attr:`providers.[openshift].pools.labels.memory-limit` for all labels of
this pool that do not set their own value.
.. attr:: default-label-storage-limit
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod` label type;
specifies a default value in MB for
:attr:`providers.[openshift].pools.labels.storage-limit` for all labels of
this pool that do not set their own value.
.. attr:: labels
:type: list
@ -197,15 +251,50 @@ Selecting the openshift driver adds the following options to the
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod` label type;
specifies the number of cpu to request for the pod.
:value:`providers.[openshift].pools.labels.type.pod`
label type; specifies the number of cpu to request for the
pod. If no limit is specified, this will also be used as
the limit.
.. attr:: memory
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod` label type;
specifies the amount of memory in MiB to request for the pod.
:value:`providers.[openshift].pools.labels.type.pod`
label type; specifies the amount of memory in MiB to
request for the pod. If no limit is specified, this will
also be used as the limit.
.. attr:: storage
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod`
label type; specifies the amount of ephemeral-storage in
MB to request for the pod. If no limit is specified, this
will also be used as the limit.
.. attr:: cpu-limit
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod`
label type; specifies the cpu limit for the pod.
.. attr:: memory-limit
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod`
label type; specifies the memory limit in MiB for the pod.
.. attr:: storage-limit
:type: int
Only used by the
:value:`providers.[openshift].pools.labels.type.pod`
label type; specifies the ephemeral-storage limit in
MB for the pod.
.. attr:: env
:type: list

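The openshift driver accepts the same request/limit pairs; for instance,
ephemeral storage can be requested and capped per label. An illustrative
sketch (values in MB):

    pools:
      - name: main
        default-label-storage: 10
        default-label-storage-limit: 40
        labels:
          - name: pod-default
            type: pod          # storage request 10 MB, limit 40 MB
          - name: pod-scratch
            type: pod
            storage: 20
            storage-limit: 100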

@ -54,6 +54,24 @@ class KubernetesPool(ConfigPool):
pl.cpu = label.get('cpu', self.default_label_cpu)
pl.memory = label.get('memory', self.default_label_memory)
pl.storage = label.get('storage', self.default_label_storage)
# The limits are the first of:
# 1) label specific configured limit
# 2) default label configured limit
# 3) label specific configured request
# 4) default label configured request
# 5) None
default_cpu_limit = pool_config.get(
'default-label-cpu-limit', pl.cpu)
default_memory_limit = pool_config.get(
'default-label-memory-limit', pl.memory)
default_storage_limit = pool_config.get(
'default-label-storage-limit', pl.storage)
pl.cpu_limit = label.get(
'cpu-limit', default_cpu_limit)
pl.memory_limit = label.get(
'memory-limit', default_memory_limit)
pl.storage_limit = label.get(
'storage-limit', default_storage_limit)
pl.env = label.get('env', [])
pl.node_selector = label.get('node-selector')
pl.privileged = label.get('privileged')
@ -105,6 +123,9 @@ class KubernetesProviderConfig(ProviderConfig):
'cpu': int,
'memory': int,
'storage': int,
'cpu-limit': int,
'memory-limit': int,
'storage-limit': int,
'env': [env_var],
'node-selector': dict,
'privileged': bool,
@ -123,6 +144,9 @@ class KubernetesProviderConfig(ProviderConfig):
v.Optional('default-label-cpu'): int,
v.Optional('default-label-memory'): int,
v.Optional('default-label-storage'): int,
v.Optional('default-label-cpu-limit'): int,
v.Optional('default-label-memory-limit'): int,
v.Optional('default-label-storage-limit'): int,
})
provider = {

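The precedence encoded above can be traced with a hypothetical pool; the
comments note which rule selects each resolved limit:

    pools:
      - name: main
        default-label-cpu: 1
        default-label-cpu-limit: 8
        default-label-memory: 512
        labels:
          - name: pod-a
            type: pod
            cpu-limit: 4  # rule 1: label limit wins, cpu_limit = 4
          - name: pod-b
            type: pod
            # rule 2: pool default limit applies, cpu_limit = 8
            # rules 3/4: no memory limit is configured anywhere, so
            # memory_limit falls back to the request, 512 MiB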

@ -318,17 +318,27 @@ class KubernetesProvider(Provider, QuotaSupport):
'env': label.env,
}
if label.cpu or label.memory:
container_body['resources'] = {}
for rtype in ('requests', 'limits'):
rbody = {}
if label.cpu:
rbody['cpu'] = int(label.cpu)
if label.memory:
rbody['memory'] = '%dMi' % int(label.memory)
if label.storage:
rbody['ephemeral-storage'] = '%dM' % int(label.storage)
container_body['resources'][rtype] = rbody
requests = {}
limits = {}
if label.cpu:
requests['cpu'] = int(label.cpu)
if label.memory:
requests['memory'] = '%dMi' % int(label.memory)
if label.storage:
requests['ephemeral-storage'] = '%dM' % int(label.storage)
if label.cpu_limit:
limits['cpu'] = int(label.cpu_limit)
if label.memory_limit:
limits['memory'] = '%dMi' % int(label.memory_limit)
if label.storage_limit:
limits['ephemeral-storage'] = '%dM' % int(label.storage_limit)
resources = {}
if requests:
resources['requests'] = requests
if limits:
resources['limits'] = limits
if resources:
container_body['resources'] = resources
spec_body = {
'containers': [container_body]
@ -410,6 +420,8 @@ class KubernetesProvider(Provider, QuotaSupport):
resources["cores"] = provider_label.cpu
if provider_label.memory:
resources["ram"] = provider_label.memory
if provider_label.storage:
resources["ephemeral-storage"] = provider_label.storage
return QuotaInformation(instances=1, default=1, **resources)
def unmanagedQuotaUsed(self):

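Given a label with requests of cpu: 2, memory: 1024, storage: 10 and
limits of 8 / 4096 / 40, the code above would emit a container resources
stanza along these lines (a sketch of the generated pod spec, following
the '%dMi' / '%dM' formatting used in the code):

    resources:
      requests:
        cpu: 2
        memory: 1024Mi
        ephemeral-storage: 10M
      limits:
        cpu: 8
        memory: 4096Mi
        ephemeral-storage: 40M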

@ -38,6 +38,9 @@ class OpenshiftPool(ConfigPool):
def load(self, pool_config, full_config):
super().load(pool_config)
self.name = pool_config['name']
self.default_label_cpu = pool_config.get('default-label-cpu')
self.default_label_memory = pool_config.get('default-label-memory')
self.default_label_storage = pool_config.get('default-label-storage')
self.labels = {}
for label in pool_config.get('labels', []):
pl = OpenshiftLabel()
@ -46,8 +49,27 @@ class OpenshiftPool(ConfigPool):
pl.image = label.get('image')
pl.image_pull = label.get('image-pull', 'IfNotPresent')
pl.image_pull_secrets = label.get('image-pull-secrets', [])
pl.cpu = label.get('cpu')
pl.memory = label.get('memory')
pl.cpu = label.get('cpu', self.default_label_cpu)
pl.memory = label.get('memory', self.default_label_memory)
pl.storage = label.get('storage', self.default_label_storage)
# The limits are the first of:
# 1) label specific configured limit
# 2) default label configured limit
# 3) label specific configured request
# 4) default label configured request
# 5) None
default_cpu_limit = pool_config.get(
'default-label-cpu-limit', pl.cpu)
default_memory_limit = pool_config.get(
'default-label-memory-limit', pl.memory)
default_storage_limit = pool_config.get(
'default-label-storage-limit', pl.storage)
pl.cpu_limit = label.get(
'cpu-limit', default_cpu_limit)
pl.memory_limit = label.get(
'memory-limit', default_memory_limit)
pl.storage_limit = label.get(
'storage-limit', default_storage_limit)
pl.python_path = label.get('python-path', 'auto')
pl.shell_type = label.get('shell-type')
pl.env = label.get('env', [])
@ -100,6 +122,10 @@ class OpenshiftProviderConfig(ProviderConfig):
'image-pull-secrets': list,
'cpu': int,
'memory': int,
'storage': int,
'cpu-limit': int,
'memory-limit': int,
'storage-limit': int,
'python-path': str,
'shell-type': str,
'env': [env_var],
@ -115,6 +141,12 @@ class OpenshiftProviderConfig(ProviderConfig):
pool.update({
v.Required('name'): str,
v.Required('labels'): [openshift_label],
v.Optional('default-label-cpu'): int,
v.Optional('default-label-memory'): int,
v.Optional('default-label-storage'): int,
v.Optional('default-label-cpu-limit'): int,
v.Optional('default-label-memory-limit'): int,
v.Optional('default-label-storage-limit'): int,
})
schema = ProviderConfig.getCommonSchemaDict()


@ -52,6 +52,9 @@ class OpenshiftLauncher(NodeLauncher):
self.node.shell_type = self.label.shell_type
# NOTE: resource access token may be encrypted here
self.node.connection_port = resource
pool = self.handler.provider.pools.get(self.node.pool)
self.node.resources = self.handler.manager.quotaNeededByLabel(
self.node.type[0], pool).get_resources()
self.node.cloud = self.provider_config.context
self.zk.storeNode(self.node)
self.log.info("Resource %s is ready", project)


@ -233,15 +233,28 @@ class OpenshiftProvider(Provider, QuotaSupport):
'args': ["while true; do sleep 30; done;"],
'env': label.env,
}
if label.cpu or label.memory:
container_body['resources'] = {}
for rtype in ('requests', 'limits'):
rbody = {}
if label.cpu:
rbody['cpu'] = int(label.cpu)
if label.memory:
rbody['memory'] = '%dMi' % int(label.memory)
container_body['resources'][rtype] = rbody
requests = {}
limits = {}
if label.cpu:
requests['cpu'] = int(label.cpu)
if label.memory:
requests['memory'] = '%dMi' % int(label.memory)
if label.storage:
requests['ephemeral-storage'] = '%dM' % int(label.storage)
if label.cpu_limit:
limits['cpu'] = int(label.cpu_limit)
if label.memory_limit:
limits['memory'] = '%dMi' % int(label.memory_limit)
if label.storage_limit:
limits['ephemeral-storage'] = '%dM' % int(label.storage_limit)
resources = {}
if requests:
resources['requests'] = requests
if limits:
resources['limits'] = limits
if resources:
container_body['resources'] = resources
spec_body = {
'containers': [container_body],
@ -313,8 +326,15 @@ class OpenshiftProvider(Provider, QuotaSupport):
default=math.inf)
def quotaNeededByLabel(self, ntype, pool):
# TODO: return real quota information about a label
return QuotaInformation(cores=1, instances=1, ram=1, default=1)
provider_label = pool.labels[ntype]
resources = {}
if provider_label.cpu:
resources["cores"] = provider_label.cpu
if provider_label.memory:
resources["ram"] = provider_label.memory
if provider_label.storage:
resources["ephemeral-storage"] = provider_label.storage
return QuotaInformation(instances=1, default=1, **resources)
def unmanagedQuotaUsed(self):
# TODO: return real quota information about quota

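With quotaNeededByLabel now reporting real values, a label configured
with cpu: 2, memory: 1024, and storage: 10 records node resources along
these lines (illustrative; 'instances' is supplied by QuotaInformation):

    instances: 1
    cores: 2
    ram: 1024
    ephemeral-storage: 10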

@ -56,6 +56,10 @@ class OpenshiftPodsProviderConfig(OpenshiftProviderConfig):
'image-pull-secrets': list,
'cpu': int,
'memory': int,
'storage': int,
'cpu-limit': int,
'memory-limit': int,
'storage-limit': int,
'python-path': str,
'shell-type': str,
'env': [env_var],
@ -71,6 +75,12 @@ class OpenshiftPodsProviderConfig(OpenshiftProviderConfig):
pool.update({
v.Required('name'): str,
v.Required('labels'): [openshift_label],
v.Optional('default-label-cpu'): int,
v.Optional('default-label-memory'): int,
v.Optional('default-label-storage'): int,
v.Optional('default-label-cpu-limit'): int,
v.Optional('default-label-memory-limit'): int,
v.Optional('default-label-storage-limit'): int,
})
schema = OpenshiftProviderConfig.getCommonSchemaDict()


@ -12,6 +12,7 @@ labels:
- name: pod-default
- name: pod-custom-cpu
- name: pod-custom-mem
- name: pod-custom-storage
providers:
- name: kubespray
@ -21,12 +22,19 @@ providers:
- name: main
default-label-cpu: 2
default-label-memory: 1024
default-label-storage: 10
default-label-cpu-limit: 8
default-label-memory-limit: 4196
default-label-storage-limit: 40
labels:
- name: pod-default
type: pod
- name: pod-custom-cpu
type: pod
cpu: 4
cpu-limit: 4
- name: pod-custom-mem
type: pod
memory: 2048
memory-limit: 2048
- name: pod-custom-storage
type: pod
storage-limit: 20


@ -0,0 +1,37 @@
zookeeper-servers:
- host: {zookeeper_host}
port: {zookeeper_port}
chroot: {zookeeper_chroot}
zookeeper-tls:
ca: {zookeeper_ca}
cert: {zookeeper_cert}
key: {zookeeper_key}
labels:
- name: pod-default
- name: pod-custom-cpu
- name: pod-custom-mem
- name: pod-custom-storage
providers:
- name: kubespray
driver: kubernetes
context: admin-cluster.local
pools:
- name: main
default-label-cpu: 2
default-label-memory: 1024
default-label-storage: 10
labels:
- name: pod-default
type: pod
- name: pod-custom-cpu
type: pod
cpu: 4
- name: pod-custom-mem
type: pod
memory: 2048
- name: pod-custom-storage
type: pod
storage: 20


@ -0,0 +1,40 @@
zookeeper-servers:
- host: {zookeeper_host}
port: {zookeeper_port}
chroot: {zookeeper_chroot}
zookeeper-tls:
ca: {zookeeper_ca}
cert: {zookeeper_cert}
key: {zookeeper_key}
labels:
- name: pod-default
- name: pod-custom-cpu
- name: pod-custom-mem
- name: pod-custom-storage
providers:
- name: openshift
driver: openshift
context: admin-cluster.local
pools:
- name: main
default-label-cpu: 2
default-label-memory: 1024
default-label-storage: 10
default-label-cpu-limit: 8
default-label-memory-limit: 4196
default-label-storage-limit: 40
labels:
- name: pod-default
type: pod
- name: pod-custom-cpu
type: pod
cpu-limit: 4
- name: pod-custom-mem
type: pod
memory-limit: 2048
- name: pod-custom-storage
type: pod
storage-limit: 20


@ -0,0 +1,37 @@
zookeeper-servers:
- host: {zookeeper_host}
port: {zookeeper_port}
chroot: {zookeeper_chroot}
zookeeper-tls:
ca: {zookeeper_ca}
cert: {zookeeper_cert}
key: {zookeeper_key}
labels:
- name: pod-default
- name: pod-custom-cpu
- name: pod-custom-mem
- name: pod-custom-storage
providers:
- name: openshift
driver: openshift
context: admin-cluster.local
pools:
- name: main
default-label-cpu: 2
default-label-memory: 1024
default-label-storage: 10
labels:
- name: pod-default
type: pod
- name: pod-custom-cpu
type: pod
cpu: 4
- name: pod-custom-mem
type: pod
memory: 2048
- name: pod-custom-storage
type: pod
storage: 20


@ -249,7 +249,7 @@ class TestDriverKubernetes(tests.DBTestCase):
self.waitForNodeDeletion(node)
def test_kubernetes_default_label_resources(self):
configfile = self.setup_config('kubernetes-default-limits.yaml')
configfile = self.setup_config('kubernetes-default-resources.yaml')
pool = self.useNodepool(configfile, watermark_sleep=1)
pool.start()
@ -258,6 +258,7 @@ class TestDriverKubernetes(tests.DBTestCase):
req.node_types.append('pod-default')
req.node_types.append('pod-custom-cpu')
req.node_types.append('pod-custom-mem')
req.node_types.append('pod-custom-storage')
self.zk.storeNodeRequest(req)
self.log.debug("Waiting for request %s", req.id)
@ -268,32 +269,174 @@ class TestDriverKubernetes(tests.DBTestCase):
node_default = self.zk.getNode(req.nodes[0])
node_cust_cpu = self.zk.getNode(req.nodes[1])
node_cust_mem = self.zk.getNode(req.nodes[2])
node_cust_storage = self.zk.getNode(req.nodes[3])
resources_default = {
'instances': 1,
'cores': 2,
'ram': 1024,
'ephemeral-storage': 10,
}
resources_cust_cpu = {
'instances': 1,
'cores': 4,
'ram': 1024,
'ephemeral-storage': 10,
}
resources_cust_mem = {
'instances': 1,
'cores': 2,
'ram': 2048,
'ephemeral-storage': 10,
}
resources_cust_storage = {
'instances': 1,
'cores': 2,
'ram': 1024,
'ephemeral-storage': 20,
}
self.assertDictEqual(resources_default, node_default.resources)
self.assertDictEqual(resources_cust_cpu, node_cust_cpu.resources)
self.assertDictEqual(resources_cust_mem, node_cust_mem.resources)
self.assertDictEqual(resources_cust_storage,
node_cust_storage.resources)
ns, pod = self.fake_k8s_client._pod_requests[0]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[1]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 4,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
'requests': {
'cpu': 4,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[2]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '2048Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '2048Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[3]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 2,
'ephemeral-storage': '20M',
'memory': '1024Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '20M',
'memory': '1024Mi'
},
})
for node in (node_default, node_cust_cpu, node_cust_mem, node_cust_storage):
node.state = zk.DELETING
self.zk.storeNode(node)
self.waitForNodeDeletion(node)
def test_kubernetes_default_label_limits(self):
configfile = self.setup_config('kubernetes-default-limits.yaml')
pool = self.useNodepool(configfile, watermark_sleep=1)
pool.start()
req = zk.NodeRequest()
req.state = zk.REQUESTED
req.node_types.append('pod-default')
req.node_types.append('pod-custom-cpu')
req.node_types.append('pod-custom-mem')
req.node_types.append('pod-custom-storage')
self.zk.storeNodeRequest(req)
self.log.debug("Waiting for request %s", req.id)
req = self.waitForNodeRequest(req)
self.assertEqual(req.state, zk.FULFILLED)
self.assertNotEqual(req.nodes, [])
ns, pod = self.fake_k8s_client._pod_requests[0]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 8,
'ephemeral-storage': '40M',
'memory': '4196Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[1]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 4,
'ephemeral-storage': '40M',
'memory': '4196Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[2]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 8,
'ephemeral-storage': '40M',
'memory': '2048Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[3]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 8,
'ephemeral-storage': '20M',
'memory': '4196Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
def test_kubernetes_pool_quota_servers(self):
self._test_kubernetes_quota('kubernetes-pool-quota-servers.yaml')


@ -260,6 +260,195 @@ class TestDriverOpenshift(tests.DBTestCase):
self.waitForNodeDeletion(node)
def test_openshift_default_label_resources(self):
configfile = self.setup_config('openshift-default-resources.yaml')
pool = self.useNodepool(configfile, watermark_sleep=1)
pool.start()
req = zk.NodeRequest()
req.state = zk.REQUESTED
req.node_types.append('pod-default')
req.node_types.append('pod-custom-cpu')
req.node_types.append('pod-custom-mem')
req.node_types.append('pod-custom-storage')
self.zk.storeNodeRequest(req)
self.log.debug("Waiting for request %s", req.id)
req = self.waitForNodeRequest(req)
self.assertEqual(req.state, zk.FULFILLED)
self.assertNotEqual(req.nodes, [])
node_default = self.zk.getNode(req.nodes[0])
node_cust_cpu = self.zk.getNode(req.nodes[1])
node_cust_mem = self.zk.getNode(req.nodes[2])
node_cust_storage = self.zk.getNode(req.nodes[3])
resources_default = {
'instances': 1,
'cores': 2,
'ram': 1024,
'ephemeral-storage': 10,
}
resources_cust_cpu = {
'instances': 1,
'cores': 4,
'ram': 1024,
'ephemeral-storage': 10,
}
resources_cust_mem = {
'instances': 1,
'cores': 2,
'ram': 2048,
'ephemeral-storage': 10,
}
resources_cust_storage = {
'instances': 1,
'cores': 2,
'ram': 1024,
'ephemeral-storage': 20,
}
self.assertDictEqual(resources_default, node_default.resources)
self.assertDictEqual(resources_cust_cpu, node_cust_cpu.resources)
self.assertDictEqual(resources_cust_mem, node_cust_mem.resources)
self.assertDictEqual(resources_cust_storage,
node_cust_storage.resources)
ns, pod = self.fake_k8s_client._pod_requests[0]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[1]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 4,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
'requests': {
'cpu': 4,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[2]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '2048Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '2048Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[3]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 2,
'ephemeral-storage': '20M',
'memory': '1024Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '20M',
'memory': '1024Mi'
},
})
for node in (node_default, node_cust_cpu, node_cust_mem, node_cust_storage):
node.state = zk.DELETING
self.zk.storeNode(node)
self.waitForNodeDeletion(node)
def test_openshift_default_label_limits(self):
configfile = self.setup_config('openshift-default-limits.yaml')
pool = self.useNodepool(configfile, watermark_sleep=1)
pool.start()
req = zk.NodeRequest()
req.state = zk.REQUESTED
req.node_types.append('pod-default')
req.node_types.append('pod-custom-cpu')
req.node_types.append('pod-custom-mem')
req.node_types.append('pod-custom-storage')
self.zk.storeNodeRequest(req)
self.log.debug("Waiting for request %s", req.id)
req = self.waitForNodeRequest(req)
self.assertEqual(req.state, zk.FULFILLED)
self.assertNotEqual(req.nodes, [])
ns, pod = self.fake_k8s_client._pod_requests[0]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 8,
'ephemeral-storage': '40M',
'memory': '4196Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[1]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 4,
'ephemeral-storage': '40M',
'memory': '4196Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[2]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 8,
'ephemeral-storage': '40M',
'memory': '2048Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
ns, pod = self.fake_k8s_client._pod_requests[3]
self.assertEqual(pod['spec']['containers'][0]['resources'], {
'limits': {
'cpu': 8,
'ephemeral-storage': '20M',
'memory': '4196Mi'
},
'requests': {
'cpu': 2,
'ephemeral-storage': '10M',
'memory': '1024Mi'
},
})
def test_openshift_pull_secret(self):
configfile = self.setup_config('openshift.yaml')
pool = self.useNodepool(configfile, watermark_sleep=1)


@ -0,0 +1,5 @@
---
features:
- |
Added support for specifying Kubernetes and OpenShift pod resource
limits separately from requests.