Merge "Kube-proxy performance test plan and report"

Jenkins 2017-04-11 15:30:06 +00:00 committed by Gerrit Code Review
commit a91c847c63
12 changed files with 1712 additions and 0 deletions


@@ -0,0 +1,15 @@
---
apiVersion: v1
kind: Service
metadata:
  name: {{ name }}
  namespace: minions
  labels:
    k8s-app: minion
spec:
  selector:
    k8s-app: minion
  ports:
  - name: {{ name }}
    port: 80
    protocol: TCP


@@ -0,0 +1,58 @@
import os
import random
import shlex
import string
import subprocess

import jinja2

SERVICES = 100  # number of services to create, > len(NODES)
REPLICAS = 1
NODES = ['node2', 'node3', 'node4', 'node5', 'node6']


def render(tpl_path, context):
    # Render the Jinja2 template at tpl_path with the given context.
    path, filename = os.path.split(tpl_path)
    return jinja2.Environment(
        loader=jinja2.FileSystemLoader(path or './')
    ).get_template(filename).render(context)


def id_generator(size=8, chars=string.ascii_lowercase + string.digits):
    # Generate a random lowercase alphanumeric suffix for service names.
    return ''.join(random.choice(chars) for _ in range(size))


def create_svc(node='node1'):
    service_name = "minion-{}".format(id_generator())
    file_name = "{}.yaml".format(service_name)
    # Create a YAML manifest for the new service
    template = render("service.yaml", {"name": service_name})
    with open(file_name, "w") as f:
        f.write(template)
    cmd = "kubectl -n minions create -f {}".format(file_name)
    args = shlex.split(cmd)
    proc = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    # Delete the YAML manifest
    os.remove(file_name)
    if proc.returncode == 0:
        return True
    print(err)
    return False


def main():
    services_per_node = int(SERVICES / len(NODES))
    for node in NODES:
        for _ in range(services_per_node):
            create_svc(node=node)


if __name__ == "__main__":
    main()


@@ -0,0 +1,25 @@
from locust import HttpLocust, TaskSet

# Read the list of service URLs that will be queried during the load test.
urls = []
with open('/root/ayasakov/hosts.txt', 'r') as f:
    for line in f:
        if 'http' in line:
            urls.append(line.strip())
print(urls)


def getting_page(l):
    # Request every service URL; the URL itself is used as the stats entry name.
    for url in urls:
        l.client.get(url, name=url)


class UserBehavior(TaskSet):
    tasks = {getting_page: 1}


class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000


@@ -0,0 +1,20 @@
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: minion
  namespace: minions
spec:
  replicas: 1
  selector:
    k8s-app: minion
    version: v0
  template:
    metadata:
      labels:
        k8s-app: minion
        version: v0
    spec:
      containers:
      - name: minion
        image: 172.20.9.32:5000/qa/minion:latest


@@ -0,0 +1,2 @@
FROM nginx:latest
COPY static /usr/share/nginx/html

File diff suppressed because it is too large.


@@ -0,0 +1,101 @@
.. _Kubernetes_proxy_performance_test_plan:

**************************************
Kubernetes proxy performance test plan
**************************************

:status: **ready**
:version: 1.0

:Abstract:

  This test plan covers scenarios for Kubernetes proxy performance testing.
Test Plan
=========
Kube-proxy (starting with Kubernetes 1.4) works in 'iptables' mode by default
and does not proxy the traffic itself; the old 'userspace' mode is kept only
for backward compatibility. There is an opinion that even the most recent
version of kube-proxy is not as efficient as it could be, because the iptables
mode has its own disadvantages and potential performance limitations. We want
to verify this.
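
One way to observe the cost of the iptables mode is to look at the NAT rules
that kube-proxy programs on a node. The snippet below is an illustrative
sketch only (it is not part of the test scripts) and assumes root access on a
cluster node with kube-proxy running in iptables mode, where per-service
``KUBE-SVC-*`` chains are maintained:

.. code-block:: python

   import subprocess

   # Dump the NAT table and count the rules that kube-proxy programmed for
   # services. In iptables mode the number of KUBE-SVC-* entries grows with
   # the service count, which is the suspected source of the slowdown.
   nat_rules = subprocess.check_output(["iptables-save", "-t", "nat"]).decode().splitlines()
   svc_rules = [rule for rule in nat_rules if "KUBE-SVC-" in rule]
   print("NAT rules: {0}, service rules: {1}".format(len(nat_rules), len(svc_rules)))
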
Test Environment
----------------
Preparation
^^^^^^^^^^^
The test plan is executed against Kubernetes deployed on bare-metal nodes.
Environment description
^^^^^^^^^^^^^^^^^^^^^^^
The environment description includes the hardware specification of the
servers, network parameters, the operating system and OpenStack deployment
characteristics.
Test Case #1: Kube-proxy performance
--------------------------------------
Description
^^^^^^^^^^^
In this test case we investigate how the number of services affects Kubernetes
proxy performance.

The script :download:`code/kubeproxy/test_kubeproxy.py` creates Kubernetes
services based on the template :download:`code/kubeproxy/service.yaml`. After
that, requests are made to these services using Locust_
(:download:`code/locustfile.py`). The results show the response times.
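
The ``hosts.txt`` URL list consumed by the Locust file is not generated by the
scripts above; the following is a hypothetical sketch of how it could be built
from the created services, assuming they are queried by cluster IP on port 80
from inside the cluster network:

.. code-block:: python

   import subprocess

   # Hypothetical helper (not part of this change): collect the cluster IPs
   # of the generated services and write one URL per line for locustfile.py.
   cluster_ips = subprocess.check_output([
       "kubectl", "-n", "minions", "get", "services",
       "-o", "jsonpath={.items[*].spec.clusterIP}"
   ]).decode().split()

   with open("hosts.txt", "w") as hosts_file:
       for ip in cluster_ips:
           hosts_file.write("http://{0}/\n".format(ip))
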
Parameters
^^^^^^^^^^
**Case group 1:**

.. table::

   +---------------------+----------------------+
   | Parameter name      | Value                |
   +=====================+======================+
   | number of Services  | 100, 200, ..., 1400  |
   +---------------------+----------------------+
**Case group 2** (the pod count is varied between runs; see the sketch after
the table):

.. table::

   +---------------------+----------------------+
   | Parameter name      | Value                |
   +=====================+======================+
   | number of Services  | 10, 50               |
   +---------------------+----------------------+
   | number of Pods      | 1, 3, 5              |
   +---------------------+----------------------+
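
A possible way to vary the pod count for case group 2, assuming the ``minion``
ReplicationController from this change backs the services, is a simple scaling
loop (an illustrative sketch, not part of the test scripts):

.. code-block:: python

   import subprocess

   # Scale the "minion" ReplicationController to each pod count under test.
   for replicas in (1, 3, 5):
       subprocess.check_call([
           "kubectl", "-n", "minions", "scale", "rc", "minion",
           "--replicas={0}".format(replicas)
       ])
       # ... run the Locust load here and record response times for this count
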
List of performance metrics
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. table:: list of test metrics to be collected during this test

   +-------------------+---------------+
   | Parameter         | Description   |
   +===================+===============+
   | MIN_RESPONSE      | time in ms    |
   +-------------------+---------------+
   | MAX_RESPONSE      | time in ms    |
   +-------------------+---------------+
   | AVERAGE_RESPONSE  | time in ms    |
   +-------------------+---------------+
Reports
=======
Test plan execution reports:

* :ref:`Kubernetes_proxy_performance_test_report`
.. references:
.. _Locust: http://locust.io/


@@ -0,0 +1,183 @@
.. _Kubernetes_proxy_performance_test_report:

****************************************
Kubernetes proxy performance test report
****************************************

:Abstract:

  This document is the report for :ref:`Kubernetes_proxy_performance_test_plan`.
Environment description
=======================
This report is collected on the hardware described in
:ref:`intel_mirantis_performance_lab_1`.
Software
~~~~~~~~
Kubernetes is installed using the :ref:`Kargo` deployment tool on Ubuntu 16.04.1.

Node roles:

- node1: minion+master+etcd
- node2: minion+master+etcd
- node3: minion+etcd
- node4: minion
- node5: minion
- node6: minion

Software versions:

- OS: Ubuntu 16.04.1 LTS (Xenial Xerus)
- Kernel: 4.4.0-47-generic
- Docker: 1.13.0
- Kubernetes: v1.5.3+coreos.0
Reports
=======
Test Case #1: Kube-proxy performance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The script that adds 100 services per run was launched repeatedly to reach
each target service count.
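
A hypothetical driver for these runs (the loop itself is not included in this
change) simply repeats the script, since each run adds another batch of 100
services:

.. code-block:: python

   import subprocess

   # Each run of test_kubeproxy.py creates 100 more services, building up the
   # service counts used for case group 1.
   for batch in range(14):
       subprocess.check_call(["python", "test_kubeproxy.py"])
       # ... run the Locust load at the service counts of interest
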
Detailed Stats
--------------
Case group 1
^^^^^^^^^^^^
Note: You can download these reports in CSV format
:download:`here <reports.tar.gz>`.

.. image:: kubeproxy.png

.. list-table:: Response time stats
   :header-rows: 1

   * - SERVICE_COUNT
     - MIN_RESPONSE, ms
     - AVERAGE_RESPONSE, ms
     - MAX_RESPONSE, ms
   * - 100
     - 12
     - 821
     - 2854
   * - 200
     - 717
     - 1843
     - 4599
   * - 300
     - 1173
     - 2859
     - 7773
   * - 400
     - 1132
     - 3898
     - 9939
   * - 500
     - 1483
     - 4794
     - 10567
   * - 600
     - 2077
     - 6139
     - 13680
   * - 700
     - 3280
     - 7246
     - 20293
   * - 800
     - 3853
     - 8268
     - 19396
   * - 900
     - 5216
     - 9357
     - 21877
   * - 1000
     - 3056
     - 10844
     - 23374
   * - 1200
     - 4339
     - 13327
     - 27060
   * - 1400
     - 7168
     - 16072
     - 34114
Case group 2
^^^^^^^^^^^^
Note: These tables show how the response time depends on the number of pods.

.. image:: s10.png

.. list-table:: for 10 services
   :header-rows: 1

   * - POD_COUNT
     - MIN_RESPONSE, ms
     - AVERAGE_RESPONSE, ms
     - MAX_RESPONSE, ms
   * - 1
     - 1
     - 16
     - 1704
   * - 3
     - 1
     - 5
     - 434
   * - 5
     - 1
     - 4
     - 200

.. image:: s50.png

.. list-table:: for 50 services
   :header-rows: 1

   * - POD_COUNT
     - MIN_RESPONSE, ms
     - AVERAGE_RESPONSE, ms
     - MAX_RESPONSE, ms
   * - 1
     - 2
     - 818
     - 2818
   * - 3
     - 4
     - 317
     - 1980
   * - 5
     - 3
     - 321
     - 1634

Binary file not shown (new image, 27 KiB)

Binary file not shown (new image, 23 KiB)

Binary file not shown (new image, 22 KiB)