diff --git a/doc/source/test_results/ceph_testing/configs/Network_Scheme.png b/doc/source/test_results/ceph_testing/configs/Network_Scheme.png
new file mode 100644
index 0000000..5c800df
Binary files /dev/null and b/doc/source/test_results/ceph_testing/configs/Network_Scheme.png differ
diff --git a/doc/source/test_results/ceph_testing/configs/Report.html b/doc/source/test_results/ceph_testing/configs/Report.html
new file mode 100644
index 0000000..c637ab6
--- /dev/null
+++ b/doc/source/test_results/ceph_testing/configs/Report.html
@@ -0,0 +1,6595 @@
[Report.html: a 6595-line generated HTML report; its markup is omitted here. Recoverable summary data:]

Summary
    Compute count, OSD count, Total Ceph disks count
    (values appear as unfilled template placeholders in the report)

Random direct performance, 4KiB blocks
    Operation   IOPS +- conf% ~ dev%
    Read        9980.0 +- 0 ~ 1
    Write        296.0 +- 1 ~ 6

Random direct performance, 16MiB blocks
    Operation   BW MiBps +- conf% ~ dev%
    Read         518.0 +- 1 ~ 6
    Write       2001.0 +- 0 ~ 3

Maximal sync random write IOPS for given latency, 4KiB
    Latency ms  IOPS
    10          196
    30          >=1850
    100         >=1850
+ + + + + \ No newline at end of file diff --git a/doc/source/test_results/ceph_testing/configs/ceph-osd-1.tar.gz b/doc/source/test_results/ceph_testing/configs/ceph-osd-1.tar.gz new file mode 100644 index 0000000..94f2d8b Binary files /dev/null and b/doc/source/test_results/ceph_testing/configs/ceph-osd-1.tar.gz differ diff --git a/doc/source/test_results/ceph_testing/configs/ceph_config.yaml b/doc/source/test_results/ceph_testing/configs/ceph_config.yaml new file mode 100644 index 0000000..9c6f447 --- /dev/null +++ b/doc/source/test_results/ceph_testing/configs/ceph_config.yaml @@ -0,0 +1,1074 @@ +{ + "global_vars": { + "ceph_facts_template": "/usr/local/lib/python3.5/dist-packages/decapod_common/facts/ceph_facts_module.py.j2", + "ceph_stable": true, + "ceph_stable_distro_source": "jewel-xenial", + "ceph_stable_release": "jewel", + "ceph_stable_release_uca": "jewel-xenial", + "ceph_stable_repo": "http://mirror.fuel-infra.org/decapod/ceph/jewel-xenial", + "ceph_stable_repo_key": "AF94F6A6A254F5F0", + "ceph_stable_repo_keyserver": "hkp://keyserver.ubuntu.com:80", + "cluster": "ceph", + "cluster_network": "10.3.56.0/21", + "copy_admin_key": true, + "dmcrypt_dedicated_journal": false, + "dmcrypt_journal_collocation": false, + "fsid": "04c1462d-0333-4302-b51e-7aa14dfefb92", + "journal_collocation": false, + "journal_size": 80000, + "max_open_files": 131072, + "nfs_file_gw": false, + "nfs_obj_gw": false, + "os_tuning_params": [ + { + "name": "kernel.pid_max", + "value": 4194303 + }, + { + "name": "fs.file-max", + "value": 26234859 + } + ], + "public_network": "10.3.56.0/21", + "raw_multi_journal": true, + "restapi_template_local_path": "/usr/local/lib/python3.5/dist-packages/decapod_plugin_playbook_deploy_cluster/ceph-rest-api.service" + }, + "inventory": { + "_meta": { + "hostvars": { + "10.3.56.200": { + "ansible_user": "ansible", + "devices": [], + "monitor_interface": "bond0.130", + "raw_journal_devices": [] + }, + "10.3.56.48": { + "ansible_user": "ansible", + "devices": [], + "monitor_interface": "bond0.130", + "raw_journal_devices": [] + }, + "10.3.56.51": { + "ansible_user": "ansible", + "devices": [], + "monitor_interface": "bond0.130", + "raw_journal_devices": [] + }, + "10.3.61.80": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.100": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4", + "/dev/sdi1", + "/dev/sdi2", + "/dev/sdi3", + "/dev/sdi4" + ] + }, + "10.3.62.101": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.102": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + 
"raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.103": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.104": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4", + "/dev/sdi1", + "/dev/sdi2", + "/dev/sdi3", + "/dev/sdi4" + ] + }, + "10.3.62.105": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdi" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4", + "/dev/sdj1", + "/dev/sdj2", + "/dev/sdj3", + "/dev/sdj4" + ] + }, + "10.3.62.65": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4", + "/dev/sdj1", + "/dev/sdj2", + "/dev/sdj3", + "/dev/sdj4" + ] + }, + "10.3.62.66": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.67": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.68": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.69": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdi" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4", + "/dev/sdj1", + "/dev/sdj2", + "/dev/sdj3", + "/dev/sdj4" + ] + }, + "10.3.62.70": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.73": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + 
"/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sde1", + "/dev/sde2", + "/dev/sde3", + "/dev/sde4", + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4" + ] + }, + "10.3.62.74": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.75": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.76": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.77": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdf1", + "/dev/sdf2", + "/dev/sdf3", + "/dev/sdf4", + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4" + ] + }, + "10.3.62.78": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.79": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdc", + "/dev/sdh", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4", + "/dev/sdi1", + "/dev/sdi2", + "/dev/sdi3", + "/dev/sdi4" + ] + }, + "10.3.62.80": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.81": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.82": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdi" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4", + 
"/dev/sdj1", + "/dev/sdj2", + "/dev/sdj3", + "/dev/sdj4" + ] + }, + "10.3.62.83": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4", + "/dev/sdh1", + "/dev/sdh2", + "/dev/sdh3", + "/dev/sdh4" + ] + }, + "10.3.62.84": { + "ansible_user": "ansible", + "devices": [ + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdd1", + "/dev/sdd2", + "/dev/sdd3", + "/dev/sdd4", + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4" + ] + }, + "10.3.62.85": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.86": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdg1", + "/dev/sdg2", + "/dev/sdg3", + "/dev/sdg4", + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4" + ] + }, + "10.3.62.87": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.88": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.89": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdg1", + "/dev/sdg2", + "/dev/sdg3", + "/dev/sdg4", + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4" + ] + }, + "10.3.62.90": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdi" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4", + "/dev/sdj1", + "/dev/sdj2", + "/dev/sdj3", + "/dev/sdj4" + ] + }, + "10.3.62.91": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.92": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdi", + 
"/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4", + "/dev/sdh1", + "/dev/sdh2", + "/dev/sdh3", + "/dev/sdh4" + ] + }, + "10.3.62.93": { + "ansible_user": "ansible", + "devices": [ + "/dev/sde", + "/dev/sdf", + "/dev/sda", + "/dev/sdb", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdd1", + "/dev/sdd2", + "/dev/sdd3", + "/dev/sdd4", + "/dev/sdg1", + "/dev/sdg2", + "/dev/sdg3", + "/dev/sdg4" + ] + }, + "10.3.62.94": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.95": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.96": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.97": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + }, + "10.3.62.98": { + "ansible_user": "ansible", + "devices": [ + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sda", + "/dev/sdb", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sdd1", + "/dev/sdd2", + "/dev/sdd3", + "/dev/sdd4", + "/dev/sdc1", + "/dev/sdc2", + "/dev/sdc3", + "/dev/sdc4" + ] + }, + "10.3.62.99": { + "ansible_user": "ansible", + "devices": [ + "/dev/sdd", + "/dev/sde", + "/dev/sdf", + "/dev/sdg", + "/dev/sdc", + "/dev/sdh", + "/dev/sdi", + "/dev/sdj" + ], + "monitor_interface": "bond0.130", + "raw_journal_devices": [ + "/dev/sda1", + "/dev/sda2", + "/dev/sda3", + "/dev/sda4", + "/dev/sdb1", + "/dev/sdb2", + "/dev/sdb3", + "/dev/sdb4" + ] + } + } + }, + "clients": [], + "iscsi_gw": [], + "mdss": [], + "mons": [ + "10.3.56.200", + "10.3.56.48", + "10.3.56.51" + ], + "nfss": [], + "osds": [ + "10.3.62.67", + "10.3.62.73", + "10.3.62.76", + "10.3.62.96", + "10.3.62.94", + "10.3.62.78", + "10.3.62.92", + "10.3.62.89", + "10.3.62.102", + "10.3.62.101", + "10.3.62.98", + "10.3.62.79", + "10.3.62.77", + "10.3.62.69", + "10.3.62.66", + "10.3.62.68", + "10.3.62.83", + "10.3.62.85", + "10.3.62.88", + "10.3.62.90", + "10.3.62.99", + "10.3.62.82", + "10.3.62.70", + "10.3.62.74", + "10.3.62.75", + "10.3.62.97", + "10.3.61.80", + "10.3.62.105", + "10.3.62.86", + "10.3.62.65", + "10.3.62.91", + "10.3.62.84", + "10.3.62.104", + "10.3.62.100", + "10.3.62.93", + "10.3.62.80", + "10.3.62.81", + "10.3.62.87", + 
"10.3.62.103", + "10.3.62.95" + ], + "rbdmirrors": [], + "restapis": [], + "rgws": [] + } +} \ No newline at end of file diff --git a/doc/source/test_results/ceph_testing/configs/ceph_raw.yaml b/doc/source/test_results/ceph_testing/configs/ceph_raw.yaml new file mode 100644 index 0000000..2a5c346 --- /dev/null +++ b/doc/source/test_results/ceph_testing/configs/ceph_raw.yaml @@ -0,0 +1,17 @@ +include: default.yaml + +clouds: + ceph: ssh://root@osscr01r09c32::/root/.ssh/id_rsa + +discover: ceph + +explicit_nodes: + "ssh://root@localhost::/root/.ssh/id_rsa": testnode + + +tests: + - io: + cfg: ceph + params: + FILENAME: /dev/vdb + TEST_FILE_SIZE: 30 diff --git a/doc/source/test_results/ceph_testing/configs/result-1.png b/doc/source/test_results/ceph_testing/configs/result-1.png new file mode 100644 index 0000000..a31d4f0 Binary files /dev/null and b/doc/source/test_results/ceph_testing/configs/result-1.png differ diff --git a/doc/source/test_results/ceph_testing/configs/result-2.png b/doc/source/test_results/ceph_testing/configs/result-2.png new file mode 100644 index 0000000..e27a96d Binary files /dev/null and b/doc/source/test_results/ceph_testing/configs/result-2.png differ diff --git a/doc/source/test_results/ceph_testing/configs/result-3.png b/doc/source/test_results/ceph_testing/configs/result-3.png new file mode 100644 index 0000000..c457d1c Binary files /dev/null and b/doc/source/test_results/ceph_testing/configs/result-3.png differ diff --git a/doc/source/test_results/ceph_testing/configs/result-4.png b/doc/source/test_results/ceph_testing/configs/result-4.png new file mode 100644 index 0000000..589eca1 Binary files /dev/null and b/doc/source/test_results/ceph_testing/configs/result-4.png differ diff --git a/doc/source/test_results/ceph_testing/index.rst b/doc/source/test_results/ceph_testing/index.rst new file mode 100644 index 0000000..678c1da --- /dev/null +++ b/doc/source/test_results/ceph_testing/index.rst @@ -0,0 +1,172 @@ + +.. _ceph_rbd_performance_results_50_osd: + +*************************** +Ceph RBD performance report +*************************** + +:Abstract: + + This document includes Ceph RBD performance test results for 40 OSD nodes. + Test cluster contain 40 OSD servers and forms 581TiB ceph cluster. + +Environment description +======================= + +Environment contains 3 types of servers: + +- ceph-mon node +- ceph-osd node +- compute node + +.. table:: Amount of servers each role + + +------------+--------------+------+ + |Role |Servers count |Type | + +============+==============+======+ + |compute |1 |1 | + +------------+--------------+------+ + |ceph-mon |3 |1 | + +------------+--------------+------+ + |ceph-osd |40 |2 | + +------------+--------------+------+ + +Hardware configuration of each server +------------------------------------- + +All servers have 2 types of configuration describing in table below + +.. 
+.. table:: Description of server hardware, type 1
+
+   +-------+----------------+---------------------------------+
+   |server |vendor,model    |Dell PowerEdge R630              |
+   +-------+----------------+---------------------------------+
+   |CPU    |vendor,model    |Intel,E5-2680 v3                 |
+   |       +----------------+---------------------------------+
+   |       |processor_count |2                                |
+   |       +----------------+---------------------------------+
+   |       |core_count      |12                               |
+   |       +----------------+---------------------------------+
+   |       |frequency_MHz   |2500                             |
+   +-------+----------------+---------------------------------+
+   |RAM    |vendor,model    |Samsung, M393A2G40DB0-CPB        |
+   |       +----------------+---------------------------------+
+   |       |amount_MB       |262144                           |
+   +-------+----------------+---------------------------------+
+   |NETWORK|interface_names |eno1, eno2                       |
+   |       +----------------+---------------------------------+
+   |       |vendor,model    |Intel,X710 Dual Port             |
+   |       +----------------+---------------------------------+
+   |       |bandwidth       |10G                              |
+   |       +----------------+---------------------------------+
+   |       |interface_names |enp3s0f0, enp3s0f1               |
+   |       +----------------+---------------------------------+
+   |       |vendor,model    |Intel,X710 Dual Port             |
+   |       +----------------+---------------------------------+
+   |       |bandwidth       |10G                              |
+   +-------+----------------+---------------------------------+
+   |STORAGE|dev_name        |/dev/sda                         |
+   |       +----------------+---------------------------------+
+   |       |vendor,model    | | raid1 - Dell, PERC H730P Mini |
+   |       |                | | 2 disks Intel S3610           |
+   |       +----------------+---------------------------------+
+   |       |SSD/HDD         |SSD                              |
+   |       +----------------+---------------------------------+
+   |       |size            |3.6TB                            |
+   +-------+----------------+---------------------------------+
+
+.. table:: Description of server hardware, type 2
+
+   +-------+----------------+-------------------------------+
+   |server |vendor,model    |Lenovo ThinkServer RD650       |
+   +-------+----------------+-------------------------------+
+   |CPU    |vendor,model    |Intel,E5-2670 v3               |
+   |       +----------------+-------------------------------+
+   |       |processor_count |2                              |
+   |       +----------------+-------------------------------+
+   |       |core_count      |12                             |
+   |       +----------------+-------------------------------+
+   |       |frequency_MHz   |2500                           |
+   +-------+----------------+-------------------------------+
+   |RAM    |vendor,model    |Samsung, M393A2G40DB0-CPB      |
+   |       +----------------+-------------------------------+
+   |       |amount_MB       |131916                         |
+   +-------+----------------+-------------------------------+
+   |NETWORK|interface_names |enp3s0f0, enp3s0f1             |
+   |       +----------------+-------------------------------+
+   |       |vendor,model    |Intel,X710 Dual Port           |
+   |       +----------------+-------------------------------+
+   |       |bandwidth       |10G                            |
+   |       +----------------+-------------------------------+
+   |       |interface_names |ens2f0, ens2f1                 |
+   |       +----------------+-------------------------------+
+   |       |vendor,model    |Intel,X710 Dual Port           |
+   |       +----------------+-------------------------------+
+   |       |bandwidth       |10G                            |
+   +-------+----------------+-------------------------------+
+   |STORAGE|vendor,model    |2 disks Intel S3610            |
+   |       +----------------+-------------------------------+
+   |       |SSD/HDD         |SSD                            |
+   |       +----------------+-------------------------------+
+   |       |size            |799GB                          |
+   |       +----------------+-------------------------------+
+   |       |vendor,model    |10 disks 2T                    |
+   |       +----------------+-------------------------------+
+   |       |SSD/HDD         |HDD                            |
+   |       +----------------+-------------------------------+
+   |       |size            |2TB                            |
+   +-------+----------------+-------------------------------+
+
+Network configuration of each server
+------------------------------------
+
+All servers have the same network configuration:
+
+.. image:: configs/Network_Scheme.png
+   :alt: Network scheme of the environment
+
+Software configuration on servers with ceph-mon, ceph-osd and compute roles
+--------------------------------------------------------------------------------
+
+Ceph was deployed with the Decapod tool. The cluster configuration used by Decapod:
+:download:`ceph_config.yaml <configs/ceph_config.yaml>`
+
+.. table:: Software versions on the servers
+
+   +------------+-------------------+
+   |Software    |Version            |
+   +============+===================+
+   |Ceph        |Jewel              |
+   +------------+-------------------+
+   |Ubuntu      |Ubuntu 16.04 LTS   |
+   +------------+-------------------+
+
+You can find the outputs of some commands and the /etc folder in the following archive:
+
+| :download:`ceph-osd-1.tar.gz <configs/ceph-osd-1.tar.gz>`
+
+Testing process
+===============
+
+1. Run a virtual machine on the compute node with an attached RBD disk.
+2. SSH into the VM operating system.
+3. Clone the Wally repository.
+4. Create the :download:`ceph_raw.yaml <configs/ceph_raw.yaml>` file in the cloned
+   repository.
+5. Run the command ``python -m wally test ceph_rbd_2 ./ceph_raw.yaml``
+   (a command sketch is given at the end of this report).
+
+As a result we get the following HTML file:
+
+:download:`Report.html <configs/Report.html>`
+
+Test results
+============
+
+.. image:: configs/result-1.png
+
+.. image:: configs/result-2.png
+
+.. image:: configs/result-3.png
+
+.. image:: configs/result-4.png
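+
+The testing process above can be scripted roughly as follows. This is a minimal
+sketch rather than the exact procedure used for this report: the Wally repository
+URL, the working directory and the dependency installation step are assumptions,
+while the final ``wally`` command is quoted verbatim from the testing process::
+
+    # Inside the VM that has the RBD-backed disk attached (steps 1-2).
+    git clone https://github.com/Mirantis/disk_perf_test_tool.git wally   # assumed Wally repository URL
+    cd wally
+    pip install -r requirements.txt   # assumed dependency installation step
+
+    # Place the ceph_raw.yaml shown earlier into the cloned repository (step 4).
+    cp ~/ceph_raw.yaml .
+
+    # Run the test suite; "ceph_rbd_2" is the run name used in this report (step 5).
+    python -m wally test ceph_rbd_2 ./ceph_raw.yaml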