Lower Ceph PG count in scenario004

Each OSD can host a maximum of 200 PGs. In scenario004 we create
9 pools to enable MDS/Manila and RGW, so we need to lower the PG
count further compared to scenario001.

Also lower the values in the low-memory-usage.yaml environment file.

Change-Id: If95a0e3fe5aeef61f9712d8006e0f49c11a0c90f
Closes-Bug: 1781910
Giulio Fidente 2018-07-16 13:41:10 +02:00
parent aaafe82145
commit d348ebc34e
2 changed files with 8 additions and 2 deletions

scenario004 environment file:

@@ -101,7 +101,11 @@ parameter_defaults:
     journal_size: 512
     journal_collocation: true
     osd_scenario: collocated
-  CephPoolDefaultPgNum: 32
+  # Without MDS and RGW we create 5 pools, totalling 160 PGs at 32 PGs each
+  # With MDS and RGW instead we create 9 pools, so we lower the PG size
+  CephPoolDefaultPgNum: 16
+  ManilaCephFSDataPoolPGNum: 16
+  ManilaCephFSMetadataPoolPGNum: 16
   CephPoolDefaultSize: 1
   CephAnsibleExtraConfig:
     centos_package_dependencies: []
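
For context, here is a minimal sketch of the arithmetic behind the new
values, assuming a single OSD and replica size 1 (CephPoolDefaultSize: 1)
with every pool created at the default pg_num; the pgs_per_osd helper is
illustrative only and not part of this change:

def pgs_per_osd(num_pools, pg_num_per_pool, replica_size=1, num_osds=1):
    """Approximate number of placement groups each OSD has to host."""
    return num_pools * pg_num_per_pool * replica_size // num_osds

print(pgs_per_osd(5, 32))   # 160 -> scenario001-style pools, under the 200 PG limit
print(pgs_per_osd(9, 32))   # 288 -> with MDS/Manila and RGW pools, over the limit
print(pgs_per_osd(9, 16))   # 144 -> the new defaults stay below 200 PGs per OSD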

low-memory-usage.yaml environment file:

@@ -20,6 +20,8 @@ parameter_defaults:
   # Override defaults to get HEALTH_OK with 1 OSD (for testing only)
   CephPoolDefaultSize: 1
-  CephPoolDefaultPgNum: 32
+  CephPoolDefaultPgNum: 16
+  ManilaCephFSDataPoolPGNum: 16
+  ManilaCephFSMetadataPoolPGNum: 16
   NovaReservedHostMemory: 512