Switch scenario00{1,4}-multinode-containers to Ceph bluestore

Modify scenario00{1,4}-multinode to use Ceph's bluestore in place of
filestore. Bluestore is the default deployment method as of version
3.2 of ceph-ansible, so we should test it in CI. Use the lvm_volumes
parameter with pre-created LVs to avoid an issue with 'ceph-volume
batch' mode, which does not work on loopback devices.

blueprint: bluestore
Depends-On: I747ac3dca5afdc91538da40b9ed45591ac8d1662
Fixes-Bug: #1817688
(cherry picked from commit e3f697df6e)

Change-Id: Id2658ae814b580971d559af616b8ba034dff681b
Author: John Fulton
Date:   2018-02-23 17:10:08 -05:00
parent 4cc9f7479f
commit a89cd6b19b
2 changed files with 18 additions and 9 deletions
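The lvm_volumes entries in the hunks below reference pre-created LVs
rather than raw devices. A minimal sketch of how such a VG and its LVs
could be prepared on a loopback device; /dev/loop3 and the VG/LV names
come from the config, while the backing file path and sizes are
illustrative assumptions (the actual CI setup is done by the change
referenced in Depends-On):

    # back a VG with a loopback device (path and sizes are illustrative)
    dd if=/dev/zero of=/var/lib/ceph-osd.img bs=1M count=7168
    losetup /dev/loop3 /var/lib/ceph-osd.img
    pvcreate /dev/loop3
    vgcreate ceph_vg /dev/loop3
    # one LV each for data, RocksDB metadata and the WAL
    lvcreate -n ceph_lv_wal -L 576M ceph_vg
    lvcreate -n ceph_lv_db -L 1G ceph_vg
    lvcreate -n ceph_lv_data -l 100%FREE ceph_vg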


@@ -112,10 +112,15 @@ parameter_defaults:
   Debug: true
   DockerPuppetDebug: True
   CephAnsibleDisksConfig:
-    devices:
-      - /dev/loop3
-    journal_size: 512
-    osd_scenario: collocated
+    osd_objectstore: bluestore
+    osd_scenario: lvm
+    lvm_volumes:
+      - data: ceph_lv_data
+        data_vg: ceph_vg
+        db: ceph_lv_db
+        db_vg: ceph_vg
+        wal: ceph_lv_wal
+        wal_vg: ceph_vg
   CephPoolDefaultPgNum: 32
   CephPoolDefaultSize: 1
   CephPools:
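With osd_scenario: lvm, ceph-ansible hands each lvm_volumes entry to
ceph-volume. The entry above corresponds roughly to a call like the
following sketch (not the literal command the playbook runs):

    ceph-volume lvm create --bluestore \
        --data ceph_vg/ceph_lv_data \
        --block.db ceph_vg/ceph_lv_db \
        --block.wal ceph_vg/ceph_lv_wal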


@@ -93,11 +93,15 @@ parameter_defaults:
   Debug: true
   DockerPuppetDebug: True
   CephAnsibleDisksConfig:
-    devices:
-      - /dev/loop3
-    journal_size: 512
-    journal_collocation: true
-    osd_scenario: collocated
+    osd_objectstore: bluestore
+    osd_scenario: lvm
+    lvm_volumes:
+      - data: ceph_lv_data
+        data_vg: ceph_vg
+        db: ceph_lv_db
+        db_vg: ceph_vg
+        wal: ceph_lv_wal
+        wal_vg: ceph_vg
   # Without MDS and RGW we create 5 pools, totalling 160 PGs at 32 PGs each
   # With MDS and RGW instead we create 9 pools, so we lower the PG size
   CephPoolDefaultPgNum: 16
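
After a scenario job deploys with this configuration, the objectstore
can be sanity-checked from a node with the ceph client and admin
keyring; a sketch, assuming OSD id 0:

    ceph osd metadata 0 | grep osd_objectstore   # expect "bluestore"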