Fix #7: Facilitate running example using container

To be able to run the example using the container we need to change our
host's LVM configuration, which was not explained in the docs.

This patch adds the explanation as well as a simplified way of running
the example without touching our own host, using Vagrant + libvirt +
Ansible.
Gorka Eguileor, 2018-08-29 13:56:26 +02:00
parent 1ae465e50b
commit 713fcf7659
4 changed files with 201 additions and 6 deletions

First you need to set up your system.

The easiest way is using Vagrant + libvirt with the provided docker example,
as it will create a small VM (1GB of RAM and 1 CPU) and provision everything
so we can run a Python interpreter in a cinderlib container:

.. code-block:: shell

   $ cd examples/docker
   $ vagrant up
   $ vagrant ssh -c 'sudo docker exec -it cinderlib python'
If we don't want to use the example we have to set up an LVM VG to use:

.. code-block:: shell

   $ sudo dd if=/dev/zero of=cinder-volumes bs=1048576 seek=22527 count=1
   $ lodevice=$(sudo losetup --show -f ./cinder-volumes)
   $ sudo vgcreate cinder-volumes $lodevice
   $ sudo vgscan --cache
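The ``dd`` command above only creates a 22 GiB sparse file (it seeks past
22527 MiB and writes a single 1 MiB block); ``truncate`` achieves the same
result more directly. A small sketch using a demo file name so it can be
tried anywhere:

```shell
# Create a 22 GiB sparse backing file; no root needed for the file itself
truncate -s 22G ./cinder-volumes-demo

# Apparent size is 22 GiB, but almost no disk space is actually allocated
stat -c 'apparent bytes: %s' ./cinder-volumes-demo
du -h ./cinder-volumes-demo
```

On the real host you would use the ``cinder-volumes`` path from the commands
above and then run the same ``losetup``/``vgcreate`` steps.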
Now we can install everything on baremetal:

.. code-block:: shell

   $ sudo yum install -y centos-release-openstack-queens
   $ test -f /etc/yum/vars/contentdir || echo centos | sudo tee /etc/yum/vars/contentdir
   $ sudo yum install -y openstack-cinder targetcli python-pip
   $ sudo pip install cinderlib
Or run it in a container. To be able to run it in a container we need to
change our host's LVM configuration and set ``udev_rules = 0`` and
``udev_sync = 0`` before we start the container:

.. code-block:: shell

   $ sudo docker run --name=cinderlib --privileged --net=host \
       -v /etc/iscsi:/etc/iscsi \
       -v /dev:/dev \
       -v /etc/lvm:/etc/lvm \
       -v /var/lock/lvm:/var/lock/lvm \
       -v /lib/modules:/lib/modules:ro \
       -v /run:/run \
       -v /var/lib/iscsi:/var/lib/iscsi \
       -v /etc/localtime:/etc/localtime:ro \
       -v /root/cinder:/var/lib/cinder \
       -v /sys/kernel/config:/configfs \
       -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
       -it akrog/cinderlib:latest python
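The ``udev_sync``/``udev_rules`` change can be scripted with ``sed``. A
sketch that works on a sample copy so it is safe to experiment with; on the
real host you would point ``conf`` at ``/etc/lvm/lvm.conf`` and run the
``sed`` as root:

```shell
conf=./lvm.conf.demo   # on a real host: conf=/etc/lvm/lvm.conf (edit as root)

# Sample of the stock settings, for illustration only
printf '\tudev_sync = 1\n\tudev_rules = 1\n' > "$conf"

# Flip both knobs off, as required before starting the container
sed -i -e 's/^\(\s*\)udev_sync = 1/\1udev_sync = 0/' \
       -e 's/^\(\s*\)udev_rules = 1/\1udev_rules = 0/' "$conf"

grep udev "$conf"
```

The Ansible playbook in this commit performs the same edit with
``lineinfile``.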

==============
Docker example
==============

This Vagrant file deploys a small VM (1GB of RAM and 1 CPU) with cinderlib in
a container and with LVM properly configured to be used by the container.

This makes it really easy to use the containerized version of cinderlib:

.. code-block:: shell

   $ vagrant up
   $ vagrant ssh -c 'sudo docker exec -it cinderlib python'

Once we've run those two commands we are in a Python interpreter shell and can
run Python code to use the LVM backend:
.. code-block:: python

   import cinderlib as cl

   # Initialize the LVM driver
   lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
                    volume_group='cinder-volumes',
                    target_protocol='iscsi',
                    target_helper='lioadm',
                    volume_backend_name='lvm_iscsi')

   # Create a 1GB volume
   vol = lvm.create_volume(1)

   # Export, initialize, and do a local attach of the volume
   attach = vol.attach()
   print('Volume %s attached to %s' % (vol.id, attach.path))

   # Detach it
   vol.detach()

   # Delete it
   vol.delete()

examples/docker/Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

MEMORY = 1024
CPUS = 1

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  config.vm.box = "centos/7"

  # Override provider defaults
  config.vm.provider :libvirt do |v, override|
    override.vm.synced_folder '.', '/home/vagrant/sync', disabled: true
    v.memory = MEMORY
    v.cpus = CPUS

    # Support remote libvirt
    $libvirt_host = ENV.fetch('LIBVIRT_HOST', '')
    $libvirt_user = ENV.fetch('LIBVIRT_USER', 'root')
    v.host = $libvirt_host
    if $libvirt_host.nil? || $libvirt_host.empty?
      v.connect_via_ssh = false
    else
      v.username = $libvirt_user
      v.connect_via_ssh = true
    end
  end

  # Define the VM and provision it with Ansible
  config.vm.define :master do |master|
    master.vm.provision :ansible do |ansible|
      ansible.limit = "all"
      ansible.playbook = "site.yml"
      ansible.groups = {
        "master_node" => ["master"],
      }
      # Workaround for issue #644 on Vagrant < v1.8.6:
      # replace the ProxyCommand with the command specified by
      # `vagrant ssh-config`
      req = Gem::Requirement.new('<1.8.6')
      if req.satisfied_by?(Gem::Version.new(Vagrant::VERSION)) and not $libvirt_host.empty?
        ansible.raw_ssh_args = "-o 'ProxyCommand=ssh #{$libvirt_host} -l #{$libvirt_user} -i #{Dir.home}/.ssh/id_rsa nc %h %p'"
      end
    end
  end
end
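The ``LIBVIRT_HOST``/``LIBVIRT_USER`` lookups above let the same Vagrantfile
target a remote hypervisor; the host name below is a hypothetical example:

```shell
# Hypothetical remote libvirt host; leave LIBVIRT_HOST unset for local libvirt
export LIBVIRT_HOST=virt01.example.com
export LIBVIRT_USER=root
echo "provisioning via ${LIBVIRT_USER}@${LIBVIRT_HOST}"
```

With these exported, ``vagrant up --provider=libvirt`` connects over SSH as
configured in the provider block.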

examples/docker/site.yml
- hosts: all
  become: yes
  become_method: sudo
  tasks:
    # Accept loop devices for the LVM cinder-volumes VG and reject anything else
    - name: Disable new LVM volumes
      lineinfile:
        path: /etc/lvm/lvm.conf
        state: present
        insertafter: '# filter ='
        line: "\tfilter = [ \"a|loop|\", \"r|.*\\/|\" ]\n\tglobal_filter = [ \"a|loop|\", \"r|.*\\/|\" ]"

    # Workaround for lvcreate hanging inside the container
    # https://serverfault.com/questions/802766/calling-lvcreate-from-inside-the-container-hangs
    - lineinfile:
        path: /etc/lvm/lvm.conf
        state: present
        regexp: "^\tudev_sync = 1"
        line: "\tudev_sync = 0"

    - lineinfile:
        path: /etc/lvm/lvm.conf
        state: present
        regexp: "^\tudev_rules = 1"
        line: "\tudev_rules = 0"

    - name: Install packages
      yum: name={{ item }} state=present
      with_items:
        - iscsi-initiator-utils
        - device-mapper-multipath
        - docker

    - name: Configure multipath
      command: mpathconf --enable --with_multipathd y --user_friendly_names n --find_multipaths y

    - name: Enable services
      service: name={{ item }} state=restarted enabled=yes
      with_items:
        - iscsid
        - multipathd
        - docker

    - name: Create LVM backing file
      command: truncate -s 10G /root/cinder-volumes
      args:
        creates: /root/cinder-volumes

    - name: Create LVM loopback device
      command: losetup --show -f /root/cinder-volumes
      register: loop_device

    - name: Create PV and VG
      shell: "vgcreate cinder-volumes {{ loop_device.stdout }}"

    - command: vgscan --cache
      changed_when: false

    - file:
        path: /root/cinder
        state: directory

    - shell: >
        docker run --name=cinderlib --privileged --net=host
        -v /etc/iscsi:/etc/iscsi
        -v /dev:/dev
        -v /etc/lvm:/etc/lvm
        -v /var/lock/lvm:/var/lock/lvm
        -v /lib/modules:/lib/modules:ro
        -v /run:/run
        -v /var/lib/iscsi:/var/lib/iscsi
        -v /etc/localtime:/etc/localtime:ro
        -v /root/cinder:/var/lib/cinder
        -v /sys/kernel/config:/configfs
        -v /sys/fs/cgroup:/sys/fs/cgroup:ro
        -d akrog/cinderlib:latest sleep 365d