fuel-plugin-external-emc/doc/source/user.rst

Create a Cinder volume

Once you deploy an OpenStack environment with the EMC VNX plugin, you can start creating Cinder volumes. The following example shows how to create a 10 GB volume and attach it to a VM.

  1. Log in to a controller node.

  2. Create a Cinder volume:

    # cinder create <VOLUME_SIZE>

    The output looks as follows:

    image
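
    For example, to create the 10 GB volume used in this walkthrough (the volume name is illustrative; older cinder clients use --display-name where newer ones use --name):

    # cinder create 10 --display-name test-volume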

  3. Verify that the volume is created and is ready for use:

    # cinder list

    In the output, verify the ID of the volume and that its status is available (see the screenshot above).
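
    To inspect a single volume rather than the whole list, you can also use cinder show:

    # cinder show <volume id>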

  4. Verify the volume on EMC VNX:

    1. Add the /opt/Navisphere/bin directory to the PATH environment variable:

      # export PATH=$PATH:/opt/Navisphere/bin

    2. Save your EMC credentials to simplify the syntax of the subsequent naviseccli commands:

      # naviseccli -addusersecurity -password <password> -scope 0 \
      -user <username>

    3. List the LUNs created on EMC:

      # naviseccli -h <SP IP> lun -list

      image

    In the given example, there is one successfully created LUN with:

    • ID: 0
    • Name: volume-e1626d9e-82e8-4279-808e-5fcd18016720 (the naming scheme is volume-<Cinder volume ID>)
    • Current state: Ready

    In this example, the IP address of the EMC VNX SP is 192.168.200.30.
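
    Putting the substeps together, with the example SP address above:

    # export PATH=$PATH:/opt/Navisphere/bin
    # naviseccli -addusersecurity -password <password> -scope 0 -user <username>
    # naviseccli -h 192.168.200.30 lun -list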

  5. Get the Glance image ID and the network ID:

    # glance image-list
    # nova net-list

    image

    The network ID in the given example is 48e70690-2590-45c7-b01d-6d69322991c3.
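
    If you script these steps, you can capture both IDs into shell variables instead of copying them by hand; <IMAGE_NAME> and <NET_NAME> below are placeholders for the names shown in your own listings:

    # IMAGE_ID=$(glance image-list | awk '/<IMAGE_NAME>/ {print $2}')
    # NET_ID=$(nova net-list | awk '/<NET_NAME>/ {print $2}')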

  6. Create a new VM using the Glance image ID and the network ID:

    # nova boot --flavor 2 --image <IMAGE_ID> --nic net-id=<NET_ID> <VM_NAME>
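
    For example, with the variables captured in the previous step (the VM name test-vm is illustrative):

    # nova boot --flavor 2 --image $IMAGE_ID --nic net-id=$NET_ID test-vm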

  7. Check the status of the new VM and the node on which it has been created:

    # nova show <id>

    In the example output, the VM is running on node-3 and is active:

    image
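
    To pull out just the relevant fields, you can filter the output; the exact field names, such as OS-EXT-SRV-ATTR:host, can vary between OpenStack releases:

    # nova show <id> | grep -E 'status|:host'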

  8. Attach the Cinder volume to the VM and verify its state:

    # nova volume-attach <VM id> <volume id>
    # cinder list

    The output looks as follows:

    image
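
    After a successful attachment, the volume status in cinder list changes from available to in-use, so you can filter for the volume directly:

    # cinder list | grep <volume id>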

  9. List the storage groups configured on EMC VNX:

    # naviseccli -h <SP IP> storagegroup -list

    The output looks as follows:

    image

    In the example output, we have:

    • One storage group: node-3 with one LUN attached.
    • Four iSCSI HBA/SP pairs - one pair per SP port.
    • The LUN that has the local ID 0 (ALU Number) and that is available as LUN 133 (HLU Number) for node-3.
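
    If you are interested in a single storage group only, naviseccli can also list it by name (verify the -gname flag against your Navisphere CLI version):

    # naviseccli -h <SP IP> storagegroup -list -gname node-3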

  10. You can also check whether the iSCSI sessions are active:

    # naviseccli -h <SP IP> port -list -hba

    The output looks as follows:

    image

    Check the Logged In parameter of each port. In the example output, all four sessions are active as they have Logged In: YES.
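
    If the full listing is long, you can filter for the parameter directly:

    # naviseccli -h <SP IP> port -list -hba | grep 'Logged In'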

  11. When you log in to node-3, you can verify that:

    • The iSCSI sessions are active:

      # iscsiadm -m session

    • A multipath device has been created by the multipath daemon:

      # multipath -ll

    • The VM is using the multipath device:

      # lsof -n -p `pgrep -f <VM id>` | grep /dev/<DM device name>

    image

    In the example output, we have the following:

    • There are four active sessions (the same as on the EMC).
    • The multipath device dm-2 has been created.
    • The multipath device has four paths and all are running (one per iSCSI session).
    • QEMU is using the /dev/dm-2 multipath device.
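
    A compact way to run the same three checks in one pass (the expected counts come from the example above, and the multipath -ll output format can vary slightly between versions):

    # iscsiadm -m session | wc -l                    # expect 4 active sessions
    # multipath -ll | grep -c 'active ready'         # expect 4 running paths
    # lsof -n -p `pgrep -f <VM id>` | grep /dev/dm-  # QEMU should hold the multipath device, for example /dev/dm-2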