openstack-manuals/doc/config-reference/locale/config-reference.pot

12142 lines
677 KiB
Plaintext
Raw Blame History

This file contains ambiguous Unicode characters

This file contains Unicode characters that might be confused with other characters. If you think that this is intentional, you can safely ignore this warning. Use the Escape button to reveal them.

msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"POT-Creation-Date: 2014-12-17 06:12+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
#: ./doc/config-reference/ch_baremetalconfigure.xml:7(title)
msgid "Bare metal"
msgstr ""
#: ./doc/config-reference/ch_baremetalconfigure.xml:8(para)
msgid "The Bare metal module is capable of managing and provisioning physical machines. The configuration file of this module is <filename>/etc/ironic/ironic.conf</filename>."
msgstr ""
#: ./doc/config-reference/ch_baremetalconfigure.xml:11(para)
msgid "The following tables provide a comprehensive list of the Bare metal module configuration options."
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:7(title)
msgid "Identity service"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:8(para)
msgid "This chapter details the OpenStack Identity service configuration options. For installation prerequisites and step-by-step walkthroughs, see the <citetitle>OpenStack Installation Guide</citetitle> for your distribution (<link href=\"http://docs.openstack.org\">docs.openstack.org</link>) and <citetitle><link href=\"http://docs.openstack.org/admin-guide-cloud/content/\">Cloud Administrator Guide</link></citetitle>."
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:16(title)
msgid "Caching layer"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:17(para)
msgid "Identity supports a caching layer that is above the configurable subsystems, such as token or assignment. The majority of the caching configuration options are set in the <literal>[cache]</literal> section. However, each section that has the capability to be cached usually has a <option>caching</option> option that will toggle caching for that specific section. By default, caching is globally disabled. Options are as follows:"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:28(para)
msgid "Current functional backends are:"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:30(para)
msgid "<literal>dogpile.cache.memcached</literal> - Memcached backend using the standard python-memcached library"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:32(para)
msgid "<literal>dogpile.cache.pylibmc</literal> - Memcached backend using the pylibmc library"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:34(para)
msgid "<literal>dogpile.cache.bmemcached</literal> - Memcached using python-binary-memcached library."
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:36(para)
msgid "<literal>dogpile.cache.redis</literal> - Redis backend"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:37(para)
msgid "<literal>dogpile.cache.dbm</literal> - Local DBM file backend"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:38(para)
msgid "<literal>dogpile.cache.memory</literal> - In-memory cache, not suitable for use outside of testing as it does not cleanup it's internal cache on cache expiration and does not share cache between processes. This means that caching and cache invalidation will not be consistent or reliable."
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:44(para)
msgid "<literal>dogpile.cache.mongo</literal> - MongoDB as caching backend."
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:51(title)
msgid "Identity service configuration file"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:52(para)
msgid "The Identity service is configured in the <filename>/etc/keystone/keystone.conf</filename> file."
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml:54(para)
msgid "The following tables provide a comprehensive list of the Identity service options."
msgstr ""
#: ./doc/config-reference/ch_dataprocessingserviceconfigure.xml:7(title)
msgid "Data processing service"
msgstr ""
#: ./doc/config-reference/ch_dataprocessingserviceconfigure.xml:8(para)
msgid "The Data processing service (sahara) provides a scalable data-processing stack and associated management interfaces."
msgstr ""
#: ./doc/config-reference/ch_dataprocessingserviceconfigure.xml:13(para)
msgid "The following tables provide a comprehensive list of the Data processing service configuration options."
msgstr ""
#: ./doc/config-reference/ch_telemetryconfigure.xml:7(title) ./doc/config-reference/ch_config-overview.xml:25(para)
msgid "Telemetry"
msgstr ""
#: ./doc/config-reference/ch_telemetryconfigure.xml:8(para)
msgid "The Telemetry service collects measurements within OpenStack. Its various agents and services are configured in the <filename>/etc/ceilometer/ceilometer.conf</filename> file."
msgstr ""
#: ./doc/config-reference/ch_telemetryconfigure.xml:11(para)
msgid "To install Telemetry, see the <citetitle>OpenStack Installation Guide</citetitle> for your distribution (<link href=\"http://docs.openstack.org\">docs.openstack.org</link>)."
msgstr ""
#: ./doc/config-reference/ch_telemetryconfigure.xml:15(para)
msgid "The following tables provide a comprehensive list of the Telemetry configuration options."
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:7(title)
msgid "OpenStack configuration overview"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:8(para)
msgid "OpenStack is a collection of open source project components that enable setting up cloud services. Each component uses similar configuration techniques and a common framework for INI file options."
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:13(para)
msgid "This guide pulls together multiple references and configuration options for the following OpenStack components:"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:17(para)
msgid "OpenStack Block Storage"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:18(para)
msgid "OpenStack Compute"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:19(para)
msgid "OpenStack Dashboard"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:20(para) ./doc/config-reference/ch_databaseserviceconfigure.xml:7(title)
msgid "Database Service"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:21(para)
msgid "OpenStack Identity"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:22(para)
msgid "OpenStack Image Service"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:23(para)
msgid "OpenStack Networking"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:24(para)
msgid "OpenStack Object Storage"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml:26(para) ./doc/config-reference/ch_orchestrationconfigure.xml:7(title)
msgid "Orchestration"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:8(title) ./doc/config-reference/bk-config-ref.xml:13(titleabbrev)
msgid "OpenStack Configuration Reference"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:21(orgname) ./doc/config-reference/bk-config-ref.xml:27(holder)
msgid "OpenStack Foundation"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:25(year)
msgid "2013"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:26(year)
msgid "2014"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:29(productname)
msgid "OpenStack"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:30(releaseinfo)
msgid "juno"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:34(remark)
msgid "Copyright details are filled in by the template."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:39(para)
msgid "This document is for system administrators who want to look up configuration options. It contains lists of configuration options available with OpenStack and uses auto-generation to generate options and the descriptions from the code for each project. It includes sample configuration files."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:49(date)
msgid "2014-10-15"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:53(para)
msgid "Updates for Juno: updated all configuration tables, include sample configuration files, add chapter for Data processing service, update and enhance driver configuration."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:64(date)
msgid "2014-04-16"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:68(para)
msgid "Update for Icehouse: Updated all configuration tables, include sample configuration files, add chapters for Database Service, Orchestration, and Telemetry."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:79(date)
msgid "2014-03-11"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:83(para)
msgid "Sorted component listing. Moved procedures to the <link href=\"http://docs.openstack.org/admin-guide-cloud/content/\"><citetitle>Cloud Administrator Guide</citetitle></link>"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:92(date)
msgid "2014-01-09"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:96(para)
msgid "Removes content addressed in installation, merges duplicated content, and revises legacy references."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:105(date)
msgid "2013-10-17"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:109(para)
msgid "Havana release."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:115(date)
msgid "2013-08-16"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:119(para)
msgid "Moves Block Storage driver configuration information from the <citetitle>Block Storage Administration Guide</citetitle> to this reference."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:129(date)
msgid "2013-06-10"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml:133(para)
msgid "Initial creation of Configuration Reference."
msgstr ""
#: ./doc/config-reference/ch_orchestrationconfigure.xml:8(para)
msgid "The Orchestration service is designed to manage the lifecycle of infrastructure and applications within OpenStack clouds. Its various agents and services are configured in the <filename>/etc/heat/heat.conf</filename> file."
msgstr ""
#: ./doc/config-reference/ch_orchestrationconfigure.xml:12(para)
msgid "To install Orchestration, see the <citetitle>OpenStack Installation Guide</citetitle> for your distribution (<link href=\"http://docs.openstack.org\">docs.openstack.org</link>)."
msgstr ""
#: ./doc/config-reference/ch_orchestrationconfigure.xml:16(para)
msgid "The following tables provide a comprehensive list of the Orchestration configuration options."
msgstr ""
#: ./doc/config-reference/ch_blockstorageconfigure.xml:7(title)
msgid "Block Storage"
msgstr ""
#: ./doc/config-reference/ch_blockstorageconfigure.xml:8(para)
msgid "The OpenStack Block Storage service works with many different storage drivers that you can configure by using these instructions."
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:6(caption)
msgid "Default ports that secondary services related to OpenStack components use"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:13(th)
msgid "Service"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:14(th)
msgid "Default port"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:15(th)
msgid "Used by"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:20(td)
msgid "HTTP"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:21(td) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:349(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:406(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:485(replaceable)
msgid "80"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:22(literal)
msgid "Horizon"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:22(td)
msgid "OpenStack dashboard (<placeholder-1/>) when it is not configured to use secure access."
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:27(td)
msgid "HTTP alternate"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:28(td)
msgid "8080"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:30(literal) ./doc/config-reference/table_default-ports-primary-services.xml:83(literal)
msgid "swift"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:29(td)
msgid "OpenStack Object Storage (<placeholder-1/>) service."
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:33(td)
msgid "HTTPS"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:34(td)
msgid "443"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:35(td)
msgid "Any OpenStack service that is enabled for SSL, especially secure-access dashboard."
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:39(td)
msgid "rsync"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:40(td)
msgid "873"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:41(td)
msgid "OpenStack Object Storage. Required."
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:44(td)
msgid "iSCSI target"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:45(td)
msgid "3260"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:46(td)
msgid "OpenStack Block Storage. Required."
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:49(td)
msgid "MySQL database service"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:50(td)
msgid "3306"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:51(td)
msgid "Most OpenStack components."
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:54(td)
msgid "Message Broker (AMQP traffic)"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:55(td)
msgid "5672"
msgstr ""
#: ./doc/config-reference/table_default-ports-peripheral-services.xml:56(td)
msgid "OpenStack Block Storage, Networking, Orchestration, and Compute."
msgstr ""
#: ./doc/config-reference/ch_imageservice.xml:7(title)
msgid "Image Service"
msgstr ""
#: ./doc/config-reference/ch_imageservice.xml:8(para)
msgid "Compute relies on an external image service to store virtual machine images and maintain a catalog of available images. By default, Compute is configured to use the OpenStack Image Service (Glance), which is currently the only supported image service."
msgstr ""
#: ./doc/config-reference/ch_imageservice.xml:15(para)
msgid "If your installation requires euca2ools to register new images, you must run the <systemitem class=\"service\">nova-objectstore</systemitem> service. This service provides an Amazon S3 front-end for Glance, which is required by euca2ools."
msgstr ""
#: ./doc/config-reference/ch_imageservice.xml:20(para)
msgid "To customize the Compute Service, use the configuration option settings documented in <xref linkend=\"config_table_nova_glance\"/> and <xref linkend=\"config_table_nova_s3\"/>."
msgstr ""
#: ./doc/config-reference/ch_imageservice.xml:24(para)
msgid "You can modify many options in the OpenStack Image Service. The following tables provide a comprehensive list."
msgstr ""
#: ./doc/config-reference/ch_databaseserviceconfigure.xml:8(para)
msgid "The Database Service provides a scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines."
msgstr ""
#: ./doc/config-reference/ch_databaseserviceconfigure.xml:12(para)
msgid "The following tables provide a comprehensive list of the Database Service configuration options."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:7(title)
msgid "Object Storage"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:8(para)
msgid "OpenStack Object Storage uses multiple configuration files for multiple services and background daemons, and <placeholder-1/> to manage server configurations. Default configuration options appear in the <code>[DEFAULT]</code> section. You can override the default values by setting values in the other sections."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:18(title)
msgid "Object server configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:19(para)
msgid "Find an example object server configuration at <filename>etc/object-server.conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:22(para) ./doc/config-reference/ch_objectstorageconfigure.xml:51(para) ./doc/config-reference/ch_objectstorageconfigure.xml:76(para) ./doc/config-reference/ch_objectstorageconfigure.xml:109(para) ./doc/config-reference/ch_objectstorageconfigure.xml:126(para) ./doc/config-reference/ch_objectstorageconfigure.xml:153(para) ./doc/config-reference/ch_objectstorageconfigure.xml:182(para) ./doc/config-reference/ch_objectstorageconfigure.xml:225(para) ./doc/config-reference/ch_objectstorageconfigure.xml:234(para)
msgid "The available configuration options are:"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:42(title)
msgid "Sample object server configuration file"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:47(title)
msgid "Object expirer configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:48(para)
msgid "Find an example object expirer configuration at <filename>etc/object-expirer.conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:67(title)
msgid "Sample object expirer configuration file"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:72(title)
msgid "Container server configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:73(para)
msgid "Find an example container server configuration at <filename>etc/container-server.conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:99(title)
msgid "Sample container server configuration file"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:105(title)
msgid "Container sync realms configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:106(para)
msgid "Find an example container sync realms configuration at <filename>etc/container-sync-realms.conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:117(title)
msgid "Sample container sync realms configuration file"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:122(title)
msgid "Container reconciler configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:123(para)
msgid "Find an example container sync realms configuration at <filename>etc/container-reconciler.conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:142(title)
msgid "Sample container sync reconciler configuration file"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:149(title)
msgid "Account server configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:150(para)
msgid "Find an example account server configuration at <filename>etc/account-server.conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:173(title)
msgid "Sample account server configuration file"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:178(title)
msgid "Proxy server configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:179(para)
msgid "Find an example proxy server configuration at <filename>etc/proxy-server.conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:216(title)
msgid "Sample proxy server configuration file"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:221(title)
msgid "Proxy server memcache configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:222(para)
msgid "Find an example memcache configuration for the proxy server at <filename>etc/memcache.conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:230(title)
msgid "Rsyncd configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml:231(para)
msgid "Find an example rsyncd configuration at <filename>etc/rsyncd.conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:6(caption)
msgid "Default ports that OpenStack components use"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:12(th)
msgid "OpenStack service"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:13(th)
msgid "Default ports"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:14(th)
msgid "Port type"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:19(literal)
msgid "cinder"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:19(td)
msgid "Block Storage (<placeholder-1/>)"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:20(td)
msgid "8776"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:21(td) ./doc/config-reference/table_default-ports-primary-services.xml:26(td) ./doc/config-reference/table_default-ports-primary-services.xml:70(td) ./doc/config-reference/table_default-ports-primary-services.xml:80(td) ./doc/config-reference/table_default-ports-primary-services.xml:91(td) ./doc/config-reference/table_default-ports-primary-services.xml:108(td)
msgid "publicurl and adminurl"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:24(literal)
msgid "nova"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:24(td)
msgid "Compute (<placeholder-1/>) endpoints"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:25(td)
msgid "8774"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:29(literal) ./doc/config-reference/compute/section_nova-log-files.xml:38(td)
msgid "nova-api"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:29(td)
msgid "Compute API (<placeholder-1/>)"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:30(td)
msgid "8773, 8775"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:34(td)
msgid "Compute ports for access to virtual machine consoles"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:36(td)
msgid "5900-5999"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:41(systemitem)
msgid "openstack-nova-novncproxy"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:40(td)
msgid "Compute VNC proxy for browsers ( <placeholder-1/>)"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:42(td)
msgid "6080"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:46(systemitem)
msgid "openstack-nova-xvpvncproxy"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:45(td)
msgid "Compute VNC proxy for traditional VNC clients (<placeholder-1/>)"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:47(td)
msgid "6081"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:51(td)
msgid "Proxy port for HTML5 console used by Compute service"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:53(td)
msgid "6082"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:57(literal)
msgid "keystone"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:57(td)
msgid "Identity service (<placeholder-1/>) administrative endpoint"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:59(td)
msgid "35357"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:60(td)
msgid "adminurl"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:63(td)
msgid "Identity service public endpoint"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:64(td)
msgid "5000"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:65(td)
msgid "publicurl"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:68(literal)
msgid "glance"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:68(td)
msgid "Image Service (<placeholder-1/>) API"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:69(td)
msgid "9292"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:73(td)
msgid "Image Service registry"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:74(td)
msgid "9191"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:78(literal)
msgid "neutron"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:78(td)
msgid "Networking (<placeholder-1/>)"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:79(td)
msgid "9696"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:83(td)
msgid "Object Storage (<placeholder-1/>)"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:84(td)
msgid "6000, 6001, 6002"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:88(literal)
msgid "heat"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:88(td)
msgid "Orchestration (<placeholder-1/>) endpoint"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:90(td)
msgid "8004"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:95(literal)
msgid "openstack-heat-api-cfn"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:94(td)
msgid "Orchestration AWS CloudFormation-compatible API (<placeholder-1/>)"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:96(td)
msgid "8000"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:101(literal)
msgid "openstack-heat-api-cloudwatch"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:100(td)
msgid "Orchestration AWS CloudWatch-compatible API (<placeholder-1/>)"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:102(td)
msgid "8003"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:106(literal)
msgid "ceilometer"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:106(td)
msgid "Telemetry (<placeholder-1/>)"
msgstr ""
#: ./doc/config-reference/table_default-ports-primary-services.xml:107(td)
msgid "8777"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:7(title) ./doc/config-reference/networking/section_networking-options-reference.xml:38(title)
msgid "Compute"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:8(para)
msgid "The OpenStack Compute service is a cloud computing fabric controller, which is the main part of an IaaS system. You can use OpenStack Compute to host and manage cloud computing systems. This section describes the OpenStack Compute configuration options."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:13(para)
msgid "To configure your Compute installation, you must define configuration options in these files:"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:17(para)
msgid "<filename>nova.conf</filename>. Contains most of the Compute configuration options. Resides in the <filename>/etc/nova</filename> directory."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:22(para)
msgid "<filename>api-paste.ini</filename>. Defines Compute limits. Resides in the <filename>/etc/nova</filename> directory."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:27(para)
msgid "Related Image Service and Identity service management configuration files."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:33(title)
msgid "Configure logging"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:34(para)
msgid "You can use <filename>nova.conf</filename> file to configure where Compute logs events, the level of logging, and log formats."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:37(para)
msgid "To customize log formats for OpenStack Compute, use the configuration option settings documented in <xref linkend=\"config_table_nova_logging\"/>."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:42(title)
msgid "Configure authentication and authorization"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:43(para)
msgid "There are different methods of authentication for the OpenStack Compute project, including no authentication. The preferred system is the OpenStack Identity service, code-named Keystone."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:47(para)
msgid "To customize authorization settings for Compute, use the configuration options documented in <xref linkend=\"config_table_nova_authentication\"/>."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:50(para)
msgid "To customize certificate authority settings for Compute, use the configuration options documented in <xref linkend=\"config_table_nova_ca\"/>."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:53(para)
msgid "To customize Compute and the Identity service to use LDAP as a backend, refer to the configuration options documented in <xref linkend=\"config_table_nova_ldap\"/>."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:59(title)
msgid "Configure resize"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:60(para)
msgid "Resize (or Server resize) is the ability to change the flavor of a server, thus allowing it to upscale or downscale according to user needs. For this feature to work properly, you might need to configure some underlying virt layers."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:65(title) ./doc/config-reference/compute/section_hypervisor_kvm.xml:8(title)
msgid "KVM"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:66(para)
msgid "Resize on KVM is implemented currently by transferring the images between compute nodes over ssh. For KVM you need hostnames to resolve properly and passwordless ssh access between your compute hosts. Direct access from one compute host to another is needed to copy the VM file across."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:71(para)
msgid "Cloud end users can find out how to resize a server by reading the <link href=\"http://docs.openstack.org/user-guide/content/nova_cli_resize.html\">OpenStack End User Guide</link>."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:77(title) ./doc/config-reference/compute/section_introduction-to-xen.xml:59(title)
msgid "XenServer"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml:78(para)
msgid "To get resize to work with XenServer (and XCP), you need to establish a root trust between all hypervisor nodes and provide an /image mount point to your hypervisors dom0."
msgstr ""
#: ./doc/config-reference/app_firewalls-ports.xml:7(title)
msgid "Firewalls and default ports"
msgstr ""
#: ./doc/config-reference/app_firewalls-ports.xml:8(para)
msgid "On some deployments, such as ones where restrictive firewalls are in place, you might need to manually configure a firewall to permit OpenStack service traffic."
msgstr ""
#: ./doc/config-reference/app_firewalls-ports.xml:11(para)
msgid "To manually configure a firewall, you must permit traffic through the ports that each OpenStack service uses. This table lists the default ports that each OpenStack service uses:"
msgstr ""
#: ./doc/config-reference/app_firewalls-ports.xml:16(para)
msgid "To function properly, some OpenStack components depend on other, non-OpenStack services. For example, the OpenStack dashboard uses HTTP for non-secure communication. In this case, you must configure the firewall to allow traffic to and from HTTP."
msgstr ""
#: ./doc/config-reference/app_firewalls-ports.xml:21(para)
msgid "This table lists the ports that other OpenStack components use:"
msgstr ""
#: ./doc/config-reference/app_firewalls-ports.xml:24(para)
msgid "On some deployments, the default port used by a service may fall within the defined local port range of a host. To check a host's local port range:"
msgstr ""
#: ./doc/config-reference/app_firewalls-ports.xml:28(para)
msgid "If a service's default port falls within this range, run the following program to check if the port has already been assigned to another application:"
msgstr ""
#: ./doc/config-reference/app_firewalls-ports.xml:31(replaceable) ./doc/config-reference/compute/section_compute-cells.xml:316(replaceable)
msgid "PORT"
msgstr ""
#: ./doc/config-reference/app_firewalls-ports.xml:32(para)
msgid "Configure the service to use a different port if the default port is already being used by another application."
msgstr ""
#: ./doc/config-reference/ch_networkingconfigure.xml:7(title)
msgid "Networking"
msgstr ""
#: ./doc/config-reference/ch_networkingconfigure.xml:8(para)
msgid "This chapter explains the OpenStack Networking configuration options. For installation prerequisites, steps, and use cases, see the <citetitle>OpenStack Installation Guide</citetitle> for your distribution (<link href=\"http://docs.openstack.org\">docs.openstack.org</link>) and <citetitle><link href=\"http://docs.openstack.org/admin-guide-cloud/content/\">Cloud Administrator Guide</link></citetitle>."
msgstr ""
#: ./doc/config-reference/ch_dashboardconfigure.xml:7(title)
msgid "Dashboard"
msgstr ""
#: ./doc/config-reference/ch_dashboardconfigure.xml:8(para)
msgid "This chapter describes how to configure the OpenStack dashboard with Apache web server."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:7(title)
msgid "Networking configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:8(para)
msgid "The options and descriptions listed in this introduction are auto generated from the code in the Networking service project, which provides software-defined networking between VMs run in Compute. The list contains common options, while the subsections list the options for the various networking plug-ins."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:20(title) ./doc/config-reference/compute/section_compute-configure-xen.xml:36(title)
msgid "Agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:21(para)
msgid "Use the following options to alter agent-related settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:26(title)
msgid "API"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:27(para)
msgid "Use the following options to alter API-related settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:32(title)
msgid "Token authentication"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:33(para)
msgid "Use the following options to alter token authentication settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:39(para)
msgid "Use the following options to alter Compute-related settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:44(title)
msgid "Database"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:45(para) ./doc/config-reference/networking/section_networking-options-reference.xml:57(para)
msgid "Use the following options to alter Database-related settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:50(title) ./doc/config-reference/networking/section_networking-options-reference.xml:105(title)
msgid "Logging"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:51(para)
msgid "Use the following options to alter debug settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:56(title)
msgid "DHCP agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:62(title)
msgid "Distributed virtual router"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:63(para)
msgid "Use the following options to alter DVR-related settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:68(title)
msgid "Embrane LBaaS driver"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:69(para)
msgid "Use the following options to alter Embrane Load-Balancer-as-a-Service related settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:75(title)
msgid "Firewall-as-a-Service driver"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:76(para)
msgid "Use the following options in the <filename>fwaas_driver.ini</filename> file for the FWaaS driver."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:82(title)
msgid "IPv6 router advertisement"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:83(para)
msgid "Use the following options to alter IPv6 RA settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:88(title)
msgid "L3 agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:89(para)
msgid "Use the following options in the <filename>l3_agent.ini</filename> file for the L3 agent."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:95(title)
msgid "Load-Balancer-as-a-Service agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:96(para)
msgid "Use the following options in the <filename>lbaas_agent.ini</filename> file for the LBaaS agent."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:106(para)
msgid "Use the following options to alter logging settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:111(title)
msgid "Metadata Agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:112(para)
msgid "Use the following options in the <filename>metadata_agent.ini</filename> file for the Metadata agent."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:118(title)
msgid "Metering Agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:119(para)
msgid "Use the following options in the <filename>metering_agent.ini</filename> file for the Metering agent."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:125(title)
msgid "Policy"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:126(para)
msgid "Use the following options in the <filename>neutron.conf</filename> file to change policy settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:132(title)
msgid "Quotas"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:133(para)
msgid "Use the following options in the <filename>neutron.conf</filename> file for the quota system."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:139(title)
msgid "Rootwrap"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:140(para)
msgid "Use the following options in the <filename>neutron.conf</filename> file for the rootwrap settings"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:146(title)
msgid "Scheduler"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:147(para)
msgid "Use the following options in the <filename>neutron.conf</filename> file to change scheduler settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:153(title)
msgid "Security Groups"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:154(para)
msgid "Use the following options in the configuration file for your driver to change security group settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:160(title)
msgid "SSL and Certification Authority"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:161(para)
msgid "Use the following options in the <filename>neutron.conf</filename> file to enable SSL."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:167(title)
msgid "Testing"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:168(para)
msgid "Use the following options to alter testing-related features."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:173(title)
msgid "vArmour Firewall-as-a-Service driver"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:174(para)
msgid "Use the following options in the <filename>l3_agent.ini</filename> file for the vArmour FWaaS driver."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:180(title)
msgid "VPN"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml:181(para)
msgid "Use the following options in the <filename>vpn_agent.ini</filename> file for the VPN agent."
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:6(title)
msgid "Networking sample configuration files"
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:7(para)
msgid "All the files in this section can be found in <filename class=\"directory\">/etc/neutron/</filename>."
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:9(title)
msgid "neutron.conf"
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:10(para)
msgid "Use the <filename>neutron.conf</filename> file to configure the majority of the OpenStack Networking options."
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:17(title) ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:21(title) ./doc/config-reference/compute/section_compute-sample-configuration-files.xml:9(title)
msgid "api-paste.ini"
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:18(para)
msgid "Use the <filename>api-paste.ini</filename> to configure the OpenStack Networking API."
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:24(title) ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:37(title) ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:38(title) ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:28(title) ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:53(title) ./doc/config-reference/compute/section_compute-sample-configuration-files.xml:17(title)
msgid "policy.json"
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:25(para)
msgid "Use the <filename>policy.json</filename> file to define additional access controls that apply to the OpenStack Networking service."
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:32(title) ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:33(title) ./doc/config-reference/compute/section_compute-sample-configuration-files.xml:24(title)
msgid "rootwrap.conf"
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:33(para)
msgid "Use the <filename>rootwrap.conf</filename> file to define configuration values used by the <placeholder-1/> script when the OpenStack Networking service must escalate its privileges to those of the root user."
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:41(title)
msgid "Configuration files for plug-in agents"
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:42(para)
msgid "Each plug-in agent that runs on an OpenStack Networking node, to perform local networking configuration for the node's VMs and networking services, has its own configuration file."
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:46(title)
msgid "dhcp_agent.ini"
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:52(title)
msgid "l3_agent.ini"
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:58(title)
msgid "lbaas_agent.ini"
msgstr ""
#: ./doc/config-reference/networking/section_networking-sample-configuration-files.xml:64(title)
msgid "metadata_agent.ini"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:7(title)
msgid "Networking plug-ins"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:8(para)
msgid "OpenStack Networking introduces the concept of a plug-in, which is a back-end implementation of the OpenStack Networking API. A plug-in can use a variety of technologies to implement the logical API requests. Some OpenStack Networking plug-ins might use basic Linux VLANs and IP tables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow. These sections detail the configuration options for the various plug-ins."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:18(para)
msgid "Ryu plugin has been removed in Kilo. Ryu team recommends to migrate to ML2 plugin with ofagent mechanism driver. However, it isn't a functionality equivalent. No mechanical upgrade procedure is provided."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:23(title)
msgid "BigSwitch configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:29(title)
msgid "Brocade configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:35(title)
msgid "CISCO configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:44(title)
msgid "CloudBase Hyper-V Agent configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:51(title)
msgid "Embrane configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:58(title)
msgid "IBM SDN-VE configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:64(title)
msgid "Linux bridge Agent configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:71(title)
msgid "Mellanox configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:77(title)
msgid "Meta Plug-in configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:78(para)
msgid "The Meta Plug-in allows you to use multiple plug-ins at the same time."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:86(title)
msgid "MidoNet configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:92(title)
msgid "NEC configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:98(title)
msgid "Nuage configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:104(title)
msgid "One Convergence NVSD configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:110(title)
msgid "OpenContrail configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:116(title)
msgid "Open vSwitch Agent configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:123(title)
msgid "PLUMgrid configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:129(title)
msgid "SR-IOV configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml:135(title)
msgid "VMware NSX configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:7(title)
msgid "Modular Layer 2 (ml2) configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:8(para)
msgid "The Modular Layer 2 (ml2) plug-in has two components: network types and mechanisms. You can configure these components separately. This section describes these configuration options."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:13(title)
msgid "Configure MTU for VXLAN tunnelling"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:14(para)
msgid "Specific MTU configuration is necessary for VXLAN to function as expected:"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:18(para)
msgid "One option is to increase the MTU value of the physical interface and physical switch fabric by at least 50 bytes. For example, increase the MTU value to 1550. This value enables an automatic 50-byte MTU difference between the physical interface (1500) and the VXLAN interface (automatically 1500-50 = 1450). An MTU value of 1450 causes issues when virtual machine taps are configured at an MTU value of 1500."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:28(para)
msgid "Another option is to decrease the virtual ethernet devices' MTU. Set the <option>network_device_mtu</option> option to 1450 in the <filename>neutron.conf</filename> file, and set all guest virtual machines' MTU to the same value by using a DHCP option. For information about how to use this option, see <link href=\"http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html#openvswitch_plugin\"> Configure OVS plug-in</link>."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:40(title)
msgid "Modular Layer 2 (ml2) Flat Type configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:45(title)
msgid "Modular Layer 2 (ml2) GRE Type configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:50(title)
msgid "Modular Layer 2 (ml2) VLAN Type configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:55(title)
msgid "Modular Layer 2 (ml2) VXLAN Type configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:60(title)
msgid "Modular Layer 2 (ml2) Arista Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:67(title)
msgid "Modular Layer 2 (ml2) BigSwitch Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:72(title)
msgid "Modular Layer 2 (ml2) Brocade Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:77(title)
msgid "Modular Layer 2 (ml2) Cisco Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:82(title)
msgid "Modular Layer 2 (ml2) Freescale SDN Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:87(title)
msgid "Modular Layer 2 (ml2) Mellanox Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:92(title)
msgid "Modular Layer 2 (ml2) OpenDaylight Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:97(title)
msgid "Modular Layer 2 (ml2) OpenFlow Agent (ofagent) Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:102(title)
msgid "Modular Layer 2 (ml2) L2 Population Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:107(title)
msgid "Modular Layer 2 (ml2) Tail-f NCS Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml:112(title)
msgid "Modular Layer 2 (ml2) SR-IOV Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:7(title)
msgid "Log files used by Networking"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:8(para)
msgid "The corresponding log file of each Networking service is stored in the <filename>/var/log/neutron/</filename> directory of the host on which each service runs."
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:12(caption)
msgid "Log files used by Networking services"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:17(td) ./doc/config-reference/dashboard/section_dashboard-log-files.xml:21(td) ./doc/config-reference/block-storage/section_cinder-log-files.xml:18(td) ./doc/config-reference/compute/section_nova-log-files.xml:18(td)
msgid "Log file"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:20(td)
msgid "Service/interface"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:28(filename)
msgid "dhcp-agent.log"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:31(systemitem)
msgid "neutron-dhcp-agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:36(filename)
msgid "l3-agent.log"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:39(systemitem)
msgid "neutron-l3-agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:44(filename)
msgid "lbaas-agent.log"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:47(systemitem)
msgid "neutron-lbaas-agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:49(para)
msgid "The <systemitem class=\"service\">neutron-lbaas-agent</systemitem> service only runs when Load-Balancer-as-a-Service is enabled."
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:57(filename)
msgid "linuxbridge-agent.log"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:60(systemitem)
msgid "neutron-linuxbridge-agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:65(filename)
msgid "metadata-agent.log"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:68(systemitem)
msgid "neutron-metadata-agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:73(filename)
msgid "metering-agent.log"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:76(systemitem)
msgid "neutron-metering-agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:81(filename)
msgid "openvswitch-agent.log"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:84(systemitem)
msgid "neutron-openvswitch-agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:89(filename)
msgid "server.log"
msgstr ""
#: ./doc/config-reference/networking/section_networking-log-files.xml:92(systemitem)
msgid "neutron-server"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:8(title) ./doc/config-reference/compute/section_rpc.xml:8(title)
msgid "Configure the Oslo RPC messaging system"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:10(para) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:8(para)
msgid "OpenStack projects use an open standard for messaging middleware known as AMQP. This messaging middleware enables the OpenStack services that run on multiple servers to talk to each other. OpenStack Oslo RPC supports three implementations of AMQP: <application>RabbitMQ</application>, <application>Qpid</application>, and <application>ZeroMQ</application>."
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:19(title) ./doc/config-reference/database-service/section-databaseservice-rpc.xml:17(title) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:17(title) ./doc/config-reference/compute/section_rpc.xml:16(title)
msgid "Configure RabbitMQ"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:21(para)
msgid "OpenStack Oslo RPC uses <application>RabbitMQ</application> by default. Use these options to configure the <application>RabbitMQ</application> message system. The <option>rpc_backend</option> option is optional as long as <application>RabbitMQ</application> is the default messaging system. However, if it is included the configuration, you must set it to <literal>neutron.openstack.common.rpc.impl_kombu</literal>."
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:35(para)
msgid "Use these options to configure the <application>RabbitMQ</application> messaging system. You can configure messaging communication for different installation scenarios, tune retries for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the <option>notification_driver</option> option to <literal>neutron.openstack.common.notifier.rpc_notifier</literal> in the <filename>neutron.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:46(title) ./doc/config-reference/database-service/section-databaseservice-rpc.xml:24(title) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:42(title) ./doc/config-reference/compute/section_rpc.xml:39(title)
msgid "Configure Qpid"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:47(para)
msgid "Use these options to configure the <application>Qpid</application> messaging system for OpenStack Oslo RPC. <application>Qpid</application> is not the default messaging system, so you must enable it by setting the <option>rpc_backend</option> option in the <filename>neutron.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:56(para)
msgid "This critical option points the compute nodes to the <application>Qpid</application> broker (server). Set the <option>qpid_hostname</option> option to the host name where the broker runs in the <filename>neutron.conf</filename> file."
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:62(para) ./doc/config-reference/compute/section_rpc.xml:52(para)
msgid "The <parameter>--qpid_hostname</parameter> option accepts a host name or IP address value."
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:69(para) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:61(para) ./doc/config-reference/compute/section_rpc.xml:56(para)
msgid "If the <application>Qpid</application> broker listens on a port other than the AMQP default of <literal>5672</literal>, you must set the <option>qpid_port</option> option to that value:"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:78(para) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:68(para) ./doc/config-reference/compute/section_rpc.xml:61(para)
msgid "If you configure the <application>Qpid</application> broker to require authentication, you must add a user name and password to the configuration:"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:87(para) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:74(para) ./doc/config-reference/compute/section_rpc.xml:66(para)
msgid "By default, TCP is used as the transport. To enable SSL, set the <option>qpid_protocol</option> option:"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:94(para) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:79(para)
msgid "Use these additional options to configure the Qpid messaging driver for OpenStack Oslo RPC. These options are used infrequently."
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:102(title) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:87(title) ./doc/config-reference/compute/section_rpc.xml:75(title)
msgid "Configure ZeroMQ"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:103(para)
msgid "Use these options to configure the <application>ZeroMQ</application> messaging system for OpenStack Oslo RPC. <application>ZeroMQ</application> is not the default messaging system, so you must enable it by setting the <option>rpc_backend</option> option in the <filename>neutron.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:112(title) ./doc/config-reference/database-service/section-databaseservice-rpc.xml:38(title) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:97(title) ./doc/config-reference/compute/section_rpc.xml:85(title)
msgid "Configure messaging"
msgstr ""
#: ./doc/config-reference/networking/section_rpc-for-networking.xml:114(para) ./doc/config-reference/database-service/section-databaseservice-rpc.xml:40(para) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:99(para)
msgid "Use these common options to configure the <application>RabbitMQ</application>, <application>Qpid</application>, and <application>ZeroMq</application> messaging drivers:"
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:6(title)
msgid "Telemetry sample configuration files"
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:7(para)
msgid "All the files in this section can be found in the <filename class=\"directory\">/etc/ceilometer/</filename> directory."
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:10(title)
msgid "ceilometer.conf"
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:11(para)
msgid "The configuration for the Telemetry services and agents is found in the <filename>ceilometer.conf</filename> file."
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:13(para) ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:14(para) ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:22(para)
msgid "This file must be modified after installation."
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:20(title)
msgid "event_definitions.yaml"
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:21(para)
msgid "The <filename>event_definitions.yaml</filename> file defines how events received from other OpenStack components should be translated to Telemetry samples."
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:24(para) ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:33(para) ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:29(para)
msgid "You should not need to modify this file."
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:28(title)
msgid "pipeline.yaml"
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:29(para)
msgid "Pipelines describe a coupling between sources of samples and the corresponding sinks for transformation and publication of the data. They are defined in the <filename>pipeline.yaml</filename> file."
msgstr ""
#: ./doc/config-reference/telemetry/section_telemetry-sample-configuration-files.xml:38(para)
msgid "The <filename>policy.json</filename> file defines additional access controls that apply to the Telemetry service."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:7(title)
msgid "Identity service sample configuration files"
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:8(para)
msgid "You can find the files described in this section in the <systemitem>/etc/keystone</systemitem> directory."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:11(title)
msgid "keystone.conf"
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:12(para)
msgid "Use the <filename>keystone.conf</filename> file to configure most Identity service options:"
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:18(title)
msgid "keystone-paste.ini"
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:19(para)
msgid "Use the <filename>keystone-paste.ini</filename> file to configure the Web Service Gateway Interface (WSGI) middleware pipeline for the Identity service."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:26(title)
msgid "logging.conf"
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:27(para)
msgid "You can specify a special logging configuration file in the <filename>keystone.conf</filename> configuration file. For example, <filename>/etc/keystone/logging.conf</filename>."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:31(para)
msgid "For details, see the (<link href=\"http://docs.python.org/2/howto/logging.html#configuring-logging\">Python logging module documentation</link>)."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:39(para)
msgid "Use the <filename>policy.json</filename> file to define additional access controls that apply to the Identity service."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:45(title)
msgid "Domain-specific configuration"
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:46(para)
msgid "Identity enables you to configure domain-specific authentication drivers. For example, you can configure a domain to have its own LDAP or SQL server."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:49(para)
msgid "By default, the option to configure domain-specific drivers is disabled."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:51(para)
msgid "To enable domain-specific drivers, set these options in <literal>[identity]</literal> section in the <filename>keystone.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:57(para)
msgid "When you enable domain-specific drivers, Identity looks in the <option>domain_config_dir</option> directory for configuration files that are named as follows: <filename>keystone.<replaceable>DOMAIN_NAME</replaceable>.conf</filename>, where <replaceable>DOMAIN_NAME</replaceable> is the domain name."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-sample-conf-files.xml:63(para)
msgid "Any options that you define in the domain-specific configuration file override options in the primary configuration file for the specified domain. Any domain without a domain-specific configuration file uses only the options in the primary configuration file."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-cors.xml:7(title)
msgid "Cross-origin resource sharing"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-cors.xml:8(para)
msgid "Cross-Origin Resource Sharing (CORS) is a mechanism that allows code running in a browser (JavaScript for example) to make requests to a domain, other than the one it was originated from. OpenStack Object Storage supports CORS requests to containers and objects within the containers using metadata held on the container."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-cors.xml:12(para)
msgid "In addition to the metadata on containers, you can use the <option>cors_allow_origin</option> option in the <filename>proxy-server.conf</filename> file to set a list of hosts that are included with any CORS request by default."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:7(title)
msgid "Object Storage general service configuration"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:8(para)
msgid "Most Object Storage services fall into two categories, Object Storage's WSGI servers and background daemons."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:12(para)
msgid "Object Storage uses paste.deploy to manage server configurations. Read more at <link href=\"http://pythonpaste.org/deploy/\">http://pythonpaste.org/deploy/</link>."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:15(para)
msgid "Default configuration options are set in the `[DEFAULT]` section, and any options specified there can be overridden in any of the other sections when the syntax <literal>set option_name = value</literal> is in place."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:20(para)
msgid "Configuration for servers and daemons can be expressed together in the same file for each type of server, or separately. If a required section for the service trying to start is missing, there will be an error. Sections not used by the service are ignored."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:26(para)
msgid "Consider the example of an Object Storage node. By convention configuration for the <systemitem class=\"service\">object-server</systemitem>, <systemitem class=\"service\">object-updater</systemitem>, <systemitem class=\"service\">object-replicator</systemitem>, and <systemitem class=\"service\">object-auditor</systemitem> exist in a single file <filename>/etc/swift/object-server.conf</filename>:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:52(para)
msgid "Object Storage services expect a configuration path as the first argument:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:60(para)
msgid "If you omit the object-auditor section, this file cannot be used as the configuration path when starting the <placeholder-1/> daemon:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:68(para)
msgid "If the configuration path is a directory instead of a file, all of the files in the directory with the file extension \".conf\" will be combined to generate the configuration object which is delivered to the Object Storage service. This is referred to generally as \"directory-based configuration\"."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:75(para)
msgid "Directory-based configuration leverages ConfigParser's native multi-file support. Files ending in \".conf\" in the given directory are parsed in lexicographical order. File names starting with '.' are ignored. A mixture of file and directory configuration paths is not supported - if the configuration path is a file, only that file will be parsed."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:83(para)
msgid "The Object Storage service management tool <filename>swift-init</filename> has adopted the convention of looking for <filename>/etc/swift/{type}-server.conf.d/</filename> if the file <filename>/etc/swift/{type}-server.conf</filename> file does not exist."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:91(para)
msgid "When using directory-based configuration, if the same option under the same section appears more than once in different files, the last value parsed is said to override previous occurrences. You can ensure proper override precedence by prefixing the files in the configuration directory with numerical values, as in the following example file layout:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:110(para)
msgid "You can inspect the resulting combined configuration object using the <placeholder-1/> command-line tool."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml:114(para)
msgid "All the services of an Object Store deployment share a common configuration in the <literal>[swift-hash]</literal> section of the <filename>/etc/swift/swift.conf</filename> file. The <option>swift_hash_path_suffix</option> and <option>swift_hash_path_prefix</option> values must be identical on all the nodes."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:7(title)
msgid "Configure Object Storage features"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:9(title)
msgid "Object Storage zones"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:10(para)
msgid "In OpenStack Object Storage, data is placed across different tiers of failure domains. First, data is spread across regions, then zones, then servers, and finally across drives. Data is placed to get the highest failure domain isolation. If you deploy multiple regions, the Object Storage service places the data across the regions. Within a region, each replica of the data should be stored in unique zones, if possible. If there is only one zone, data should be placed on different servers. And if there is only one server, data should be placed on different drives."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:21(para)
msgid "Regions are widely separated installations with a high-latency or otherwise constrained network link between them. Zones are arbitrarily assigned, and it is up to the administrator of the Object Storage cluster to choose an isolation level and attempt to maintain the isolation level through appropriate zone assignment. For example, a zone may be defined as a rack with a single power source. Or a zone may be a DC room with a common utility provider. Servers are identified by a unique IP/port. Drives are locally attached storage volumes identified by mount point."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:32(para)
msgid "In small clusters (five nodes or fewer), everything is normally in a single zone. Larger Object Storage deployments may assign zone designations differently; for example, an entire cabinet or rack of servers may be designated as a single zone to maintain replica availability if the cabinet becomes unavailable (for example, due to failure of the top of rack switches or a dedicated circuit). In very large deployments, such as service provider level deployments, each zone might have an entirely autonomous switching and power infrastructure, so that even the loss of an electrical circuit or switching aggregator would result in the loss of a single replica at most."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:46(title)
msgid "Rackspace zone recommendations"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:47(para)
msgid "For ease of maintenance on OpenStack Object Storage, Rackspace recommends that you set up at least five nodes. Each node is assigned its own zone (for a total of five zones), which gives you host level redundancy. This enables you to take down a single zone for maintenance and still guarantee object availability in the event that another zone fails during your maintenance."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:55(para)
msgid "You could keep each server in its own cabinet to achieve cabinet level isolation, but you may wish to wait until your Object Storage service is better established before developing cabinet-level isolation. OpenStack Object Storage is flexible; if you later decide to change the isolation level, you can take down one zone at a time and move them to appropriate new homes."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:63(title)
msgid "RAID controller configuration"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:64(para)
msgid "OpenStack Object Storage does not require RAID. In fact, most RAID configurations cause significant performance degradation. The main reason for using a RAID controller is the battery-backed cache. It is very important for data integrity reasons that when the operating system confirms a write has been committed that the write has actually been committed to a persistent location. Most disks lie about hardware commits by default, instead writing to a faster write cache for performance reasons. In most cases, that write cache exists only in non-persistent memory. In the case of a loss of power, this data may never actually get committed to disk, resulting in discrepancies that the underlying file system must handle."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:77(para)
msgid "OpenStack Object Storage works best on the XFS file system, and this document assumes that the hardware being used is configured appropriately to be mounted with the <placeholder-1/> option. For more information, refer to the XFS FAQ: <link href=\"http://xfs.org/index.php/XFS_FAQ\">http://xfs.org/index.php/XFS_FAQ</link>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:85(para)
msgid "To get the most out of your hardware, it is essential that every disk used in OpenStack Object Storage is configured as a standalone, individual RAID 0 disk; in the case of 6 disks, you would have six RAID 0s or one JBOD. Some RAID controllers do not support JBOD or do not support battery backed cache with JBOD. To ensure the integrity of your data, you must ensure that the individual drive caches are disabled and the battery backed cache in your RAID card is configured and used. Failure to configure the controller properly in this case puts data at risk in the case of sudden loss of power."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:97(para)
msgid "You can also use hybrid drives or similar options for battery backed up cache configurations without a RAID controller."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:103(title)
msgid "Throttle resources through rate limits"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:104(para)
msgid "Rate limiting in OpenStack Object Storage is implemented as a pluggable middleware that you configure on the proxy server. Rate limiting is performed on requests that result in database writes to the account and container SQLite databases. It uses memcached and is dependent on the proxy servers having highly synchronized time. The rate limits are limited by the accuracy of the proxy server clocks."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:113(title)
msgid "Configure rate limiting"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:114(para)
msgid "All configuration is optional. If no account or container limits are provided, no rate limiting occurs. Available configuration options include:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:120(para)
msgid "The container rate limits are linearly interpolated from the values given. A sample container rate limiting could be:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:123(para)
msgid "container_ratelimit_100 = 100"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:124(para)
msgid "container_ratelimit_200 = 50"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:125(para)
msgid "container_ratelimit_500 = 20"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:126(para)
msgid "This would result in:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:128(caption)
msgid "Values for Rate Limiting with Sample Configuration Settings"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:132(td)
msgid "Container Size"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:133(td)
msgid "Rate Limit"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:136(td)
msgid "0-99"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:137(td)
msgid "No limiting"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:140(td) ./doc/config-reference/object-storage/section_object-storage-features.xml:141(td)
msgid "100"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:145(td)
msgid "150"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:146(td)
msgid "75"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:149(td)
msgid "500"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:150(td) ./doc/config-reference/object-storage/section_object-storage-features.xml:154(td)
msgid "20"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:153(td)
msgid "1000"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:161(title)
msgid "Health check"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:162(para)
msgid "Provides an easy way to monitor whether the Object Storage proxy server is alive. If you access the proxy with the path <filename>/healthcheck</filename>, it responds with <literal>OK</literal> in the response body, which monitoring tools can use."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:170(title)
msgid "Domain remap"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:171(para)
msgid "Middleware that translates container and account parts of a domain to path parameters that the proxy server understands."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:179(title)
msgid "CNAME lookup"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:180(para)
msgid "Middleware that translates an unknown domain in the host header to something that ends with the configured <code>storage_domain</code> by looking up the given domain's CNAME record in DNS."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:190(title)
msgid "Temporary URL"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:191(para)
msgid "Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in OpenStack Object Storage, but the Object Storage account has no public access. The website can generate a URL that provides GET access for a limited time to the resource. When the web browser user clicks on the link, the browser downloads the object directly from Object Storage, eliminating the need for the website to act as a proxy for the request. If the user shares the link with all his friends, or accidentally posts it on a forum, the direct access is limited to the expiration time set when the website created the link."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:202(literal)
msgid "temp_url_sig"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:204(para)
msgid "A cryptographic signature"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:208(literal)
msgid "temp_url_expires"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:210(para)
msgid "An expiration date, in Unix time"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:199(para)
msgid "A temporary URL is the typical URL associated with an object, with two additional query parameters:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:214(para)
msgid "An example of a temporary URL:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:220(para)
msgid "To create temporary URLs, first set the <literal>X-Account-Meta-Temp-URL-Key</literal> header on your Object Storage account to an arbitrary string. This string serves as a secret key. For example, to set a key of <literal>b3968d0207b54ece87cccc06515a89d4</literal> using the <placeholder-1/> command-line tool:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:225(replaceable)
msgid "b3968d0207b54ece87cccc06515a89d4"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:226(para)
msgid "Next, generate an HMAC-SHA1 (RFC 2104) signature to specify:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:230(para)
msgid "Which HTTP method to allow (typically <literal>GET</literal> or <literal>PUT</literal>)"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:235(para)
msgid "The expiry date as a Unix timestamp"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:238(para)
msgid "The full path to the object"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:241(para)
msgid "The secret key set as the <literal>X-Account-Meta-Temp-URL-Key</literal>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:245(para)
msgid "Here is code generating the signature for a GET for 24 hours on <code>/v1/AUTH_account/container/object</code>:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:260(para)
msgid "Any alteration of the resource path or query arguments results in a <errorcode>401</errorcode><errortext>Unauthorized</errortext> error. Similarly, a PUT where GET was the allowed method returns a <errorcode>401</errorcode>. HEAD is allowed if GET or PUT is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Object Storage."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:270(para)
msgid "Changing the <literal>X-Account-Meta-Temp-URL-Key</literal> invalidates any previously generated temporary URLs within 60 seconds (the memcache time for the key). Object Storage supports up to two keys, specified by <literal>X-Account-Meta-Temp-URL-Key</literal> and <literal>X-Account-Meta-Temp-URL-Key-2</literal>. Signatures are checked against both keys, if present. This is to allow for key rotation without invalidating all existing temporary URLs."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:282(para)
msgid "Object Storage includes a script called <placeholder-1/> that generates the query parameters automatically:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:289(para)
msgid "Because this command only returns the path, you must prefix the Object Storage host name (for example, <literal>https://swift-cluster.example.com</literal>)."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:292(para)
msgid "With GET Temporary URLs, a <literal>Content-Disposition</literal> header is set on the response so that browsers interpret this as a file attachment to be saved. The file name chosen is based on the object name, but you can override this with a <literal>filename</literal> query parameter. The following example specifies a filename of <filename>My Test File.pdf</filename>:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:315(emphasis)
msgid "tempurl"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:304(para)
msgid "To enable Temporary URL functionality, edit <filename>/etc/swift/proxy-server.conf</filename> to add <literal>tempurl</literal> to the <literal>pipeline</literal> variable defined in the <literal>[pipeline:main]</literal> section. The <literal>tempurl</literal> entry should appear immediately before the authentication filters in the pipeline, such as <literal>authtoken</literal>, <literal>tempauth</literal> or <literal>keystoneauth</literal>. For example:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:322(title)
msgid "Name check filter"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:323(para)
msgid "Name Check is a filter that disallows any paths that contain defined forbidden characters or that exceed a defined length."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:331(title)
msgid "Constraints"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:332(para)
msgid "To change the OpenStack Object Storage internal limits, update the values in the <literal>swift-constraints</literal> section in the <filename>swift.conf</filename> file. Use caution when you update these values because they affect the performance in the entire cluster."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:343(title)
msgid "Cluster health"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:344(para)
msgid "Use the <placeholder-1/> tool to measure overall cluster health. This tool checks if a set of deliberately distributed containers and objects are currently in their proper places within the cluster. For instance, a common deployment has three replicas of each object. The health of that object can be measured by checking if each replica is in its proper place. If only 2 of the 3 is in place the objects health can be said to be at 66.66%, where 100% would be perfect. A single objects health, especially an older object, usually reflects the health of that entire partition the object is in. If you make enough objects on a distinct percentage of the partitions in the cluster,you get a good estimate of the overall cluster health. In practice, about 1% partition coverage seems to balance well between accuracy and the amount of time it takes to gather results. The first thing that needs to be done to provide this health value is create a new account solely for this usage. Next, you need to place the containers and objects throughout the system so that they are on distinct partitions. The <placeholder-2/> tool does this by making up random container and object names until they fall on distinct partitions. Last, and repeatedly for the life of the cluster, you must run the <placeholder-3/> tool to check the health of each of these containers and objects. These tools need direct access to the entire cluster and to the ring files (installing them on a proxy server suffices). The <placeholder-4/> and <placeholder-5/> commands both use the same configuration file, <filename>/etc/swift/dispersion.conf</filename>. Example <filename>dispersion.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:385(para)
msgid "There are also configuration options for specifying the dispersion coverage, which defaults to 1%, retries, concurrency, and so on. However, the defaults are usually fine. Once the configuration is in place, run <placeholder-1/> to populate the containers and objects throughout the cluster. Now that those containers and objects are in place, you can run <placeholder-2/> to get a dispersion report, or the overall health of the cluster. Here is an example of a cluster in perfect health:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:405(para)
msgid "Now, deliberately double the weight of a device in the object ring (with replication turned off) and re-run the dispersion report to show what impact that has:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:421(para)
msgid "You can see the health of the objects in the cluster has gone down significantly. Of course, this test environment has just four devices, in a production environment with many devices the impact of one device change is much less. Next, run the replicators to get everything put back into place and then rerun the dispersion report:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:438(para)
msgid "Alternatively, the dispersion report can also be output in JSON format. This allows it to be more easily consumed by third-party utilities:"
msgstr ""
#. Usage documented in http://docs.openstack.org/developer/swift/overview_large_objects.html
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:452(title)
msgid "Static Large Object (SLO) support"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:453(para)
msgid "This feature is very similar to Dynamic Large Object (DLO) support in that it enables the user to upload many objects concurrently and afterwards download them as a single object. It is different in that it does not rely on eventually consistent container listings to do so. Instead, a user-defined manifest of the object segments is used."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:465(title)
msgid "Container quotas"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:466(para)
msgid "The <code>container_quotas</code> middleware implements simple quotas that can be imposed on Object Storage containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:471(para)
msgid "Any object PUT operations that exceed these quotas return a 403 response (forbidden)."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:473(para)
msgid "Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second TTL by default), and it is unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers are refused)."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:479(para)
msgid "Set quotas by adding meta values to the container. These values are validated when you set them:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:483(para)
msgid "X-Container-Meta-Quota-Bytes: Maximum size of the container, in bytes."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:487(para)
msgid "X-Container-Meta-Quota-Count: Maximum object count of the container."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:496(title)
msgid "Account quotas"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:497(para)
msgid "The <parameter>x-account-meta-quota-bytes</parameter> metadata entry must be requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:501(para)
msgid "The <parameter>x-account-meta-quota-bytes</parameter> metadata entry must be set to store and enable the quota. Write requests to this metadata entry are only permitted for resellers. There is no account quota limitation on a reseller account even if <parameter>x-account-meta-quota-bytes</parameter> is set."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:508(para)
msgid "Any object PUT operations that exceed the quota return a 413 response (request entity too large) with a descriptive body."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:511(para)
msgid "The following command uses an admin account that own the Reseller role to set a quota on the test account:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:515(para)
msgid "Here is the stat listing of an account where quota has been set:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:525(para)
msgid "This command removes the account quota:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:529(title)
msgid "Bulk delete"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:530(para)
msgid "Use <code>bulk-delete</code> to delete multiple files from an account with a single request. Responds to DELETE requests with a header 'X-Bulk-Delete: true_value'. The body of the DELETE request is a new line-separated list of files to delete. The files listed must be URL encoded and in the form:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:540(para)
msgid "If all files are successfully deleted (or did not exist), the operation returns <code>HTTPOk</code>. If any files failed to delete, the operation returns <code>HTTPBadGateway</code>. In both cases, the response body is a JSON dictionary that shows the number of files that were successfully deleted or not found. The files that failed are listed."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:553(title)
msgid "Drive audit"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:554(para)
msgid "The <option>swift-drive-audit</option> configuration items reference a script that can be run by using <placeholder-1/> to watch for bad drives. If errors are detected, it unmounts the bad drive, so that OpenStack Object Storage can work around it. It takes the following options:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:565(title)
msgid "Form post"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:566(para)
msgid "Middleware that provides the ability to upload objects to a cluster using an HTML form POST. The format of the form is:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:581(para)
msgid "The <literal>swift-url</literal> is the URL to the Object Storage destination, such as: <uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri> The name of each file uploaded is appended to the specified <literal>swift-url</literal>. So, you can upload directly to the root of container with a URL like: <uri>https://swift-cluster.example.com/v1/AUTH_account/container/</uri> Optionally, you can include an object prefix to better separate different users uploads, such as: <uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:591(para)
msgid "The form method must be POST and the enctype must be set as <literal>multipart/form-data</literal>."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:594(para)
msgid "The redirect attribute is the URL to redirect the browser to after the upload completes. The URL has status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as <literal>“max_file_size exceeded”</literal>)."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:601(para)
msgid "The <literal>max_file_size</literal> attribute must be included and indicates the largest single file upload that can be done, in bytes."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:604(para)
msgid "The <literal>max_file_count</literal> attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <code>&lt;![CDATA[&lt;input type=\"file\" name=\"filexx\"/&gt;]]&gt;</code> attributes if desired."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:610(para)
msgid "The expires attribute is the Unix timestamp before which the form must be submitted before it is invalidated."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:613(para)
msgid "The signature attribute is the HMAC-SHA1 signature of the form. This sample Python code shows how to compute the signature:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:630(para)
msgid "The key is the value of the <literal>X-Account-Meta-Temp-URL-Key</literal> header on the account."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:633(para)
msgid "Be certain to use the full path, from the <literal>/v1/</literal> onward."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:635(para)
msgid "The command-line tool <placeholder-1/> may be used (mostly just when testing) to compute expires and signature."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:639(para)
msgid "The file attributes must appear after the other attributes to be processed correctly. If attributes come after the file, they are not sent with the sub-request because on the server side, all attributes in the file cannot be parsed unless the whole file is read into memory and the server does not have enough memory to service these requests. So, attributes that follow the file are ignored."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:652(title)
msgid "Static web sites"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml:653(para)
msgid "When configured, this middleware serves container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests."
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:7(title)
msgid "Configure Object Storage with the S3 API"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:8(para)
msgid "The Swift3 middleware emulates the S3 REST API on top of Object Storage."
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:10(para)
msgid "The following operations are currently supported:"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:13(para)
msgid "GET Service"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:16(para)
msgid "DELETE Bucket"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:19(para)
msgid "GET Bucket (List Objects)"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:22(para)
msgid "PUT Bucket"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:25(para)
msgid "DELETE Object"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:28(para)
msgid "GET Object"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:31(para)
msgid "HEAD Object"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:34(para)
msgid "PUT Object"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:37(para)
msgid "PUT Object (Copy)"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:40(para)
msgid "To use this middleware, first download the latest version from its repository to your proxy server(s)."
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:44(para)
msgid "Then, install it using standard python mechanisms, such as:"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:47(para)
msgid "Alternatively, if you have configured the Ubuntu Cloud Archive, you may use: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:50(para)
msgid "To add this middleware to your configuration, add the <systemitem>swift3</systemitem> middleware in front of the <systemitem>swauth</systemitem> middleware, and before any other middleware that looks at Object Storage requests (like rate limiting)."
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:53(para)
msgid "Ensure that your <filename>proxy-server.conf</filename> file contains <systemitem>swift3</systemitem> in the pipeline and the <code>[filter:swift3]</code> section, as shown below:"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:61(para)
msgid "Next, configure the tool that you use to connect to the S3 API. For S3curl, for example, you must add your host IP information by adding your host IP to the @endpoints array (line 33 in s3curl.pl):"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:66(para)
msgid "Now you can send commands to the endpoint, such as:"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml:69(para)
msgid "To set up your client, ensure you are using the ec2 credentials, which can be downloaded from the <guilabel>API Endpoints</guilabel> tab of the dashboard. The host should also point to the Object Storage node's hostname. It also will have to use the old-style calling format, and not the hostname-based container format. Here is an example client setup using the Python boto library on a locally installed all-in-one Object Storage installation."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:7(title)
msgid "Endpoint listing middleware"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:8(para)
msgid "The endpoint listing middleware enables third-party services that use data locality information to integrate with OpenStack Object Storage. This middleware reduces network overhead and is designed for third-party services that run inside the firewall. Deploy this middleware on a proxy server because usage of this middleware is not authenticated."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:12(para)
msgid "Format requests for endpoints, as follows:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:13(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:14(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:15(replaceable)
msgid "{account}"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:13(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:14(replaceable)
msgid "{container}"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:13(replaceable)
msgid "{object}"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:16(para)
msgid "Use the <option>list_endpoints_path</option> configuration option in the <filename>proxy_server.conf</filename> file to customize the <literal>/endpoints/</literal> path."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:19(para)
msgid "Responses are JSON-encoded lists of endpoints, as follows:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:21(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:22(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:23(replaceable)
msgid "{server}"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:21(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:22(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:23(replaceable)
msgid "{port}"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:21(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:22(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:23(replaceable)
msgid "{dev}"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:21(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:22(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:23(replaceable)
msgid "{part}"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:21(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:22(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:23(replaceable)
msgid "{acc}"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:21(replaceable) ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:22(replaceable)
msgid "{cont}"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:21(replaceable)
msgid "{obj}"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-listendpoints.xml:24(para)
msgid "An example response is:"
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-sample-configuration-files.xml:7(title) ./doc/config-reference/compute/section_compute-sample-configuration-files.xml:6(title)
msgid "Additional sample configuration files"
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-sample-configuration-files.xml:8(para)
msgid "Find the following files in <systemitem>/etc/openstack-dashboard</systemitem>."
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-sample-configuration-files.xml:10(title)
msgid "keystone_policy.json"
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-sample-configuration-files.xml:11(para)
msgid "The <filename>keystone_policy.json</filename> file defines additional access controls for the dashboard that apply to the Identity service."
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-sample-configuration-files.xml:15(para)
msgid "The <filename>keystone_policy.json</filename> file must match the Identity service <filename>/etc/keystone/policy.json</filename> policy file."
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-sample-configuration-files.xml:23(title)
msgid "nova_policy.json"
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-sample-configuration-files.xml:24(para)
msgid "The <filename>nova_policy.json</filename> file defines additional access controls for the dashboard that apply to the Compute service."
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-sample-configuration-files.xml:28(para)
msgid "The <filename>nova_policy.json</filename> file must match the Compute <filename>/etc/nova/policy.json</filename> policy file."
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-log-files.xml:7(title)
msgid "Dashboard log files"
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-log-files.xml:8(para)
msgid "The dashboard is served to users through the Apache web server (<systemitem>httpd</systemitem>)."
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-log-files.xml:10(para)
msgid "As a result, dashboard-related logs appear in files in the <filename>/var/log/httpd</filename> or <filename>/var/log/apache2</filename> directory on the system where the dashboard is hosted. The following table describes these files:"
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-log-files.xml:16(caption)
msgid "Dashboard/httpd log files"
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-log-files.xml:22(td) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:278(th) ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:80(th) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:309(td) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:206(th) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:371(th) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:378(td) ./doc/config-reference/compute/section_compute-scheduler.xml:771(th) ./doc/config-reference/compute/section_compute-scheduler.xml:875(th)
msgid "Description"
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-log-files.xml:27(filename)
msgid "access_log"
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-log-files.xml:28(td)
msgid "Logs all attempts to access the web server."
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-log-files.xml:31(filename)
msgid "error_log"
msgstr ""
#: ./doc/config-reference/dashboard/section_dashboard-log-files.xml:32(td)
msgid "Logs all unsuccessful attempts to access the web server, along with the reason that each attempt failed."
msgstr ""
#: ./doc/config-reference/block-storage/section_misc.xml:7(title)
msgid "Additional options"
msgstr ""
#: ./doc/config-reference/block-storage/section_misc.xml:9(para)
msgid "These options can also be set in the <filename>cinder.conf</filename> file."
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:7(title)
msgid "Log files used by Block Storage"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:8(para)
msgid "The corresponding log file of each Block Storage service is stored in the <filename>/var/log/cinder/</filename> directory of the host on which each service runs."
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:12(caption)
msgid "Log files used by Block Storage services"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:21(td)
msgid "Service/interface (for CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, and SUSE Linux Enterprise)"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:25(td)
msgid "Service/interface (for Ubuntu and Debian)"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:33(filename) ./doc/config-reference/compute/section_nova-log-files.xml:33(filename)
msgid "api.log"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:36(systemitem)
msgid "openstack-cinder-api"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:39(systemitem)
msgid "cinder-api"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:44(filename)
msgid "cinder-manage.log"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:47(systemitem) ./doc/config-reference/block-storage/section_cinder-log-files.xml:50(systemitem)
msgid "cinder-manage"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:55(filename) ./doc/config-reference/compute/section_nova-log-files.xml:120(filename)
msgid "scheduler.log"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:58(systemitem)
msgid "openstack-cinder-scheduler"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:61(systemitem)
msgid "cinder-scheduler"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:66(filename)
msgid "volume.log"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:69(systemitem)
msgid "openstack-cinder-volume"
msgstr ""
#: ./doc/config-reference/block-storage/section_cinder-log-files.xml:72(systemitem) ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:179(systemitem)
msgid "cinder-volume"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:11(title)
msgid "Volume encryption with static key"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:12(para)
msgid "This is an implementation of a key manager that reads its key from the project's configuration options."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:14(para)
msgid "This key manager implementation provides limited security, assuming that the key remains secret. Volume encryption provides protection against a lost or stolen disk, assuming that the configuration file that contains the key is not stored on the disk. Encryption also protects the confidentiality of data as it is transmitted via iSCSI from the compute host to the storage host as long as an attacker who intercepts the data does not know the secret key."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:21(para)
msgid "Because this implementation uses a single, fixed key, it does not provide protection if that key is compromised. In particular, different volumes encrypted with a key provided by this key manager actually share the same encryption key so <emphasis>any</emphasis> volume can be decrypted once the fixed key is known."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:26(para)
msgid "Updates are in the pipeline which will provide true key manager support via the key management service. This will provide much better security once complete."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:29(title)
msgid "Initial configuration"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:31(para)
msgid "Configuration changes need to be made to any nodes running the <systemitem class=\"service\">cinder-volume</systemitem> or <systemitem class=\"service\">nova-compute</systemitem> services."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:33(para)
msgid "Update <systemitem class=\"service\">cinder-volume</systemitem> servers:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:35(para)
msgid "Edit the <filename>/etc/cinder/cinder.conf</filename> file and add or update the value of the option <option>fixed_key</option> in the <literal>[keymgr]</literal> section:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:43(para)
msgid "Restart <systemitem class=\"service\">cinder-volume</systemitem>."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:47(para)
msgid "Update <systemitem class=\"service\">nova-compute</systemitem> servers:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:49(para)
msgid "Edit the <filename>/etc/nova/nova.conf</filename> file and add or update the value of the option <option>fixed_key</option> in the <literal>[keymgr]</literal> section (add a keymgr section as shown if needed):"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:57(para)
msgid "Restart <systemitem class=\"service\">nova-compute</systemitem>."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:62(title)
msgid "Create encrypted volume type"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:63(para)
msgid "Block Storage volume type assignment provides a mechanism to provide scheduling to a specific back-end, and also can be used to specify specific information for a back-end storage device to act upon."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:65(para)
msgid "In this case we are creating a volume type called LUKS and providing configuration information that will tell the storage system to encrypt or decrypt the volume."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:69(para) ./doc/config-reference/block-storage/section_volume-encryption.xml:101(para)
msgid "Source your admin credentials:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:73(para)
msgid "Create the volume type:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:82(para)
msgid "Mark the volume type as encrypted and provide the necessary details:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:92(para)
msgid "Support for creating the volume type in the OpenStack dashboard (horizon) exists today, however support for tagging the type as encrypted and providing the additional information needed is still in review."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:96(title)
msgid "Create an encrypted volume"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:97(para)
msgid "Use the OpenStack dashboard (horizon), or the <placeholder-1/> command to create volumes just as you normally would. For an encrypted volume use the LUKS tag, for unencrypted leave the LUKS tag off."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:105(para)
msgid "Create an unencrypted 1GB test volume:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:132(para)
msgid "Create an encrypted 1GB test volume:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:159(para)
msgid "Notice the encrypted parameter; it will show True/False. The option <option>volume_type</option> is also shown for easy review."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:163(title)
msgid "Testing volume encryption"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:164(para)
msgid "This is a simple test scenario to help validate your encryption. It assumes an LVM based Block Storage server."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:165(para)
msgid "Perform these steps after completing the volume encryption setup and creating the volume-type for LUKS as described in the preceding sections."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:168(para)
msgid "Create a VM:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:172(para)
msgid "Create two volumes, one encrypted and one not encrypted then attach them to your VM:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:186(para)
msgid "On the VM, send some text to the newly attached volumes and synchronize them:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:193(para)
msgid "On the system hosting cinder volume services, synchronize to flush the I/O cache then test to see if your strings can be found:"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-encryption.xml:200(para)
msgid "In the above example you see that the search returns the string written to the unencrypted volume, but not the encrypted one."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:6(title)
msgid "Fibre Channel Zone Manager"
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:7(para)
msgid "The Fibre Channel Zone Manager allows FC SAN Zone/Access control management in conjunction with Fibre Channel block storage. The configuration of Fibre Channel Zone Manager and various zone drivers are described in this section."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:12(title)
msgid "Configure Block Storage to use Fibre Channel Zone Manager"
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:14(para)
msgid "If Block Storage is configured to use a Fibre Channel volume driver that supports Zone Manager, update <filename>cinder.conf</filename> to add the following configuration options to enable Fibre Channel Zone Manager."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:19(para) ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:299(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:351(para)
msgid "Make the following changes in the <filename>/etc/cinder/cinder.conf</filename> file."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:23(para)
msgid "To use different Fibre Channel Zone Drivers, use the parameters described in this section."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:26(para)
msgid "When multi backend configuration is used, provide the <option>zoning_mode</option> configuration option as part of the volume driver configuration where <option>volume_driver</option> option is specified."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:32(para)
msgid "Default value of <option>zoning_mode</option> is <literal>None</literal> and this needs to be changed to <literal>fabric</literal> to allow fabric zoning."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:37(para)
msgid "<option>zoning_policy</option> can be configured as <literal>initiator-target</literal> or <literal>initiator</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:44(title)
msgid "Brocade Fibre Channel Zone Driver"
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:45(para)
msgid "Brocade Fibre Channel Zone Driver performs zoning operations via SSH. Configure Brocade Zone Driver and lookup service by specifying the following parameters:"
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:49(para) ./doc/config-reference/block-storage/section_fc-zoning.xml:78(para)
msgid "Configure SAN fabric parameters in the form of fabric groups as described in the example below:"
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:53(para) ./doc/config-reference/block-storage/section_fc-zoning.xml:82(para)
msgid "Define a fabric group for each fabric using the fabric names used in <option>fc_fabric_names</option> configuration option as group name."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:59(title) ./doc/config-reference/block-storage/section_fc-zoning.xml:92(title) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:15(title) ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:16(title) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:42(title) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:13(title) ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:13(title) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:23(title) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:25(title) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:16(title)
msgid "System requirements"
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:60(para)
msgid "Brocade Fibre Channel Zone Driver requires firmware version FOS v6.4 or higher."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:62(para)
msgid "As a best practice for zone management, use a user account with <literal>zoneadmin</literal> role. Users with <literal>admin</literal> role (including the default <literal>admin</literal> user account) are limited to a maximum of two concurrent SSH sessions."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:68(para)
msgid "For information about how to manage Brocade Fibre Channel switches, see the Brocade Fabric OS user documentation."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:73(title)
msgid "Cisco Fibre Channel Zone Driver"
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:74(para)
msgid "Cisco Fibre Channel Zone Driver performs zoning operations via SSH. Configure Cisco Zone Driver and lookup service by specifying the following parameters:"
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:86(para)
msgid "The Cisco Fibre Channel Zone Driver supports basic and enhanced zoning modes.The zoning VSAN must exist with an active zone set name which is same as the <option>fc_fabric_names</option> parameter."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:93(para)
msgid "Cisco MDS 9000 Family Switches."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:94(para)
msgid "Cisco MDS NX-OS Release 6.2(9) or later."
msgstr ""
#: ./doc/config-reference/block-storage/section_fc-zoning.xml:95(para)
msgid "For information about how to manage Cisco Fibre Channel switches, see the Cisco MDS 9000 user documentation."
msgstr ""
#: ./doc/config-reference/block-storage/section_backup-drivers.xml:7(title)
msgid "Backup drivers"
msgstr ""
#: ./doc/config-reference/block-storage/section_backup-drivers.xml:8(para)
msgid "This section describes how to configure the <systemitem class=\"service\">cinder-backup</systemitem> service and its drivers."
msgstr ""
#: ./doc/config-reference/block-storage/section_backup-drivers.xml:11(para)
msgid "The volume drivers are included with the Block Storage repository (<link href=\"https://github.com/openstack/cinder\">https://github.com/openstack/cinder</link>). To set a backup driver, use the <literal>backup_driver</literal> flag. By default there is no backup driver enabled."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:7(title)
msgid "Introduction to the Block Storage service"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:8(para)
msgid "The OpenStack Block Storage service provides persistent block storage resources that OpenStack Compute instances can consume. This includes secondary attached storage similar to the Amazon Elastic Block Storage (EBS) offering. In addition, you can write images to a Block Storage device for Compute to use as a bootable persistent instance."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:12(para)
msgid "The Block Storage service differs slightly from the Amazon EBS offering. The Block Storage service does not provide a shared storage solution like NFS. With the Block Storage service, you can attach a device to only one instance."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:15(para)
msgid "The Block Storage service provides:"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:18(para)
msgid "<systemitem class=\"service\">cinder-api</systemitem>. A WSGI app that authenticates and routes requests throughout the Block Storage service. It supports the OpenStack APIs only, although there is a translation that can be done through Compute's EC2 interface, which calls in to the Block Storage client."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:24(para)
msgid "<systemitem class=\"service\">cinder-scheduler</systemitem>. Schedules and routes requests to the appropriate volume service. Depending upon your configuration, this may be simple round-robin scheduling to the running volume services, or it can be more sophisticated through the use of the Filter Scheduler. The Filter Scheduler is the default and enables filters on things like Capacity, Availability Zone, Volume Types, and Capabilities as well as custom filters."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:32(para)
msgid "<systemitem class=\"service\">cinder-volume</systemitem>. Manages Block Storage devices, specifically the back-end devices themselves."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:37(para)
msgid "<systemitem class=\"service\">cinder-backup</systemitem>. Provides a means to back up a Block Storage volume to OpenStack Object Storage (swift)."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:42(para)
msgid "The Block Storage service contains the following components:"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:45(para)
msgid "<emphasis role=\"bold\">Back-end Storage Devices</emphasis>. The Block Storage service requires some form of back-end storage that the service is built on. The default implementation is to use LVM on a local volume group named \"cinder-volumes.\" In addition to the base driver implementation, the Block Storage service also provides the means to add support for other storage devices to be utilized such as external Raid Arrays or other storage appliances. These back-end storage devices may have custom block sizes when using KVM or QEMU as the hypervisor."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:54(para)
msgid "<emphasis role=\"bold\">Users and Tenants (Projects)</emphasis>. The Block Storage service can be used by many different cloud computing consumers or customers (tenants on a shared system), using role-based access assignments. Roles control the actions that a user is allowed to perform. In the default configuration, most actions do not require a particular role, but this can be configured by the system administrator in the appropriate <filename>policy.json</filename> file that maintains the rules. A user's access to particular volumes is limited by tenant, but the user name and password are assigned per user. Key pairs granting access to a volume are enabled per user, but quotas to control resource consumption across available hardware resources are per tenant."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:64(para)
msgid "For tenants, quota controls are available to limit:"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:68(para)
msgid "The number of volumes that can be created."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:72(para)
msgid "The number of snapshots that can be created."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:76(para)
msgid "The total number of GBs allowed per tenant (shared between snapshots and volumes)."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:80(para)
msgid "You can revise the default quota values with the Block Storage CLI, so the limits placed by quotas are editable by admin users."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:84(para)
msgid "<emphasis role=\"bold\">Volumes, Snapshots, and Backups</emphasis>. The basic resources offered by the Block Storage service are volumes and snapshots which are derived from volumes and volume backups:"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:89(para)
msgid "<emphasis role=\"bold\">Volumes</emphasis>. Allocated block storage resources that can be attached to instances as secondary storage or they can be used as the root store to boot instances. Volumes are persistent R/W block storage devices most commonly attached to the compute node through iSCSI."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:98(para)
msgid "<emphasis role=\"bold\">Snapshots</emphasis>. A read-only point in time copy of a volume. The snapshot can be created from a volume that is currently in use (through the use of <parameter>--force True</parameter>) or in an available state. The snapshot can then be used to create a new volume through create from snapshot."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml:105(para)
msgid "<emphasis role=\"bold\">Backups</emphasis>. An archived copy of a volume currently stored in OpenStack Object Storage (swift)."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-drivers.xml:7(title)
msgid "Volume drivers"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-drivers.xml:8(para)
msgid "To use different volume drivers for the <systemitem class=\"service\">cinder-volume</systemitem> service, use the parameters described in these sections."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-drivers.xml:11(para)
msgid "The volume drivers are included in the Block Storage repository (<link href=\"https://github.com/openstack/cinder\">https://github.com/openstack/cinder</link>). To set a volume driver, use the <literal>volume_driver</literal> flag. The default is:"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:6(title)
msgid "Block Storage sample configuration files"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:7(para)
msgid "All the files in this section can be found in <systemitem>/etc/cinder</systemitem>."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:9(title) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:386(filename)
msgid "cinder.conf"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:10(para)
msgid "The <filename>cinder.conf</filename> file is installed in <filename>/etc/cinder</filename> by default. When you manually install the Block Storage service, the options in the <filename>cinder.conf</filename> file are set to default values."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:14(para)
msgid "The <filename>cinder.conf</filename> file contains most of the options to configure the Block Storage service."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:22(para)
msgid "Use the <filename>api-paste.ini</filename> file to configure the Block Storage API service."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:29(para)
msgid "The <filename>policy.json</filename> file defines additional access controls that apply to the Block Storage service."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-sample-configuration-files.xml:34(para)
msgid "The <filename>rootwrap.conf</filename> file defines configuration values used by the <placeholder-1/> script when the Block Storage service must escalate its privileges to those of the root user."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:6(title)
msgid "Huawei storage driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:7(para)
msgid "The Huawei driver supports the iSCSI and Fibre Channel connections and enables OceanStor T series unified storage, OceanStor Dorado high-performance storage, and OceanStor HVS high-end storage to provide block storage services for OpenStack."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:11(title) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:24(title) ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:42(title) ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:30(title) ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:93(title) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:66(title) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:316(title) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:39(title) ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:56(title) ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:18(title) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:40(title) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:10(title) ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:21(title) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:16(title) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:35(title) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:231(title) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:24(title)
msgid "Supported operations"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:12(para)
msgid "OceanStor T series unified storage supports these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:17(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:40(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:58(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:72(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:27(para) ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:45(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:69(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:319(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:43(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:67(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:88(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:106(para) ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:21(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:43(para) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:12(para) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:18(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:39(para) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:236(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:31(para)
msgid "Create, delete, attach, and detach volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:20(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:43(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:75(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:30(para) ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:48(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:72(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:322(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:46(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:70(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:91(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:109(para) ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:24(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:46(para) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:15(para) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:21(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:42(para) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:245(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:34(para)
msgid "Create, list, and delete volume snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:23(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:84(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:33(para) ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:51(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:75(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:325(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:49(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:73(para) ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:65(para) ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:27(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:49(para) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:24(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:60(para) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:252(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:37(para)
msgid "Create a volume from a snapshot."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:26(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:46(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:61(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:78(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:36(para) ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:54(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:78(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:328(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:52(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:76(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:94(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:112(para) ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:71(para) ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:30(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:52(para) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:27(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:45(para) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:255(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:40(para)
msgid "Copy an image to a volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:29(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:49(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:64(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:81(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:39(para) ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:57(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:81(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:331(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:55(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:79(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:97(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:115(para) ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:68(para) ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:33(para) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:30(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:48(para) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:268(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:43(para)
msgid "Copy a volume to an image."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:32(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:87(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:42(para) ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:60(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:84(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:58(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:82(para) ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:74(para) ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:36(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:55(para) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:18(para) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:33(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:51(para) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:283(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:46(para)
msgid "Clone a volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:35(para)
msgid "OceanStor Dorado5100 supports these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:52(para)
msgid "OceanStor Dorado2100 G2 supports these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:67(para)
msgid "OceanStor HVS supports these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:92(title)
msgid "Configure Cinder nodes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:93(para)
msgid "In <filename>/etc/cinder</filename>, create the driver configuration file named <filename>cinder_huawei_conf.xml</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:96(para)
msgid "You must configure <option>Product</option> and <option>Protocol</option> to specify a storage system and link type. The following uses the iSCSI driver as an example. The driver configuration file of OceanStor T series unified storage is shown as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:127(para)
msgid "The driver configuration file of OceanStor Dorado5100 is shown as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:153(para)
msgid "The driver configuration file of OceanStor Dorado2100 G2 is shown as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:177(para)
msgid "The driver configuration file of OceanStor HVS is shown as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:203(para)
msgid "You do not need to configure the iSCSI target IP address for the Fibre Channel driver. In the prior example, delete the iSCSI configuration:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:213(para)
msgid "To add <option>volume_driver</option> and <option>cinder_huawei_conf_file</option> items, you can modify the <filename>cinder.conf</filename> configuration file as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:219(para)
msgid "You can configure multiple Huawei back-end storages as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:230(para)
msgid "OceanStor HVS storage system supports the QoS function. You must create a QoS policy for the HVS storage system and create the volume type to enable QoS as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:239(para)
msgid "<option>OpenStack_QoS_high</option> is a QoS policy created by a user for the HVS storage system. <option>QoS_high</option> is the self-defined volume type. Set the <option>io_priority</option> option to <literal>high</literal>, <literal>normal</literal>, or <literal>low</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:247(para)
msgid "OceanStor HVS storage system supports the SmartTier function. SmartTier has three tiers. You can create the volume type to enable SmartTier as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:255(para)
msgid "<option>distribute_policy</option> and <option>transfer_strategy</option> can only be set to <literal>high</literal>, <literal>normal</literal>, or <literal>low</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:262(title)
msgid "Configuration file details"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:263(para)
msgid "This table describes the Huawei storage driver configuration options:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:266(caption)
msgid "Huawei storage driver configuration options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:275(th) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:203(th)
msgid "Flag name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:276(th) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:307(td) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:204(th) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:376(td)
msgid "Type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:277(th) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:308(td) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:205(th) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:377(td)
msgid "Default"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:284(option)
msgid "Product"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:287(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:301(td) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:312(td) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:324(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:336(td) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:348(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:363(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:468(td) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:316(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:324(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:381(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:393(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:410(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:421(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:213(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:228(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:266(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:385(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:414(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:423(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:448(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:464(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:475(para)
msgid "Required"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:293(para)
msgid "Type of a storage product. Valid values are <literal>T</literal>, <literal>Dorado</literal>, or <literal>HVS</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:300(option)
msgid "Protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:306(literal) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:416(para)
msgid "iSCSI"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:307(literal)
msgid "FC"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:305(td)
msgid "Type of a protocol. Valid values are <placeholder-1/> or <placeholder-2/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:311(option)
msgid "ControllerIP0"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:316(td)
msgid "IP address of the primary controller (not required for the HVS)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:321(option)
msgid "ControllerIP1"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:330(para)
msgid "IP address of the secondary controller (not required for the HVS)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:335(option)
msgid "HVSURL"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:340(td)
msgid "Access address of the Rest port (required only for the HVS)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:345(option)
msgid "UserName"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:354(para)
msgid "User name of an administrator"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:359(option)
msgid "UserPassword"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:369(para)
msgid "Password of an administrator"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:374(option)
msgid "LUNType"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:377(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:394(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:412(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:430(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:443(td) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:458(td) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:480(td) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:491(td) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:501(td) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:512(td) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:520(td) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:332(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:344(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:354(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:366(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:432(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:443(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:221(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:275(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:299(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:310(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:334(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:345(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:366(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:384(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:394(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:415(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:426(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:436(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:452(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:394(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:404(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:434(para)
msgid "Optional"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:380(para)
msgid "Thin"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:383(para)
msgid "Type of a created LUN. Valid values are <literal>Thick</literal> or <literal>Thin</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:390(option)
msgid "StripUnitSize"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:397(para)
msgid "64"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:400(para)
msgid "Stripe depth of a created LUN. The value is expressed in KB."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:402(para)
msgid "This flag is not valid for a thin LUN."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:408(option)
msgid "WriteType"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:415(para) ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:433(para)
msgid "1"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:418(para)
msgid "Cache write method. The method can be write back, write through, or Required write back. The default value is <literal>1</literal>, indicating write back."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:427(option)
msgid "MirrorSwitch"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:436(para)
msgid "Cache mirroring policy. The default value is <literal>1</literal>, indicating that a mirroring policy is used."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:442(option)
msgid "Prefetch Type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:445(para)
msgid "3"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:448(para)
msgid "Cache prefetch strategy. The strategy can be constant prefetch, variable prefetch, or intelligent prefetch. Default value is <literal>3</literal>, which indicates intelligent prefetch and is not required for the HVS."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:457(option)
msgid "Prefetch Value"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:460(para) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:433(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:385(para)
msgid "0"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:463(para)
msgid "Cache prefetch value."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:467(option)
msgid "StoragePool"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:473(para)
msgid "Name of a storage pool that you want to use. Not required for the Dorado2100 G2."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:479(option)
msgid "DefaultTargetIP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:485(para)
msgid "Default IP address of the iSCSI port provided for compute nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:490(option)
msgid "Initiator Name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:496(para)
msgid "Name of a compute node initiator."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:500(option)
msgid "Initiator TargetIP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:506(para)
msgid "IP address of the iSCSI port provided for compute nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:511(option)
msgid "OSType"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:514(para)
msgid "Linux"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:516(td)
msgid "The OS type for a compute node."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:519(option)
msgid "HostIP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:524(td)
msgid "The IPs for compute nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:531(para)
msgid "You can configure one iSCSI target port for each or all compute nodes. The driver checks whether a target port IP address is configured for the current compute node. If not, select <option>DefaultTargetIP</option>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:538(para)
msgid "You can configure multiple storage pools in one configuration file, which supports the use of multiple storage pools in a storage system. (HVS allows configuration of only one storage pool.)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:545(para)
msgid "For details about LUN configuration information, see the <placeholder-1/> command in the command-line interface (CLI) documentation or run the <placeholder-2/> on the storage system CLI."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml:554(para)
msgid "After the driver is loaded, the storage system obtains any modification of the driver configuration file in real time and you do not need to restart the <systemitem class=\"service\">cinder-volume</systemitem> service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:4(title)
msgid "IBM GPFS volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:5(para)
msgid "IBM General Parallel File System (GPFS) is a cluster file system that provides concurrent access to file systems from multiple nodes. The storage provided by these nodes can be direct attached, network attached, SAN attached, or a combination of these methods. GPFS provides many features beyond common data access, including data replication, policy based storage management, and space efficient file snapshot and clone operations."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:14(title)
msgid "How the GPFS driver works"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:15(para)
msgid "The GPFS driver enables the use of GPFS in a fashion similar to that of the NFS driver. With the GPFS driver, instances do not actually access a storage device at the block level. Instead, volume backing files are created in a GPFS file system and mapped to instances, which emulate a block device."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:23(para)
msgid "GPFS software must be installed and running on nodes where Block Storage and Compute services run in the OpenStack environment. A GPFS file system must also be created and mounted on these nodes before starting the <literal>cinder-volume</literal> service. The details of these GPFS specific steps are covered in <citetitle>GPFS: Concepts, Planning, and Installation Guide</citetitle> and <citetitle>GPFS: Administration and Programming Reference</citetitle>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:35(para)
msgid "Optionally, the Image Service can be configured to store images on a GPFS file system. When a Block Storage volume is created from an image, if both image data and volume data reside in the same GPFS file system, the data from image file is moved efficiently to the volume file using copy-on-write optimization strategy."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:43(title)
msgid "Enable the GPFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:44(para)
msgid "To use the Block Storage service with the GPFS driver, first set the <literal>volume_driver</literal> in <filename>cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:48(para)
msgid "The following table contains the configuration options supported by the GPFS driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:53(para)
msgid "The <literal>gpfs_images_share_mode</literal> flag is only valid if the Image Service is configured to use GPFS with the <literal>gpfs_images_dir</literal> flag. When the value of this flag is <literal>copy_on_write</literal>, the paths specified by the <literal>gpfs_mount_point_base</literal> and <literal>gpfs_images_dir</literal> flags must both reside in the same GPFS file system and in the same GPFS file set."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:66(title)
msgid "Volume creation options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:67(para)
msgid "It is possible to specify additional volume configuration options on a per-volume basis by specifying volume metadata. The volume is created using the specified options. Changing the metadata after the volume is created has no effect. The following table lists the volume creation options supported by the GPFS volume driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:75(caption)
msgid "Volume Create Options for GPFS Volume Drive"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:79(th)
msgid "Metadata Item Name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:85(literal) ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:98(literal) ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:99(literal)
msgid "fstype"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:88(literal)
msgid "fstype=swap"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:86(td)
msgid "Specifies whether to create a file system or a swap area on the new volume. If <placeholder-1/> is specified, the mkswap command is used to create a swap area. Otherwise the mkfs command is passed the specified file system type, for example ext3, ext4 or ntfs."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:96(literal)
msgid "fslabel"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:97(td)
msgid "Sets the file system label for the file system specified by <placeholder-1/> option. This value is only used if <placeholder-2/> is specified."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:103(literal)
msgid "data_pool_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:105(para)
msgid "Specifies the GPFS storage pool to which the volume is to be assigned. Note: The GPFS storage pool must already have been created."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:111(literal)
msgid "replicas"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:113(para)
msgid "Specifies how many copies of the volume file to create. Valid values are 1, 2, and, for GPFS V3.5.0.7 and later, 3. This value cannot be greater than the value of the <literal>MaxDataReplicas</literal> attribute of the file system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:122(literal)
msgid "dio"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:124(para)
msgid "Enables or disables the Direct I/O caching policy for the volume file. Valid values are <literal>yes</literal> and <literal>no</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:130(literal)
msgid "write_affinity_depth"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:132(para)
msgid "Specifies the allocation policy to be used for the volume file. Note: This option only works if <literal>allow-write-affinity</literal> is set for the GPFS data pool."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:139(literal)
msgid "block_group_factor"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:141(para)
msgid "Specifies how many blocks are laid out sequentially in the volume file to behave as a single large block. Note: This option only works if <literal>allow-write-affinity</literal> is set for the GPFS data pool."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:149(literal)
msgid "write_affinity_failure_group"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:151(para)
msgid "Specifies the range of nodes (in GPFS shared nothing architecture) where replicas of blocks in the volume file are to be written. See <citetitle>GPFS: Administration and Programming Reference</citetitle> for more details on this option."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:162(title)
msgid "Example: Volume creation options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:163(para)
msgid "This example shows the creation of a 50GB volume with an <systemitem>ext4</systemitem> file system labeled <literal>newfs</literal> and direct IO enabled:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:169(title)
msgid "Operational notes for GPFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:171(title) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:644(title)
msgid "Snapshots and clones"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:172(para)
msgid "Volume snapshots are implemented using the GPFS file clone feature. Whenever a new snapshot is created, the snapshot file is efficiently created as a read-only clone parent of the volume, and the volume file uses copy-on-write optimization strategy to minimize data movement."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml:178(para)
msgid "Similarly when a new volume is created from a snapshot or from an existing volume, the same approach is taken. The same approach is also used when a new volume is created from an Image Service image, if the source image is in raw format, and <literal>gpfs_images_share_mode</literal> is set to <literal>copy_on_write</literal>."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:62(None)
msgid "@@image: '../../../common/figures/xenapinfs/local_config.png'; md5=16a3864b0ec636518335246360438fd1"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:78(None)
msgid "@@image: '../../../common/figures/xenapinfs/remote_config.png'; md5=eab22f6aa5413c2043936872ea44e459"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:4(title)
msgid "XenAPINFS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:5(para)
msgid "XenAPINFS is a Block Storage (Cinder) driver that uses an NFS share through the XenAPI Storage Manager to store virtual disk images and expose those virtual disks as volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:8(para)
msgid "This driver does not access the NFS share directly. It accesses the share only through XenAPI Storage Manager. Consider this driver as a reference implementation for use of the XenAPI Storage Manager in OpenStack (present in XenServer and XCP)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:14(title) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:85(title) ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:193(title)
msgid "Requirements"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:17(para)
msgid "A XenServer/XCP installation that acts as Storage Controller. This hypervisor is known as the storage controller."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:22(para)
msgid "Use XenServer/XCP as your hypervisor for Compute nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:26(para)
msgid "An NFS share that is configured for XenServer/XCP. For specific requirements and export options, see the administration guide for your specific XenServer version. The NFS share must be accessible by all XenServers components within your cloud."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:34(para)
msgid "To create volumes from XenServer type images (vhd tgz files), XenServer Nova plug-ins are also required on the storage controller."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:40(para)
msgid "You can use a XenServer as a storage controller and compute node at the same time. This minimal configuration consists of a XenServer/XCP box and an NFS share."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:47(title)
msgid "Configuration patterns"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:50(para)
msgid "Local configuration (Recommended): The driver runs in a virtual machine on top of the storage controller. With this configuration, you can create volumes from <literal>qemu-img</literal>-supported formats."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:57(title)
msgid "Local configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:68(para)
msgid "Remote configuration: The driver is not a guest VM of the storage controller. With this configuration, you can only use XenServer vhd-type images to create volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:73(title)
msgid "Remote configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:86(title) ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:299(caption) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:368(caption)
msgid "Configuration options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:87(para)
msgid "Assuming the following setup:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:90(para)
msgid "XenServer box at <literal>10.2.2.1</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:94(para)
msgid "XenServer password is <literal>r00tme</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:98(para)
msgid "NFS server is <literal>nfs.example.com</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:102(para)
msgid "NFS export is at <literal>/volumes</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:106(para)
msgid "To use XenAPINFS as your cinder driver, set these configuration options in the <filename>cinder.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml:115(para)
msgid "The following table shows the configuration options that the XenAPINFS driver supports:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/sheepdog-driver.xml:6(title)
msgid "Sheepdog driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/sheepdog-driver.xml:7(para)
msgid "Sheepdog is an open-source distributed storage system that provides a virtual storage pool utilizing internal disk of commodity servers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/sheepdog-driver.xml:10(para)
msgid "Sheepdog scales to several hundred nodes, and has powerful virtual disk management features like snapshot, cloning, rollback, thin provisioning."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/sheepdog-driver.xml:13(para)
msgid "More information can be found on <link href=\"http://sheepdog.github.io/sheepdog/\">Sheepdog Project</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/sheepdog-driver.xml:15(para)
msgid "This driver enables use of Sheepdog through Qemu/KVM."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/sheepdog-driver.xml:16(para)
msgid "Set the following <literal>volume_driver</literal> in <filename>cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml:5(title)
msgid "SolidFire"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml:6(para)
msgid "The SolidFire Cluster is a high performance all SSD iSCSI storage device that provides massive scale out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set and modify during operation specific QoS levels on a volume for volume basis. The SolidFire cluster offers this along with de-duplication, compression, and an architecture that takes full advantage of SSDs."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml:14(para)
msgid "To configure the use of a SolidFire cluster with Block Storage, modify your <filename>cinder.conf</filename> file as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml:23(para)
msgid "The SolidFire driver creates a unique account prefixed with <literal>$cinder-volume-service-hostname-$tenant-id</literal> on the SolidFire cluster for each tenant that accesses the cluster through the Volume API. Unfortunately, this account formation results in issues for High Availability (HA) installations and installations where the <systemitem class=\"service\">cinder-volume</systemitem> service can move to a new node. HA installations can return an <errortext>Account Not Found</errortext> error because the call to the SolidFire cluster is not always going to be sent from the same node. In installations where the <systemitem class=\"service\">cinder-volume</systemitem> service moves to a new node, the same issue can occur when you perform operations on existing volumes, such as clone, extend, delete, and so on."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml:41(para)
msgid "Set the <option>sf_account_prefix</option> option to an empty string ('') in the <filename>cinder.conf</filename> file. This setting results in unique accounts being created on the SolidFire cluster, but the accounts are prefixed with the <systemitem>tenant-id</systemitem> or any unique identifier that you choose and are independent of the host where the <systemitem class=\"service\">cinder-volume</systemitem> service resides."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:19(None)
msgid "@@image: '../../../common/figures/ceph/ceph-architecture.png'; md5=f7e854c9dbfb64534c47c3583e774c81"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:4(title)
msgid "Ceph RADOS Block Device (RBD)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:5(para)
msgid "If you use KVM or QEMU as your hypervisor, you can configure the Compute service to use <link href=\"http://ceph.com/ceph-storage/block-storage/\"> Ceph RADOS block devices (RBD)</link> for volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:15(title)
msgid "Ceph architecture"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:9(para)
msgid "Ceph is a massively scalable, open source, distributed storage system. It is comprised of an object store, block store, and a POSIX-compliant distributed file system. The platform can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system. Due to its open-source nature, you can install and use this portable storage platform in public or private clouds. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:25(title)
msgid "RADOS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:26(para)
msgid "Ceph is based on <emphasis>RADOS: Reliable Autonomic Distributed Object Store</emphasis>. RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:32(para)
msgid "<emphasis>Object Storage Device (OSD) Daemon</emphasis>. The storage daemon for the RADOS service, which interacts with the OSD (physical or logical storage unit for your data)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:35(para)
msgid "You must run this daemon on each server in your cluster. For each OSD, you can have an associated hard drive disk. For performance purposes, pool your hard drive disk with raid arrays, logical volume management (LVM), or B-tree file system (<systemitem>Btrfs</systemitem>) pooling. By default, the following pools are created: data, metadata, and RBD."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:42(para)
msgid "<emphasis>Meta-Data Server (MDS)</emphasis>. Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:49(para)
msgid "<emphasis>Monitor (MON)</emphasis>. A lightweight daemon that handles all communications with external applications and clients. It also provides a consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph shared on a client, you point to the address of a MON server. It checks the state and the consistency of the data. In an ideal setup, you must run at least three <code>ceph-mon</code> daemons on separate servers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:58(para)
msgid "Ceph developers recommend that you use <systemitem>Btrfs</systemitem> as a file system for storage. XFS might be a better alternative for production environments;XFS is an excellent alternative to Btrfs. The ext4 file system is also compatible but does not exploit the power of Ceph."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:63(para)
msgid "If using <systemitem>Btrfs</systemitem>, ensure that you use the correct version (see <link href=\"http://ceph.com/docs/master/start/os-recommendations/.\">Ceph Dependencies</link>)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:66(para)
msgid "For more information about usable file systems, see <link href=\"http://ceph.com/ceph-storage/file-system/\">ceph.com/ceph-storage/file-system/</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:72(title)
msgid "Ways to store, use, and expose data"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:73(para)
msgid "To store and access your data, you can use the following storage systems:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:77(para)
msgid "<emphasis>RADOS</emphasis>. Use as an object, default storage mechanism."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:81(para)
msgid "<emphasis>RBD</emphasis>. Use as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object store data objects. It is compatible with the KVM RBD image."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:88(para)
msgid "<emphasis>CephFS</emphasis>. Use as a file, POSIX-compliant file system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:92(para)
msgid "Ceph exposes RADOS; you can access it through the following interfaces:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:95(para)
msgid "<emphasis>RADOS Gateway</emphasis>. OpenStack Object Storage and Amazon-S3 compatible RESTful interface (see <link href=\"http://ceph.com/wiki/RADOS_Gateway\">RADOS_Gateway</link>)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:101(para)
msgid "<emphasis>librados</emphasis>, and its related C/C++ bindings."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:104(para)
msgid "<emphasis>RBD and QEMU-RBD</emphasis>. Linux kernel and QEMU block devices that stripe data across multiple objects."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:111(title) ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:123(title)
msgid "Driver options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml:112(para)
msgid "The following table contains the configuration options supported by the Ceph RADOS Block Device driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-xiv-volume-driver.xml:5(title)
msgid "IBM XIV and DS8000 volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-xiv-volume-driver.xml:7(para)
msgid "The IBM Storage Driver for OpenStack is a Block Storage driver that supports IBM XIV and IBM DS8000 storage systems over Fiber channel and iSCSI."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-xiv-volume-driver.xml:12(para)
msgid "Set the following in your <filename>cinder.conf</filename>, and use the following options to configure it."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-xiv-volume-driver.xml:20(para)
msgid "To use the IBM Storage Driver for OpenStack you must download and install the package available at: <link href=\"http://www.ibm.com/support/fixcentral/swg/selectFixes?parent=Enterprise%2BStorage%2BServers&amp;product=ibm/Storage_Disk/XIV+Storage+System+%282810,+2812%29&amp;release=All&amp;platform=All&amp;function=all\">http://www.ibm.com/support/fixcentral/swg/selectFixes?parent=Enterprise%2BStorage%2BServers&amp;product=ibm/Storage_Disk/XIV+Storage+System+%282810,+2812%29&amp;release=All&amp;platform=All&amp;function=all</link>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-xiv-volume-driver.xml:27(para)
msgid "For full documentation refer to IBM's online documentation available at <link href=\"http://pic.dhe.ibm.com/infocenter/strhosts/ic/topic/com.ibm.help.strghosts.doc/nova-homepage.html\">http://pic.dhe.ibm.com/infocenter/strhosts/ic/topic/com.ibm.help.strghosts.doc/nova-homepage.html</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:9(title)
msgid "HDS HUS iSCSI driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:10(para)
msgid "This Block Storage volume driver provides iSCSI support for <link href=\"http://www.hds.com/products/storage-systems/hitachi-unified-storage-100-family.html\">HUS (Hitachi Unified Storage) </link> arrays such as, HUS-110, HUS-130, and HUS-150."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:16(para)
msgid "Use the HDS <placeholder-1/> command to communicate with an HUS array. You can download this utility package from the HDS support site (<link href=\"https://hdssupport.hds.com/\">https://hdssupport.hds.com/</link>)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:21(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:21(para)
msgid "Platform: Ubuntu 12.04 LTS or newer."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:45(para) ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:63(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:87(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:61(para) ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:100(para) ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:77(para) ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:39(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:58(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:54(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:49(para)
msgid "Extend a volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:48(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:90(para) ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:80(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:67(para) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:36(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:52(para)
msgid "Get volume statistics."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:53(title) ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:14(title) ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:85(title) ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:134(title) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:35(title) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:57(title) ./doc/config-reference/compute/section_hypervisor_vmware.xml:250(td) ./doc/config-reference/compute/section_hypervisor_vmware.xml:316(td)
msgid "Configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:58(para)
msgid "Do not confuse differentiated services with the OpenStack Block Storage volume services."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:54(para)
msgid "The HDS driver supports the concept of differentiated services, where a volume type can be associated with the fine-tuned performance characteristics of an HDP the dynamic pool where volumes are created. For instance, an HDP can consist of fast SSDs to provide speed. HDP can provide a certain reliability based on things like its RAID level characteristics. HDS driver maps volume type to the volume_type option in its configuration file."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:66(para)
msgid "Configuration is read from an XML-format file. Examples are shown for single and multi back-end cases."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:71(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:70(para)
msgid "Configuration is read from an XML file. This example shows the configuration for single back-end and for multi-back-end cases."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:80(para)
msgid "It is okay to manage multiple HUS arrays by using multiple OpenStack Block Storage instances (or servers)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:76(para)
msgid "It is not recommended to manage an HUS array simultaneously from multiple OpenStack Block Storage instances or servers. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:89(title)
msgid "HUS setup"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:90(para)
msgid "Before using iSCSI services, use the HUS UI to create an iSCSI domain for each EVS providing iSCSI services."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:94(title) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:92(title)
msgid "Single back-end"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:95(para)
msgid "In a single back-end deployment, only one OpenStack Block Storage instance runs on the OpenStack Block Storage server and controls one HUS array: this deployment requires these configuration files:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:108(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:108(para)
msgid "The configuration file location may differ."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:101(para)
msgid "Set the <option>hds_cinder_config_file</option> option in the <filename>/etc/cinder/cinder.conf</filename> file to use the HDS volume driver. This option points to a configuration file.<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:115(para)
msgid "Configure <option>hds_cinder_config_file</option> at the location specified previously. For example, <filename>/opt/hds/hus/cinder_hds_conf.xml</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:146(title) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:163(title)
msgid "Multi back-end"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:147(para)
msgid "In a multi back-end deployment, more than one OpenStack Block Storage instance runs on the same server. In this example, two HUS arrays are used, possibly providing different storage performance:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:153(para)
msgid "Configure <filename>/etc/cinder/cinder.conf</filename>: the <literal>hus1</literal><option>hus2</option> configuration blocks are created. Set the <option>hds_cinder_config_file</option> option to point to a unique configuration file for each block. Set the <option>volume_driver</option> option for each back-end to <literal>cinder.volume.drivers.hds.hds.HUSDriver</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:177(para)
msgid "Configure <filename>/opt/hds/hus/cinder_hus1_conf.xml</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:203(para)
msgid "Configure the <filename>/opt/hds/hus/cinder_hus2_conf.xml</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:232(title) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:292(title)
msgid "Type extra specs: <option>volume_backend</option> and volume type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:234(para)
msgid "If you use volume types, you must configure them in the configuration file and set the <option>volume_backend_name</option> option to the appropriate back-end. In the previous multi back-end example, the <literal>platinum</literal> volume type is served by hus-2, and the <literal>regular</literal> volume type is served by hus-1."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:245(title)
msgid "Non differentiated deployment of HUS arrays"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:246(para)
msgid "You can deploy multiple OpenStack Block Storage instances that each control a separate HUS array. Each instance has no volume type associated with it. The OpenStack Block Storage filtering algorithm selects the HUS array with the largest available free space. In each configuration file, you must define the <literal>default</literal><option>volume_type</option> in the service labels."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:257(title)
msgid "HDS iSCSI volume driver configuration options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:264(para)
msgid "Each of these four labels has no relative precedence or weight."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:258(para)
msgid "These details apply to the XML format configuration file that is read by HDS volume driver. These differentiated service labels are predefined: <literal>svc_0</literal>, <literal>svc_1</literal>, <literal>svc_2</literal>, and <literal>svc_3</literal><placeholder-1/>. Each respective service label associates with these parameters and tags:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:270(para)
msgid "<option>volume-types</option>: A create_volume call with a certain volume type shall be matched up with this tag. <literal>default</literal> is special in that any service associated with this type is used to create volume when no other labels match. Other labels are case sensitive and should exactly match. If no configured volume_types match the incoming requested type, an error occurs in volume creation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:281(para)
msgid "<option>HDP</option>, the pool ID associated with the service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:285(para)
msgid "An iSCSI port dedicated to the service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:294(para)
msgid "The get_volume_stats() always provides the available capacity based on the combined sum of all the HDPs that are used in these services labels."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:288(para)
msgid "Typically a OpenStack Block Storage volume instance has only one such service label. For example, any <literal>svc_0</literal>, <literal>svc_1</literal>, <literal>svc_2</literal>, or <literal>svc_3</literal> can be associated with it. But any mix of these service labels can be used in the same instance <placeholder-1/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:306(td) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:375(td) ./doc/config-reference/compute/section_compute-scheduler.xml:770(th) ./doc/config-reference/compute/section_compute-scheduler.xml:874(th)
msgid "Option"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:314(option) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:383(option)
msgid "mgmt_ip0"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:318(para)
msgid "Management Port 0 IP address"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:322(option)
msgid "mgmt_ip1"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:326(para)
msgid "Management Port 1 IP address"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:330(option)
msgid "hus_cmd"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:335(para)
msgid "<option>hus_cmd</option> is the command used to communicate with the HUS array. If it is not set, the default value is <placeholder-1/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:342(option) ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:70(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:78(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:142(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:350(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:407(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:486(replaceable) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:314(replaceable) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:412(option)
msgid "username"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:347(para)
msgid "Username is required only if secure mode is used"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:352(option) ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:71(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:79(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:143(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:351(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:408(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:487(replaceable) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:316(replaceable) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:421(option) ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:59(replaceable)
msgid "password"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:357(para)
msgid "Password is required only if secure mode is used"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:363(option) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:431(option)
msgid "svc_0, svc_1, svc_2, svc_3"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:367(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:435(para)
msgid "(at least one label has to be defined)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:370(para)
msgid "Service labels: these four predefined names help four different sets of configuration options -- each can specify iSCSI port address, HDP and a unique volume type."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:379(option)
msgid "snapshot"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:384(para)
msgid "A service label which helps specify configuration for snapshots, such as, HDP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:391(option) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:328(term) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:446(option)
msgid "volume_type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:396(para)
msgid "<option>volume_type</option> tag is used to match volume type. <literal>Default</literal> meets any type of <option>volume_type</option>, or if it is not specified. Any other volume_type is selected if exactly matched during <literal>create_volume</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:408(option) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:352(term) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:462(option)
msgid "iscsi_ip"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:413(para)
msgid "iSCSI port IP address where volume attaches for this volume type."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:419(option) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:339(term) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:473(option)
msgid "hdp"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:424(para)
msgid "HDP, the pool number where volume, or snapshot should be created."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:430(option)
msgid "lun_start"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:435(para)
msgid "LUN allocation starts at this number."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:441(option)
msgid "lun_end"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:444(para)
msgid "4096"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hus-driver.xml:446(para)
msgid "LUN allocation is up to, but not including, this number."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:4(title)
msgid "HP 3PAR Fibre Channel and iSCSI drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:5(para)
msgid "The <filename>HP3PARFCDriver</filename> and <filename>HP3PARISCSIDriver</filename> drivers, which are based on the Block Storage service (Cinder) plug-in architecture, run volume operations by communicating with the HP 3PAR storage system over HTTP, HTTPS, and SSH connections. The HTTP and HTTPS communications use <package>hp3parclient</package>, which is part of the Python standard library."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:13(para)
msgid "For information about how to manage HP 3PAR storage systems, see the HP 3PAR user documentation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:17(para)
msgid "To use the HP 3PAR drivers, install the following software and components on the HP 3PAR storage system:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:22(para)
msgid "HP 3PAR Operating System software version 3.1.3 MU1 or higher"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:26(para)
msgid "HP 3PAR Web Services API Server must be enabled and running"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:30(para)
msgid "One Common Provisioning Group (CPG)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:33(para)
msgid "Additionally, you must install the <package>hp3parclient</package> version 3.1.1 or newer from the Python standard library on the system with the enabled Block Storage service volume drivers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:66(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:93(para)
msgid "Migrate a volume with back-end assistance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:69(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:96(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:64(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:57(para)
msgid "Retype a volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:72(para)
msgid "Manage and unmanage a volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:75(para)
msgid "Volume type support for both HP 3PAR drivers includes the ability to set the following capabilities in the OpenStack Block Storage API <filename>cinder.api.contrib.types_extra_specs</filename> volume type extra specs extension module:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:82(literal)
msgid "hp3par:cpg"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:85(literal)
msgid "hp3par:snap_cpg"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:88(literal)
msgid "hp3par:provisioning"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:91(literal)
msgid "hp3par:persona"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:94(literal)
msgid "hp3par:vvs"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:97(para)
msgid "To work with the default filter scheduler, the key values are case sensitive and scoped with <literal>hp3par:</literal>. For information about how to set the key-value pairs and associate them with a volume type, run the following command: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:106(para)
msgid "Volumes that are cloned only support extra specs keys cpg, snap_cpg, provisioning and vvs. The others are ignored. In addition the comments section of the cloned volume in the HP 3PAR StoreServ storage array is not populated."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:112(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:157(para)
msgid "If volume types are not used or a particular key is not set for a volume type, the following defaults are used:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:117(para)
msgid "<literal>hp3par:cpg</literal> - Defaults to the <literal>hp3par_cpg</literal> setting in the <filename>cinder.conf</filename> file."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:122(para)
msgid "<literal>hp3par:snap_cpg</literal> - Defaults to the <literal>hp3par_snap</literal> setting in the <filename>cinder.conf</filename> file. If <literal>hp3par_snap</literal> is not set, it defaults to the <literal>hp3par_cpg</literal> setting."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:130(para)
msgid "<literal>hp3par:provisioning</literal> - Defaults to thin provisioning, the valid values are <literal>thin</literal> and <literal>full</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:136(para)
msgid "<literal>hp3par:persona</literal> - Defaults to the <literal>2 - Generic-ALUA</literal> persona. The valid values are, <literal>1 - Generic</literal>, <literal>2 - Generic-ALUA</literal>, <literal>6 - Generic-legacy</literal>, <literal>7 - HPUX-legacy</literal>, <literal>8 - AIX-legacy</literal>, <literal>9 - EGENERA</literal>, <literal>10 - ONTAP-legacy</literal>, <literal>11 - VMware</literal>, <literal>12 - OpenVMS</literal>, <literal>13 - HPUX</literal>, and <literal>15 - WindowsServer</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:150(para)
msgid "QoS support for both HP 3PAR drivers includes the ability to set the following capabilities in the OpenStack Block Storage API <filename>cinder.api.contrib.qos_specs_manage</filename> qos specs extension module:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:157(literal)
msgid "minBWS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:160(literal)
msgid "maxBWS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:163(literal)
msgid "minIOPS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:166(literal)
msgid "maxIOPS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:169(literal)
msgid "latency"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:172(literal)
msgid "priority"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:175(para)
msgid "The qos keys above no longer require to be scoped but must be created and associated to a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands: <placeholder-1/><placeholder-2/><placeholder-3/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:185(para)
msgid "The following keys require that the HP 3PAR StoreServ storage array has a Priority Optimization license installed."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:190(para)
msgid "<literal>hp3par:vvs</literal> - The virtual volume set name that has been predefined by the Administrator with Quality of Service (QoS) rules associated to it. If you specify extra_specs <literal>hp3par:vvs</literal>, the qos_specs <literal>minIOPS</literal>, <literal>maxIOPS</literal>, <literal>minBWS</literal>, and <literal>maxBWS</literal> settings are ignored."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:201(para)
msgid "<literal>minBWS</literal> - The QoS I/O issue bandwidth minimum goal in MBs. If not set, the I/O issue bandwidth rate has no minimum goal."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:206(para)
msgid "<literal>maxBWS</literal> - The QoS I/O issue bandwidth rate limit in MBs. If not set, the I/O issue bandwidth rate has no limit."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:211(para)
msgid "<literal>minIOPS</literal> - The QoS I/O issue count minimum goal. If not set, the I/O issue count has no minimum goal."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:216(para)
msgid "<literal>maxIOPS</literal> - The QoS I/O issue count rate limit. If not set, the I/O issue count rate has no limit."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:221(para)
msgid "<literal>latency</literal> - The latency goal in milliseconds."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:225(para)
msgid "<literal>priority</literal> - The priority of the QoS rule over other rules. If not set, the priority is normal, valid values are low, normal and high."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:232(para)
msgid "Since the Icehouse release, minIOPS and maxIOPS must be used together to set I/O limits. Similarly, minBWS and maxBWS must be used together. If only one is set the other will be set to the same value."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:239(title)
msgid "Enable the HP 3PAR Fibre Channel and iSCSI drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:241(para)
msgid "The <filename>HP3PARFCDriver</filename> and <filename>HP3PARISCSIDriver</filename> are installed with the OpenStack software."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:246(para)
msgid "Install the <filename>hp3parclient</filename> Python package on the OpenStack Block Storage system. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:253(para)
msgid "Verify that the HP 3PAR Web Services API server is enabled and running on the HP 3PAR storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:258(para)
msgid "Log onto the HP 3PAR storage system with administrator access."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:261(replaceable)
msgid "&lt;HP 3PAR IP Address&gt;"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:264(para)
msgid "View the current state of the Web Services API Server."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:271(para)
msgid "If the Web Services API Server is disabled, start it."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:280(para)
msgid "If the HTTP or HTTPS state is disabled, enable one of them. <placeholder-1/> or <placeholder-2/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:287(para)
msgid "To stop the Web Services API Server, use the stopwsapi command. For other options run the <placeholder-1/> command."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:294(para)
msgid "If you are not using an existing CPG, create a CPG on the HP 3PAR storage system to be used as the default location for creating volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:302(emphasis) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:236(emphasis) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:356(emphasis)
msgid "## REQUIRED SETTINGS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:337(emphasis) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:253(emphasis) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:376(emphasis)
msgid "## OPTIONAL SETTINGS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:353(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:261(para)
msgid "You can enable only one driver on each cinder instance unless you enable multiple back-end support. See the Cinder multiple back-end support instructions to enable this feature."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:360(para)
msgid "You can configure one or more iSCSI addresses by using the <option>hp3par_iscsi_ips</option> option. When you configure multiple addresses, the driver selects the iSCSI port with the fewest active volumes at attach time. The IP address might include an IP port by using a colon (<literal>:</literal>) to separate the address from port. If you do not define an IP port, the default port 3260 is used. Separate IP addresses with a comma (<literal>,</literal>). The <option>iscsi_ip_address</option>/<option>iscsi_port</option> options might be used as an alternative to <option>hp3par_iscsi_ips</option> for single port iSCSI configuration."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:379(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:290(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:389(para)
msgid "Save the changes to the <filename>cinder.conf</filename> file and restart the <systemitem class=\"service\">cinder-volume</systemitem> service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml:385(para)
msgid "The HP 3PAR Fibre Channel and iSCSI drivers are now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:8(title)
msgid "Windows iSCSI volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:10(para)
msgid "Windows Server 2012 and Windows Storage Server 2012 offer an integrated iSCSI Target service that can be used with OpenStack Block Storage in your stack. Being entirely a software solution, consider it in particular for mid-sized networks where the costs of a SAN might be excessive."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:15(para)
msgid "The Windows <systemitem class=\"service\">cinder-volume</systemitem> driver works with OpenStack Compute on any hypervisor. It includes snapshotting support and the “boot from volume” feature."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:19(para)
msgid "This driver creates volumes backed by fixed-type VHD images on Windows Server 2012 and dynamic-type VHDX on Windows Server 2012 R2, stored locally on a user-specified path. The system uses those images as iSCSI disks and exports them through iSCSI targets. Each volume has its own iSCSI target."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:24(para)
msgid "This driver has been tested with Windows Server 2012 and Windows Server R2 using the Server and Storage Server distributions."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:27(para)
msgid "Install the <systemitem class=\"service\">cinder-volume</systemitem> service as well as the required Python components directly onto the Windows node."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:31(para)
msgid "You may install and configure <systemitem class=\"service\">cinder-volume </systemitem> and its dependencies manually using the following guide or you may use the <literal>Cinder Volume Installer</literal>, presented below."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:36(title)
msgid "Installing using the OpenStack cinder volume installer"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:38(para)
msgid "In case you want to avoid all the manual setup, you can use Cloudbase Solutions installer. You can find it at <link href=\"https://www.cloudbase.it/downloads/CinderVolumeSetup_Beta.msi\"> https://www.cloudbase.it/downloads/CinderVolumeSetup_Beta.msi</link>. It installs an independent Python environment, in order to avoid conflicts with existing applications, dynamically generates a <filename>cinder.conf </filename> file based on the parameters provided by you."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:46(para)
msgid "<systemitem class=\"service\">cinder-volume</systemitem> will be configured to run as a Windows Service, which can be restarted using:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:52(para)
msgid "The installer can also be used in unattended mode. More details about how to use the installer and its features can be found at <link href=\"https://www.cloudbase.it\">https://www.cloudbase.it</link>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:58(title)
msgid "Windows Server configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:60(para)
msgid "The required service in order to run <systemitem class=\"service\"> cinder-volume</systemitem> on Windows is <literal>wintarget</literal>. This will require the iSCSI Target Server Windows feature to be installed. You can install it by running the following command:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:69(para)
msgid "The Windows Server installation requires at least 16 GB of disk space. The volumes hosted by this node need the extra space."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:73(para)
msgid "For <systemitem class=\"service\">cinder-volume</systemitem> to work properly, you must configure NTP as explained in <xref linkend=\"configure-ntp-windows\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:77(para)
msgid "Next, install the requirements as described in <xref linkend=\"windows-requirements\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:82(title)
msgid "Getting the code"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:84(para)
msgid "Git can be used to download the necessary source code. The installer to run Git on Windows can be downloaded here:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:88(link) ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:320(link)
msgid "https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:91(para)
msgid "Once installed, run the following to clone the OpenStack Block Storage code."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:98(title)
msgid "Configure cinder-volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:100(para)
msgid "The <filename>cinder.conf</filename> file may be placed in <filename>C:\\etc\\cinder</filename>. Below is a config sample for using the Windows iSCSI Driver:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:108(replaceable) ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:109(replaceable) ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:111(replaceable) ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:161(replaceable) ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:360(replaceable) ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:375(replaceable) ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:384(replaceable) ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:400(replaceable)
msgid "IP_ADDRESS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:119(para)
msgid "The following table contains a reference to the only driver specific option that will be used by the Block Storage Windows driver:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:126(title)
msgid "Running cinder-volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml:128(para)
msgid "After configuring <systemitem class=\"service\">cinder-volume</systemitem> using the <filename>cinder.conf</filename> file, you may use the following commands to install and run the service (note that you must replace the variables with the proper paths):"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:4(title)
msgid "Nexenta drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:5(para)
msgid "NexentaStor Appliance is NAS/SAN software platform designed for building reliable and fast network storage arrays. The Nexenta Storage Appliance uses ZFS as a disk management system. NexentaStor can serve as a storage node for the OpenStack and its virtual servers through iSCSI and NFS protocols."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:9(para)
msgid "With the NFS option, every Compute volume is represented by a directory designated to be its own file system in the ZFS file system. These file systems are exported using NFS."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:12(para)
msgid "With either option some minimal setup is required to tell OpenStack which NexentaStor servers are being used, whether they are supporting iSCSI and/or NFS and how to access each of the servers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:16(para)
msgid "Typically the only operation required on the NexentaStor servers is to create the containing directory for the iSCSI or NFS exports. For NFS this containing directory must be explicitly exported via NFS. There is no software that must be installed on the NexentaStor servers; they are controlled using existing management plane interfaces."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:24(title)
msgid "Nexenta iSCSI driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:25(para)
msgid "The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store Compute volumes. Every Compute volume is represented by a single zvol in a predefined Nexenta namespace. For every new volume the driver creates a iSCSI target and iSCSI target group that are used to access it from compute hosts."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:29(para)
msgid "The Nexenta iSCSI volume driver should work with all versions of NexentaStor. The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A pool and an enclosing namespace must be created for all iSCSI volumes to be accessed through the volume driver. This should be done as specified in the release specific NexentaStor documentation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:36(para)
msgid "The NexentaStor Appliance iSCSI driver is selected using the normal procedures for one or multiple back-end volume drivers. You must configure these items for each NexentaStor appliance that the iSCSI volume driver controls:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:42(title)
msgid "Enable the Nexenta iSCSI driver and related options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:44(para)
msgid "This table contains the options supported by the Nexenta iSCSI driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:48(para)
msgid "To use Compute with the Nexenta iSCSI driver, first set the <code>volume_driver</code>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:52(para)
msgid "Then, set the <code>nexenta_host</code> parameter and other parameters from the table, if needed."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:59(title)
msgid "Nexenta NFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:60(para)
msgid "The Nexenta NFS driver allows you to use NexentaStor appliance to store Compute volumes via NFS. Every Compute volume is represented by a single NFS file within a shared directory."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:64(para)
msgid "While the NFS protocols standardize file access for users, they do not standardize administrative actions such as taking snapshots or replicating file systems. The OpenStack Volume Drivers bring a common interface to these operations. The Nexenta NFS driver implements these standard actions using the ZFS management plane that already is deployed on NexentaStor appliances."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:71(para)
msgid "The Nexenta NFS volume driver should work with all versions of NexentaStor. The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A single-parent file system must be created for all virtual disk directories supported for OpenStack. This directory must be created and exported on each NexentaStor appliance. This should be done as specified in the release specific NexentaStor documentation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:78(title)
msgid "Enable the Nexenta NFS driver and related options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:80(para)
msgid "To use Compute with the Nexenta NFS driver, first set the <code>volume_driver</code>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:85(para)
msgid "The following table contains the options supported by the Nexenta NFS driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:89(para)
msgid "Add your list of Nexenta NFS servers to the file you specified with the <code>nexenta_shares_config</code> option. For example, if the value of this option was set to <filename>/etc/cinder/nfs_shares</filename>, then:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:98(para)
msgid "Comments are allowed in this file. They begin with a <code>#</code>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml:100(para)
msgid "Each line in this file represents a NFS share. The first part of the line is the NFS share URL, the second is the connection URL to the NexentaStor Appliance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:6(title)
msgid "Pure Storage volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:8(para)
msgid "The Pure Storage FlashArray volume driver for OpenStack Block Storage interacts with configured Pure Storage arrays and supports various operations."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:10(para)
msgid "This driver can be configured in OpenStack Block Storage to work with the iSCSI storage protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:12(para)
msgid "This driver is compatible with Purity FlashArrays that support the REST API (Purity 3.4.0 and newer) and that are capable of iSCSI connectivity. This release supports installation with OpenStack clusters running the Juno version that use the KVM or QEMU hypervisors together with OpenStack Compute service's libvirt driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:17(title)
msgid "Limitations and known issues"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:18(para)
msgid "If you do not set up the nodes hosting instances to use multipathing, all iSCSI connectivity will use a single physical 10-gigabit Ethernet port on the array. In addition to significantly limiting the available bandwidth, this means you do not have the high-availability and non-disruptive upgrade benefits provided by FlashArray."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:22(para)
msgid "Workaround: You must set up multipathing on your hosts."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:23(para)
msgid "In the default configuration, OpenStack Block Storage does not provision volumes on a backend whose available raw space is less than the logical size of the new volume. Due to Purity's data reduction technology, such a volume could actually fit in the backend, and thus OpenStack Block Storage default configuration does not take advantage of all available space."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:27(para)
msgid "Workaround: Turn off the CapacityFilter."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:33(para)
msgid "Create, delete, attach, detach, clone and extend volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:36(para)
msgid "Create a volume from snapshot."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:39(para)
msgid "Create and delete volume snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:44(title)
msgid "Configure OpenStack and Purity"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:45(para)
msgid "You need to configure both your Purity array and your OpenStack cluster."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:47(para)
msgid "These instructions assume that the <systemitem class=\"service\">cinder-api</systemitem> and <systemitem class=\"service\">cinder-scheduler</systemitem> services are installed and configured in your OpenStack cluster."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:53(title)
msgid "Configure the OpenStack Block Storage service"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:54(para)
msgid "In these steps, you will edit the <filename>cinder.conf</filename> file to configure OpenStack Block Storage service to enable multipathing and to use the Pure Storage FlashArray as back-end storage."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:58(title)
msgid "Retrieve an API token from Purity"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:59(para)
msgid "The OpenStack Block Storage service configuration requires an API token from Purity. Actions performed by the volume driver use this token for authorization. Also, Purity logs the volume driver's actions as being performed by the user who owns this API token."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:62(para)
msgid "If you created a Purity user account that is dedicated to managing your OpenStack Block Storage volumes, copy the API token from that user account."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:64(para)
msgid "Use the appropriate create or list command below to display and copy the Purity API token:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:68(para)
msgid "To create a new API token:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:69(replaceable) ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:79(replaceable)
msgid "USER"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:70(para) ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:80(para)
msgid "The following is an example output:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:78(para)
msgid "To list an existing API token:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:90(para)
msgid "Copy the API token retrieved (<literal>902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9</literal> from the examples above) to use in the next step."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:94(title)
msgid "Edit the OpenStack Block Storage service configuration file"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:95(para)
msgid "The following sample <filename>/etc/cinder/cinder.conf</filename> configuration lists the relevant settings for a typical Block Storage service using a single Pure Storage array:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:108(replaceable) ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:114(term)
msgid "IP_PURE_MGMT"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:109(replaceable) ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:121(term)
msgid "PURE_API_TOKEN"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:111(para) ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:145(para)
msgid "Replace the following variables accordingly:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:116(para)
msgid "The IP address of the Pure Storage array's management interface or a domain name that resolves to that IP address."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:123(para)
msgid "The Purity Authorization token that the volume driver uses to perform volume management on the Pure Storage array."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:132(title)
msgid "Create Purity host objects"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:133(para)
msgid "Before using the volume driver, follow these steps to create a host in Purity for each OpenStack iSCSI initiator IQN that will connect to the FlashArray."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:135(para)
msgid "For every node that the driver runs on and every compute node that will connect to the FlashArray:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:139(para)
msgid "check the file <filename>/etc/iscsi/initiatorname.iscsi</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:140(para)
msgid "For each IQN in that file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:143(para)
msgid "copy the IQN string and run the following command to create a Purity host for an IQN:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:144(replaceable) ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:148(term)
msgid "IQN"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:144(replaceable) ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:154(term)
msgid "HOST"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:150(para)
msgid "The IQN retrieved from the <filename>/etc/iscsi/initiatorname.iscsi</filename> file"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:156(para)
msgid "An unique friendly name for this entry."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/pure-storage-driver.xml:161(para)
msgid "Do not specify multiple IQNs with the <option>--iqnlist</option> option. Each FlashArray host must be configured to a single OpenStack IQN."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:5(title)
msgid "Oracle ZFSSA iSCSI Driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:6(para)
msgid "Oracle ZFS Storage Appliances (ZFSSAs) provide advanced software to protect data, speed tuning and troubleshooting, and deliver high performance and high availability. Through the Oracle ZFSSA iSCSI Driver, OpenStack Block Storage can use an Oracle ZFSSA as a block storage resource. The driver enables you to create iSCSI volumes that an OpenStack Block Storage server can allocate to any virtual machine running on a compute host. The Oracle ZFSSA iSCSI Driver, version <literal>1.0.0</literal>, supports ZFSSA software release <literal>2013.1.2.0</literal> and later."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:17(para)
msgid "Enable RESTful service on the ZFSSA Storage Appliance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:20(para)
msgid "Create a new user on the appliance with the following authorizations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:23(code)
msgid "scope=stmf - allow_configure=true"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:26(code)
msgid "scope=nas - allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:29(para)
msgid "You can create a role with authorizations as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:41(para)
msgid "You can create a user with a specific role as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:49(para)
msgid "You can also run this <link href=\"https://java.net/projects/solaris-userland/sources/gate/content/components/openstack/cinder/files/zfssa/cinder.akwf?rev=2047\">workflow</link> to automate the above tasks."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:55(para)
msgid "Ensure that the ZFSSA iSCSI service is online. If the ZFSSA iSCSI service is not online, enable the service by using the BUI, CLI or REST API in the appliance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:67(para)
msgid "Define the following required properties in the <filename>cinder.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:69(replaceable)
msgid "myhost"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:72(replaceable)
msgid "mypool"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:73(replaceable)
msgid "myproject"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:74(replaceable) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:449(literal)
msgid "default"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:75(replaceable)
msgid "w.x.y.z:3260"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:76(replaceable)
msgid "e1000g0"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:77(para)
msgid "Optionally, you can define additional properties."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:78(para)
msgid "Target interfaces can be seen as follows in the CLI:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:86(para)
msgid "Do not use management interfaces for <code>zfssa_target_interfaces</code>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:96(para)
msgid "Create and delete volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:99(para)
msgid "Extend volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:102(para)
msgid "Create and delete snapshots"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:105(para)
msgid "Create volume from snapshot"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:108(para)
msgid "Delete volume snapshots"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:111(para)
msgid "Attach and detach volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:114(para)
msgid "Get volume stats"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:117(para)
msgid "Clone volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zfssa-volume-driver.xml:124(para)
msgid "The Oracle ZFSSA iSCSI Driver supports these options:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:4(title)
msgid "HP LeftHand/StoreVirtual driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:5(para)
msgid "The <filename>HPLeftHandISCSIDriver</filename> is based on the Block Storage service (Cinder) plug-in architecture. Volume operations are run by communicating with the HP LeftHand/StoreVirtual system over HTTPS, or SSH connections. HTTPS communications use the <package>hplefthandclient</package>, which is part of the Python standard library."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:15(para)
msgid "The <filename>HPLeftHandISCSIDriver</filename> can be configured to run in one of two possible modes, legacy mode which uses SSH/CLIQ to communicate with the HP LeftHand/StoreVirtual array, or standard mode which uses a new REST client to communicate with the array. No new functionality has been, or will be, supported in legacy mode. For performance improvements and new functionality, the driver must be configured for standard mode, the <package>hplefthandclient</package> must be downloaded, and HP LeftHand/StoreVirtual Operating System software version 11.5 or higher is required on the array. To configure the driver in standard mode, see <xref linkend=\"hp-lefthand-rest-driver\"/>. To configure the driver in legacy mode, see <xref linkend=\"hp-lefthand-clix-driver\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:33(para)
msgid "For information about how to manage HP LeftHand/StoreVirtual storage systems, see the HP LeftHand/StoreVirtual user documentation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:37(title)
msgid "HP LeftHand/StoreVirtual REST driver standard mode"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:38(para)
msgid "This section describes how to configure the HP LeftHand/StoreVirtual Cinder driver in standard mode."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:43(para)
msgid "To use the HP LeftHand/StoreVirtual driver in standard mode, do the following:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:48(para)
msgid "Install LeftHand/StoreVirtual Operating System software version 11.5 or higher on the HP LeftHand/StoreVirtual storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:54(para)
msgid "Create a cluster group."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:57(para)
msgid "Install the <package>hplefthandclient</package> version 1.0.2 from the Python Package Index on the system with the enabled Block Storage service volume drivers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:99(para)
msgid "When you use back-end assisted volume migration, both source and destination clusters must be in the same HP LeftHand/StoreVirtual management group. The HP LeftHand/StoreVirtual array will use native LeftHand APIs to migrate the volume. The volume cannot be attached or have snapshots to migrate."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:106(para)
msgid "Volume type support for the driver includes the ability to set the following capabilities in the OpenStack Cinder API <filename>cinder.api.contrib.types_extra_specs</filename> volume type extra specs extension module."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:115(literal) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:160(term)
msgid "hplh:provisioning"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:120(literal) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:143(term) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:166(term)
msgid "hplh:ao"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:125(literal) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:147(term) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:172(term)
msgid "hplh:data_pl"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:129(para)
msgid "To work with the default filter scheduler, the key-value pairs are case-sensitive and scoped with <literal>'hplh:'</literal>. For information about how to set the key-value pairs and associate them with a volume type, run the following command:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:140(para)
msgid "The following keys require the HP LeftHand/StoreVirtual storage array be configured for"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:144(para)
msgid "The HP LeftHand/StoreVirtual storage array must be configured for Adaptive Optimization."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:148(para)
msgid "The HP LeftHand/StoreVirtual storage array must be able to support the Data Protection level specified by the extra spec."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:161(para)
msgid "Defaults to <parameter>thin</parameter> provisioning, the valid values are, <parameter>thin</parameter> and <parameter>full</parameter>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:167(para)
msgid "Defaults to <parameter>true</parameter>, the valid values are, <parameter>true</parameter> and <parameter>false</parameter>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:173(para)
msgid "Defaults to <parameter>r-0</parameter>, Network RAID-0 (None), the valid values are,"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:176(para)
msgid "<parameter>r-0</parameter>, Network RAID-0 (None)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:180(para)
msgid "<parameter>r-5</parameter>, Network RAID-5 (Single Parity)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:184(para)
msgid "<parameter>r-10-2</parameter>, Network RAID-10 (2-Way Mirror)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:188(para)
msgid "<parameter>r-10-3</parameter>, Network RAID-10 (3-Way Mirror)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:192(para)
msgid "<parameter>r-10-4</parameter>, Network RAID-10 (4-Way Mirror)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:196(para)
msgid "<parameter>r-6</parameter>, Network RAID-6 (Dual Parity),"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:208(title)
msgid "Enable the HP LeftHand/StoreVirtual iSCSI driver in standard mode"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:210(para) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:338(para)
msgid "The <filename>HPLeftHandISCSIDriver</filename> is installed with the OpenStack software."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:217(para)
msgid "Install the <filename>hplefthandclient</filename> Python package on the OpenStack Block Storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:225(para)
msgid "If you are not using an existing cluster, create a cluster on the HP LeftHand storage system to be used as the cluster for creating volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:231(para)
msgid "Make the following changes in the <filename>/etc/cinder/cinder.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:239(replaceable)
msgid "https://10.10.0.141:8081/lhos"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:242(replaceable) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:362(replaceable)
msgid "lhuser"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:245(replaceable) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:365(replaceable)
msgid "lhpass"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:248(replaceable) ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:371(replaceable)
msgid "ClusterLefthand"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:274(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:80(para)
msgid "CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This communication should be secured to ensure that CHAP secrets are not discovered."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:281(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:68(para)
msgid "CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is enabled, hosts will not be able to access the storage without the generated secrets."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:267(para)
msgid "If the <option>hplefthand_iscsi_chap_enabled</option> is set to <literal>true</literal>, the driver will associate randomly-generated CHAP secrets with all hosts on the HP LeftHand/StoreVirtual system. OpenStack Compute nodes use these secrets when creating iSCSI connections. <placeholder-1/><placeholder-2/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:298(para)
msgid "The HP LeftHand/StoreVirtual driver is now enabled in standard mode on your OpenStack system. If you experience problems, review the Block Storage service log files for errors."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:305(title)
msgid "HP LeftHand/StoreVirtual CLIQ driver legacy mode"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:306(para)
msgid "This section describes how to configure the HP LeftHand/StoreVirtual Cinder driver in legacy mode."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:309(para)
msgid "The <filename>HPLeftHandISCSIDriver</filename> allows you to use a HP Lefthand/StoreVirtual SAN that supports the CLIQ interface. Every supported volume operation translates into a CLIQ call in the back-end."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:336(title)
msgid "Enable the HP LeftHand/StoreVirtual iSCSI driver in legacy mode"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:345(para)
msgid "If you are not using an existing cluster, create a cluster on the HP Lefthand storage system to be used as the cluster for creating volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:359(replaceable)
msgid "10.10.0.141"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:398(para)
msgid "The HP LeftHand/StoreVirtual driver is now enabled in legacy mode on your OpenStack system. If you experience problems, review the Block Storage service log files for errors."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:402(para)
msgid "To configure the VSA"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:405(para)
msgid "Configure CHAP on each of the <systemitem class=\"service\">nova-compute</systemitem> nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml:412(para)
msgid "Add server associations on the VSA with the associated CHAPS and initiator information. The name should correspond to the <parameter>hostname</parameter> of the <parameter>nova-compute</parameter> node. For Xen, this is the hypervisor host name. To do this, use either CLIQ or the Centralized Management Console."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:7(title)
msgid "FUJITSU ETERNUS DX iSCSI and FC drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:8(para)
msgid "The driver runs volume operations by communicating with the back-end FUJITSU storage. It uses a CIM client in Python called PyWBEM to perform CIM operations over HTTP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:14(para)
msgid "Supported ETERNUS DX storage systems"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:17(para)
msgid "ETERNUS DX80 S2/DX90 S2"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:20(para)
msgid "ETERNUS DX410 S2/DX440 S2"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:23(para)
msgid "ETERNUS DX8100 S2/DX8700 S2"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:26(para)
msgid "ETERNUS DX100 S3/DX200 S3 (*1)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:29(para)
msgid "ETERNUS DX500 S3/DX600 S3 (*1)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:32(para)
msgid "ETERNUS DX200F (*1)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:35(para)
msgid "*1: Applying the firmware version V10L2x is required."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:40(para)
msgid "ETERNUS DX S3 with Thin Provisioning Pool support these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:64(para)
msgid "ETERNUS DX S3 with RAID Group support these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:85(para)
msgid "ETERNUS DX S2 with Thin Provisioning Pool support these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:103(para)
msgid "ETERNUS DX S2 with RAID Group support these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:120(title)
msgid "Set up the ETERNUS DX drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:124(title) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:106(title)
msgid "Install the <package>python-pywbem</package> package"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:125(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:108(para)
msgid "Install the <package>python-pywbem</package> package for your distribution, as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:129(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:112(para)
msgid "On Ubuntu:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:133(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:116(para)
msgid "On openSUSE:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:137(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:120(para)
msgid "On Fedora:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:143(title)
msgid "Adjust the SMI-S settings for the ETERNUS DX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:144(para)
msgid "The SMI-S of the ETERNUS DX must be enabled in advance. Enable the SMI-S of the ETERNUS DX by using ETERNUS Web GUI or ETERNUS CLI. For more details on this procedure, refer to the ETERNUS Web GUI manuals or the ETERNUS CLI manuals."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:151(para)
msgid "The SMI-S is enabled after the ETERNUS DX is rebooted."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:157(title)
msgid "Create an account"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:158(para)
msgid "To access the ETERNUS DX via SMI-S, a user account with <literal>Admin</literal>, <literal>Storage Admin</literal>, <literal>Maintainer</literal>, or <literal>Software</literal> as a user level is required. Use ETERNUS Web GUI or ETERNUS CLI to register the user account in the ETERNUS DX. For more details on the registration procedure, refer to the ETERNUS Web GUI manuals or the ETERNUS CLI manuals."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:169(title)
msgid "Create the storage pool"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:170(para)
msgid "Create a storage pool for creating volumes in advance. A RAID group or a Thin Provisioning Pool can be specified for the storage pool. Use ETERNUS Web GUI or ETERNUS CLI to create a RAID group or a Thin Provisioning Pool in the ETERNUS DX. For more details on the creation procedure, refer to the ETERNUS Web GUI manuals or the ETERNUS CLI manuals."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:181(title)
msgid "ETERNUS ports settings"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:182(para)
msgid "When the CA port is used, change the following host interface port parameters by using the relevant commands from the ETERNUS CLI."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:189(para)
msgid "Change the port mode to \"CA\". Use the <placeholder-1/> command to change the port mode."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:195(para)
msgid "Enable the host affinity setting. Use the <placeholder-1/> command to change the host affinity setting."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:201(para)
msgid "Example: For FC ports"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:203(para)
msgid "Example: For iSCSI ports"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:207(title)
msgid "Register licenses to the ETERNUS DX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:208(para)
msgid "An Advanced Copy Feature license is required to create snapshots or create volumes from snapshots. Purchase this license separately and register the license in the ETERNUS DX. Note that the Advanced Copy table size setting is also required. For details on registering and configuring the Advanced Copy function, refer to the ETERNUS Web GUI manuals."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:219(title)
msgid "Enable the Snap Data Pool"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:220(para)
msgid "SnapOPC is used for the SnapShot function of the ETERNUS OpenStack VolumeDriver. Since Snap Data Pool (SDP) is required for SnapOPC, create an SDPV and enable the SDP. For more details, refer to the ETERNUS Web GUI manuals or the ETERNUS CLI manuals."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:229(title)
msgid "SAN connection"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:230(para)
msgid "FC and iSCSI can be used as a host interface. The compute node of OpenStack and the ETERNUS DX must be connected to the SAN and be available for communication in advance. To use Fibre Channel switches, zoning settings for the Fibre Channel switches are also required. To use the iSCSI connections, logging in to the iSCSI target is required. The host affinity mode for all of the host interface ports of the ETERNUS DX must also be enabled in advance. For more details, refer to the ETERNUS Web GUI manuals or the ETERNUS CLI manuals."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:245(title)
msgid "Update <filename>cinder.conf</filename> configuration file"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:247(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:159(para)
msgid "Make the following changes in <filename>/etc/cinder/cinder.conf</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:249(para)
msgid "For iSCSI driver, add the following entries, where <literal>10.2.2.2</literal> is the IP address of the ETERNUS DX iSCSI target:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:256(para)
msgid "For FC driver, add the following entries:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:259(para)
msgid "Restart the <systemitem class=\"service\">cinder-volume</systemitem> service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:262(title)
msgid "Create <filename>cinder_fujitsu_eternus_dx.xml</filename> configuration file"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:264(para)
msgid "Create the <filename>/etc/cinder/cinder_fujitsu_eternus_dx.xml</filename> file. You do not need to restart the service for this change."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:266(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:207(para)
msgid "Add the following lines to the XML file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:278(para) ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:222(para)
msgid "Where:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:281(para)
msgid "<systemitem>StorageType</systemitem> is the thin pool from which the user wants to create the volume. Thin pools can be created using ETERNUS WebGUI. If the <literal>StorageType</literal> tag is not defined, you have to define volume types and set the pool name in extra specs."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:290(para)
msgid "<systemitem>EcomServerIp</systemitem> is the IP address of the ETERNUS DX MNT port."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:294(para)
msgid "<systemitem>EcomServerPort</systemitem> is the port number of the ETERNUS DX SMI-S port number."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:298(para)
msgid "<systemitem>EcomUserName</systemitem> and <systemitem>EcomPassword</systemitem> are credentials for the ETERNUS DX."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:303(para)
msgid "<systemitem>SnapPool</systemitem> is the thick pool(RAID Group) for create the snapshot. Thick pools can be created using ETERNUS WebGUI."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:309(para)
msgid "<systemitem>Timeout</systemitem> specifies the maximum number of seconds you want to wait for an operation to finish."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:317(title)
msgid "Volume type support"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:318(para)
msgid "Volume type support enables a single instance of <systemitem>cinder-volume</systemitem> to support multiple pools and thick/thin provisioning."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:321(para)
msgid "When the <literal>StorageType</literal> tag in <filename>cinder_fujitsu_eternus_dx.xml</filename> is used, the pool name is specified in the tag. Only thin provisioning is supported in this case."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:325(para)
msgid "When the <literal>StorageType</literal> tag is not used in <filename>cinder_fujitsu_eternus_dx.xml</filename>, the volume type needs to be used to define a pool name and a provisioning type. The pool name is the name of a pre-created pool. The provisioning type could be either <literal>thin</literal> or <literal>thick</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:331(para)
msgid "Here is an example of how to set up volume type. First create volume types. Then define extra specs for each volume type."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:336(para)
msgid "Create the volume types:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:341(para)
msgid "Setup the volume type extra specs:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:348(para)
msgid "In the above example, two volume types are created. They are <literal>High Performance</literal> and <literal> Standard Performance</literal>. For <literal>High Performance </literal>, <literal>storagetype:pool</literal> is set to <literal>smis_pool</literal> and <literal>storagetype:provisioning </literal> is set to <literal>thick</literal>. Similarly for <literal>Standard Performance</literal>, <literal> storagetype:pool</literal>. is set to <literal>smis_pool2</literal> and <literal>storagetype:provisioning</literal> is set to <literal>thin</literal>. If <literal>storagetype:provisioning </literal> is not specified, it will default to <literal> thin</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/fujitsu-dx-volume-driver.xml:360(para)
msgid "Volume type names <literal>High Performance</literal> and <literal>Standard Performance</literal> are user-defined and can be any names. Extra spec keys <literal>storagetype:pool</literal> and <literal>storagetype:provisioning</literal> have to be the exact names listed here. Extra spec value <literal>smis_pool </literal> is your pool name. The extra spec value for <literal>storagetype:provisioning</literal> has to be either <literal>thick</literal> or <literal>thin</literal>. The driver will look for a volume type first. If the volume type is specified when creating a volume, the driver will look for the volume type definition and find the matching pool and provisioning type. If the volume type is not specified, it will fall back to use the <literal>StorageType</literal> tag in <filename> cinder_fujitsu_eternus_dx.xml</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:9(title)
msgid "Hitachi storage volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:10(para)
msgid "Hitachi storage volume driver provides iSCSI and Fibre Channel support for Hitachi storages."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:14(para)
msgid "Supported storages:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:17(para)
msgid "Hitachi Virtual Storage Platform G1000 (VSP G1000)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:20(para)
msgid "Hitachi Virtual Storage Platform (VSP)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:23(para)
msgid "Hitachi Unified Storage VM (HUS VM)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:26(para)
msgid "Hitachi Unified Storage 100 Family (HUS 100 Family)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:29(para)
msgid "Required software:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:32(para)
msgid "RAID Manager Ver 01-32-03/01 or later for VSP G1000/VSP/HUS VM"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:35(para)
msgid "Hitachi Storage Navigator Modular 2 (HSNM2) Ver 27.50 or later for HUS 100 Family"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:39(para)
msgid "HSNM2 needs to be installed under <filename>/usr/stonavm.</filename>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:41(para)
msgid "Required licenses:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:44(para)
msgid "Hitachi In-System Replication Software for VSP G1000/VSP/HUS VM"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:47(para)
msgid "(Mandatory) ShadowImage in-system replication for HUS 100 Family"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:50(para)
msgid "(Optional) Copy-on-Write Snapshot for HUS 100 Family"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:53(para)
msgid "Additionaly, the <application>pexpect</application> package is required."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:59(para)
msgid "Create, delete, attach and detach volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:62(para)
msgid "Create, list and delete volume snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:87(title)
msgid "Set up Hitachi storage"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:88(para)
msgid "You need to specify settings as described below. For details about each step, see the user's guide of the storage device. Use a storage administrative software such as Storage Navigator to set up the storage device so that LDEVs and host groups can be created and deleted, and LDEVs can be connected to the server and can be asynchronously copied."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:91(para)
msgid "Create a Dynamic Provisioning pool."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:94(para)
msgid "Connect the ports at the storage to the Controller node and Compute nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:97(para)
msgid "For VSP G1000/VSP/HUS VM, set \"port security\" to \"enable\" for the ports at the storage."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:100(para)
msgid "For HUS 100 Family, set \"Host Group security\"/\"iSCSI target security\" to \"ON\" for the ports at the storage."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:103(para)
msgid "For the ports at the storage, create host groups (iSCSI targets) whose names begin with HBSD- for the Controller node and each Compute node. Then register a WWN (initiator IQN) for each of the Controller node and Compute nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:106(para)
msgid "For VSP G1000/VSP/HUS VM, perform the following:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:109(para)
msgid "Create a storage device account belonging to the Administrator User Group. (To use multiple storage devices, create the same account name for all the target storage devices, and specify the same resource group and permissions.)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:113(para)
msgid "Create a command device (In-Band), and set user authentication to ON."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:116(para)
msgid "Register the created command device to the host group for the Controller node."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:119(para)
msgid "To use the Thin Image function, create a pool for Thin Image."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:124(para)
msgid "For HUS 100 Family, perform the following:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:127(para)
msgid "Use the command <placeholder-1/> to register the unit name and controller of the storage device to HSNM2."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:130(para)
msgid "When connecting via iSCSI, if you are using CHAP certification, specify the same user and password as that used for the storage port."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:137(title)
msgid "Set up Hitachi Gigabit Fibre Channel adaptor"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:138(para)
msgid "Change a parameter of the hfcldd driver and update the initram file if Hitachi Gigabit Fibre Channel adaptor is used."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:142(replaceable) ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:142(replaceable)
msgid "KERNEL_VERSION"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:146(title)
msgid "Set up Hitachi storage volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:149(para)
msgid "Create directory."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:154(para)
msgid "Create \"volume type\" and \"volume key\"."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:155(para)
msgid "This example shows that HUS100_SAMPLE is created as \"volume type\" and hus100_backend is registered as \"volume key\"."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:161(para)
msgid "Please specify any identical \"volume type\" name and \"volume key\"."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:163(para)
msgid "To confirm the created \"volume type\", please execute the following command:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:168(para)
msgid "Edit <filename>/etc/cinder/cinder.conf</filename> as follows."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:169(para)
msgid "If you use Fibre Channel:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:171(para)
msgid "If you use iSCSI:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:173(para)
msgid "Also, set <option>volume_backend_name</option> created by <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:175(para)
msgid "This table shows configuration options for Hitachi storage volume driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:179(para)
msgid "Restart Block Storage service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hitachi-storage-volume-driver.xml:181(para)
msgid "When the startup is done, \"MSGID0003-I: The storage backend can be used.\" is output into <filename>/var/log/cinder/volume.log</filename> as follows."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:6(title)
msgid "ProphetStor Fibre Channel and iSCSI drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:7(para)
msgid "ProhetStor Fibre Channel and iSCSI drivers add support for ProphetStor Flexvisor through OpenStack Block Storage. ProphetStor Flexvisor enables commodity x86 hardware as software-defined storage leveraging well-proven ZFS for disk management to provide enterprise grade storage services such as snapshots, data protection with different RAID levels, replication, and deduplication."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:13(para)
msgid "The <literal>DPLFCDriver</literal> and <literal>DPLISCSIDriver</literal> drivers run volume operations by communicating with the ProphetStor storage system over HTTPS."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:44(title)
msgid "Enable the Fibre Channel or iSCSI drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:45(para)
msgid "The <literal>DPLFCDriver</literal> and <literal>DPLISCSIDriver</literal> are installed with the OpenStack software."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:50(para)
msgid "Query storage pool id for configure <literal>dpl_pool</literal> of the <filename>cinder.conf</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:54(para)
msgid "Logon onto the storage system with administrator access."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:56(replaceable) ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:77(replaceable)
msgid "STORAGE IP ADDRESS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:59(para)
msgid "View the current usable pool id."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:64(para)
msgid "Use <literal>d5bd40b58ea84e9da09dcf25a01fdc07</literal> to config the <literal>dpl_pool</literal> of <filename>/etc/cinder/cinder.conf</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:69(para)
msgid "Other management command can reference by command help <placeholder-1/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:73(para)
msgid "Make the following changes on the volume node <filename>/etc/cinder/cinder.conf</filename> file."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:80(replaceable) ./doc/config-reference/compute/section_compute-cells.xml:316(replaceable)
msgid "USERNAME"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:83(replaceable) ./doc/config-reference/compute/section_compute-cells.xml:316(replaceable)
msgid "PASSWORD"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:104(para)
msgid "Save the changes to the <filename>/etc/cinder/cinder.conf</filename> file and restart the <systemitem class=\"service\">cinder-volume</systemitem> service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:110(para)
msgid "The ProphetStor Fibre Channel or iSCSI drivers are now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/prophetstor-dpl-driver.xml:114(para)
msgid "The following table contains the options supported by the ProphetStor storage driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml:6(title)
msgid "GlusterFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml:7(para)
msgid "GlusterFS is an open-source scalable distributed file system that is able to grow to petabytes and beyond in size. More information can be found on <link href=\"http://www.gluster.org/\">Gluster's homepage</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml:12(para)
msgid "This driver enables the use of GlusterFS in a similar fashion as NFS. It supports basic volume operations, including snapshot/clone."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml:16(para)
msgid "You must use a Linux kernel of version 3.4 or greater (or version 2.6.32 or greater in Red Hat Enterprise Linux/CentOS 6.3+) when working with Gluster-based volumes. See <link href=\"https://bugs.launchpad.net/nova/+bug/1177103\">Bug 1177103</link> for more information."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml:22(para)
msgid "To use Block Storage with GlusterFS, first set the <literal>volume_driver</literal> in <filename>cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml:26(para)
msgid "The following table contains the configuration options supported by the GlusterFS driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:4(title)
msgid "NFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:5(para)
msgid "The Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. An NFS server <emphasis>exports</emphasis> one or more of its file systems, known as <emphasis>shares</emphasis>. An NFS client can mount these exported shares on its own file system. You can perform file actions on this mounted remote file system as if the file system were local."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:13(title)
msgid "How the NFS driver works"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:14(para)
msgid "The NFS driver, and other drivers based on it, work quite differently than a traditional block storage driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:17(para)
msgid "The NFS driver does not actually allow an instance to access a storage device at the block level. Instead, files are created on an NFS share and mapped to instances, which emulates a block device. This works in a similar way to QEMU, which stores instances in the <filename>/var/lib/nova/instances</filename> directory."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:26(title)
msgid "Enable the NFS driver and related options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:27(para)
msgid "To use Cinder with the NFS driver, first set the <literal>volume_driver</literal> in <filename>cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:31(para)
msgid "The following table contains the options supported by the NFS driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:36(para)
msgid "As of the Icehouse release, the NFS driver (and other drivers based off it) will attempt to mount shares using version 4.1 of the NFS protocol (including pNFS). If the mount attempt is unsuccessful due to a lack of client or server support, a subsequent mount attempt that requests the default behavior of the <placeholder-1/> command will be performed. On most distributions, the default behavior is to attempt mounting first with NFS v4.0, then silently fall back to NFS v3.0 if necessary. If the <option>nfs_mount_options</option> configuration option contains a request for a specific version of NFS to be used, or if specific options are specified in the shares configuration file specified by the <option>nfs_shares_config</option> configuration option, the mount will be attempted as requested with no subsequent attempts."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:55(title)
msgid "How to use the NFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:58(para)
msgid "Access to one or more NFS servers. Creating an NFS server is outside the scope of this document. This example assumes access to the following NFS servers and mount points:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:64(literal)
msgid "192.168.1.200:/storage"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:67(literal)
msgid "192.168.1.201:/storage"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:70(literal)
msgid "192.168.1.202:/storage"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:73(para)
msgid "This example demonstrates the use of with this driver with multiple NFS servers. Multiple servers are not required. One is usually enough."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:78(para)
msgid "Add your list of NFS servers to the file you specified with the <literal>nfs_shares_config</literal> option. For example, if the value of this option was set to <literal>/etc/cinder/shares.txt</literal>, then:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:88(para)
msgid "Comments are allowed in this file. They begin with a <literal>#</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:92(para)
msgid "Configure the <literal>nfs_mount_point_base</literal> option. This is a directory where <systemitem class=\"service\">cinder-volume</systemitem> mounts all NFS shares stored in <literal>shares.txt</literal>. For this example, <literal>/var/lib/cinder/nfs</literal> is used. You can, of course, use the default value of <literal>$state_path/mnt</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:103(para)
msgid "Start the <systemitem class=\"service\">cinder-volume</systemitem> service. <literal>/var/lib/cinder/nfs</literal> should now contain a directory for each NFS share specified in <literal>shares.txt</literal>. The name of each directory is a hashed name:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:115(para)
msgid "You can now create volumes as you normally would:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:120(para)
msgid "This volume can also be attached and deleted just like other volumes. However, snapshotting is <emphasis>not</emphasis> supported."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:127(title)
msgid "NFS driver notes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:130(para)
msgid "<systemitem class=\"service\">cinder-volume</systemitem> manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to only have one <systemitem class=\"service\">cinder-volume</systemitem> service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one <systemitem class=\"service\">cinder-volume</systemitem> service is needed as well as potentially more than one NFS server."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:146(para)
msgid "Because data is stored in a file and not actually on a block storage device, you might not see the same IO performance as you would with a traditional block storage driver. Please test accordingly."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:153(para)
msgid "Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml:158(para)
msgid "Regular IO flushing and syncing still stands."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:6(title)
msgid "EMC VNX direct driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:7(para)
msgid "<literal>EMC VNX direct driver</literal> (consists of <literal>EMCCLIISCSIDriver</literal> and <literal>EMCCLIFCDriver</literal>) supports both iSCSI and FC protocol. <literal>EMCCLIISCSIDriver</literal> (VNX iSCSI direct driver) and <literal>EMCCLIFCDriver</literal> (VNX FC direct driver) are separately based on the <literal>ISCSIDriver</literal> and <literal>FCDriver</literal> defined in Block Storage."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:14(para)
msgid "<literal>EMCCLIISCSIDriver</literal> and <literal>EMCCLIFCDriver</literal> perform the volume operations by executing Navisphere CLI (NaviSecCLI) which is a command line interface used for management, diagnostics and reporting functions for VNX."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:19(title)
msgid "Supported OpenStack release"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:20(para)
msgid "<literal>EMC VNX direct driver</literal> supports the Juno release."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:26(para)
msgid "VNX Operational Environment for Block version 5.32 or higher."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:30(para)
msgid "VNX Snapshot and Thin Provisioning license should be activated for VNX."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:34(para)
msgid "Navisphere CLI v7.32 or higher is installed along with the driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:61(para)
msgid "Migrate a volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:70(para)
msgid "Create and delete consistency groups."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:73(para)
msgid "Create, list, and delete consistency group snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:78(title)
msgid "Preparation"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:79(para)
msgid "This section contains instructions to prepare the Block Storage nodes to use the EMC VNX direct driver. You install the Navisphere CLI, install the driver, ensure you have correct zoning configurations, and register the driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:84(title)
msgid "Install NaviSecCLI"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:85(para)
msgid "Navisphere CLI needs to be installed on all Block Storage nodes within an OpenStack deployment."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:89(para)
msgid "For Ubuntu x64, DEB is available at <link href=\"https://github.com/emc-openstack/naviseccli\">EMC OpenStack Github</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:94(para)
msgid "For all other variants of Linux, Navisphere CLI is available at <link href=\"https://support.emc.com/downloads/36656_VNX2-Series\"> Downloads for VNX2 Series</link> or <link href=\"https://support.emc.com/downloads/12781_VNX1-Series\"> Downloads for VNX1 Series</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:101(para)
msgid "After installation, set the security level of Navisphere CLI to low:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:108(title)
msgid "Install Block Storage driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:109(para)
msgid "Both <literal>EMCCLIISCSIDriver</literal> and <literal>EMCCLIFCDriver</literal> are provided in the installer package:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:114(filename)
msgid "emc_vnx_cli.py"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:117(para)
msgid "<filename>emc_cli_fc.py</filename> (for <option>EMCCLIFCDriver</option>)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:121(para)
msgid "<filename>emc_cli_iscsi.py</filename> (for <option>EMCCLIISCSIDriver</option>)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:125(para)
msgid "Copy the files above to the <filename>cinder/volume/drivers/emc/</filename> directory of the OpenStack node(s) where <systemitem class=\"service\">cinder-volume</systemitem> is running."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:130(title)
msgid "FC zoning with VNX (<literal>EMCCLIFCDriver</literal> only)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:131(para)
msgid "A storage administrator must enable FC SAN auto zoning between all OpenStack nodes and VNX if FC SAN auto zoning is not enabled."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:135(title)
msgid "Register with VNX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:136(para)
msgid "Register the compute nodes with VNX to access the storage in VNX or enable initiator auto registration."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:138(para)
msgid "To perform \"Copy Image to Volume\" and \"Copy Volume to Image\" operations, the nodes running the <systemitem class=\"service\">cinder-volume</systemitem> service(Block Storage nodes) must be registered with the VNX as well."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:141(para)
msgid "Steps mentioned below are for a compute node. Please follow the same steps for the Block Storage nodes also. The steps can be skipped if initiator auto registration is enabled."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:145(title)
msgid "EMCCLIFCDriver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:146(para)
msgid "Steps for <literal>EMCCLIFCDriver</literal>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:149(para)
msgid "Assume <literal>20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2</literal> is the WWN of a FC initiator port name of the compute node whose hostname and IP are <literal>myhost1</literal> and <literal>10.10.61.1</literal>. Register <literal>20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2</literal> in Unisphere:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:156(para)
msgid "Login to Unisphere, go to <guibutton>FNM0000000000-&gt;Hosts-&gt;Initiators</guibutton>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:159(para)
msgid "Refresh and wait until the initiator <literal> 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2</literal> with SP Port <literal>A-1</literal> appears."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:162(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:224(para)
msgid "Click the <guibutton>Register</guibutton> button, select <guilabel>CLARiiON/VNX</guilabel> and enter the hostname and IP address:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:167(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:229(para)
msgid "Hostname : <literal>myhost1</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:170(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:232(para)
msgid "IP : <literal>10.10.61.1</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:173(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:235(para)
msgid "Click <guibutton>Register</guibutton>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:177(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:239(para)
msgid "Then host <literal>10.10.61.1</literal> will appear under <guibutton>Hosts-&gt;Host List</guibutton> as well."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:182(para)
msgid "Register the wwn with more ports if needed."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:186(title)
msgid "EMCCLIISCSIDriver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:187(para)
msgid "Steps for <literal>EMCCLIISCSIDriver</literal>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:189(para)
msgid "On the compute node with IP address <literal>10.10.61.1</literal> and hostname <literal>myhost1</literal>, execute the following commands (assuming <literal>10.10.61.35</literal> is the iSCSI target):"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:194(para)
msgid "Start the iSCSI initiator service on the node"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:197(para)
msgid "Discover the iSCSI target portals on VNX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:200(para)
msgid "Enter <filename>/etc/iscsi</filename>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:203(para)
msgid "Find out the iqn of the node"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:208(para)
msgid "Login to VNX from the compute node using the target corresponding to the SPA port:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:213(para)
msgid "Assume <literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal> is the initiator name of the compute node. Register <literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal> in Unisphere:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:218(para)
msgid "Login to Unisphere, go to <guibutton>FNM0000000000-&gt;Hosts-&gt;Initiators </guibutton>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:221(para)
msgid "Refresh and wait until the initiator <literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal> with SP Port <literal>A-8v0</literal> appears."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:244(para) ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:255(para)
msgid "Logout iSCSI on the node:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:248(para)
msgid "Login to VNX from the compute node using the target corresponding to the SPB port:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:253(para)
msgid "In Unisphere register the initiator with the SPB port."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:259(para)
msgid "Register the iqn with more ports if needed."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:265(title)
msgid "Backend configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:266(para)
msgid "Make the following changes in the <filename>/etc/cinder/cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:296(para)
msgid "where <literal>san_ip</literal> is one of the SP IP addresses of the VNX array and <literal>san_secondary_ip</literal> is the other SP IP address of VNX array. <literal>san_secondary_ip</literal> is an optional field, and it serves the purpose of providing a high availability(HA) design. In case that one SP is down, the other SP can be connected automatically. <literal>san_ip</literal> is a mandatory field, which provides the main connection."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:305(para)
msgid "where <literal>Pool_01_SAS</literal> is the pool from which the user wants to create volumes. The pools can be created using Unisphere for VNX. Refer to the <xref linkend=\"emc-vnx-direct-multipool\"/> on how to manage multiple pools."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:311(para)
msgid "where <literal>storage_vnx_security_file_dir</literal> is the directory path of the VNX security file. Make sure the security file is generated following the steps in <xref linkend=\"emc-vnx-direct-auth\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:317(para)
msgid "where <literal>iscsi_initiators</literal> is a dictionary of IP addresses of the iSCSI initiator ports on all OpenStack nodes which want to connect to VNX via iSCSI. If this option is configured, the driver will leverage this information to find an accessible iSCSI target portal for the initiator when attaching volume. Otherwise, the iSCSI target portal will be chosen in a relative random way."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:325(para)
msgid "Restart <systemitem class=\"service\">cinder-volume</systemitem> service to make the configuration change take effect."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:331(title)
msgid "Authentication"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:332(para)
msgid "VNX credentials are necessary when the driver connects to the VNX system. Credentials in global, local and ldap scopes are supported. There are two approaches to provide the credentials."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:335(para)
msgid "The recommended one is using the Navisphere CLI security file to provide the credentials which can get rid of providing the plain text credentials in the configuration file. Following is the instruction on how to do this."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:340(para)
msgid "Find out the linux user id of the <filename>/usr/bin/cinder-volume</filename> processes. Assuming the service <filename>/usr/bin/cinder-volume</filename> is running by account <literal>cinder</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:345(para)
msgid "Switch to <literal>root</literal> account"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:347(para)
msgid "Change <literal>cinder:x:113:120::/var/lib/cinder:/bin/false</literal> to <literal>cinder:x:113:120::/var/lib/cinder:/bin/bash</literal> in <filename>/etc/passwd</filename> (This temporary change is to make step 4 work)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:352(para)
msgid "Save the credentials on behalf of <literal>cinder</literal> user to a security file (assuming the array credentials are <literal>admin/admin</literal> in <literal>global</literal> scope). In below command, switch <literal>-secfilepath</literal> is used to specify the location to save the security file (assuming saving to directory <filename>/etc/secfile/array1</filename>)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:359(para)
msgid "Save the security file to the different locations for different arrays except where the same credentials are shared between all arrays managed by the host. Otherwise, the credentials in the security file will be overwritten. If <literal>-secfilepath</literal> is not specified in the command above, the security file will be saved to the default location which is the home directory of the executor."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:366(para)
msgid "Change <literal>cinder:x:113:120::/var/lib/cinder:/bin/bash</literal> back to <literal>cinder:x:113:120::/var/lib/cinder:/bin/false</literal> in <filename>/etc/passwd</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:369(para)
msgid "Remove the credentials options <literal>san_login</literal>, <literal>san_password</literal> and <literal>storage_vnx_authentication_type</literal> from <filename>cinder.conf</filename> (normally it is <filename>/etc/cinder/cinder.conf</filename>). Add the option <literal>storage_vnx_security_file_dir</literal> and set its value to the directory path supplied with switch <literal>-secfilepath</literal> in step 4. Omit this option if <literal>-secfilepath</literal> is not used in step 4."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:380(para)
msgid "Restart <systemitem class=\"service\">cinder-volume</systemitem> service to make the change take effect."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:383(para)
msgid "Alternatively, the credentials can be specified in <filename>/etc/cinder/cinder.conf</filename> through the three options below:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:394(title)
msgid "Restriction of deployment"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:395(para)
msgid "It does not suggest to deploy the driver on a compute node if <literal>cinder upload-to-image --force True</literal> is used against an in-use volume. Otherwise, <literal>cinder upload-to-image --force True</literal> will terminate the vm instance's data access to the volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:402(title)
msgid "Restriction of volume extension"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:403(para)
msgid "VNX does not support to extend the thick volume which has a snapshot. If the user tries to extend a volume which has a snapshot, the volume's status would change to <literal>error_extending</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:409(title)
msgid "Provisioning type (thin, thick, deduplicated and compressed)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:410(para)
msgid "User can specify extra spec key <literal>storagetype:provisioning</literal> in volume type to set the provisioning type of a volume. The provisioning type can be <literal>thick</literal>, <literal>thin</literal>, <literal>deduplicated</literal> or <literal>compressed</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:416(para)
msgid "<literal>thick</literal> provisioning type means the volume is fully provisioned."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:420(para)
msgid "<literal>thin</literal> provisioning type means the volume is virtually provisioned."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:424(para)
msgid "<literal>deduplicated</literal> provisioning type means the volume is virtually provisioned and the deduplication is enabled on it. Administrator shall go to VNX to configure the system level deduplication settings. To create a deduplicated volume, the VNX deduplication license should be activated on VNX first, and use key <literal>deduplication_support=True</literal> to let Block Storage scheduler find a volume backend which manages a VNX with deduplication license activated."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:434(para)
msgid "<literal>compressed</literal> provisioning type means the volume is virtually provisioned and the compression is enabled on it. Administrator shall go to the VNX to configure the system level compression settings. To create a compressed volume, the VNX compression license should be activated on VNX first, and the user should specify key <literal>compression_support=True</literal> to let Block Storage scheduler find a volume backend which manages a VNX with compression license activated. VNX does not support to create a snapshot on a compressed volume. If the user tries to create a snapshot on a compressed volume, the operation would fail and OpenStack would show the new snapshot in error state."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:446(para)
msgid "Here is an example about how to create a volume with provisioning type. Firstly create a volume type and specify storage pool in the extra spec, then create a volume with this volume type:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:457(para)
msgid "In the example above, four volume types are created: <literal>ThickVolume</literal>, <literal>ThinVolume</literal>, <literal>DeduplicatedVolume</literal> and <literal>CompressedVolume</literal>. For <literal>ThickVolume</literal>, <literal>storagetype:provisioning</literal> is set to <literal>thick</literal>. Similarly for other volume types. If <literal>storagetype:provisioning</literal> is not specified or an invalid value, the default value <literal>thick</literal> is adopted."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:464(para)
msgid "Volume type name, such as <literal>ThickVolume</literal>, is user-defined and can be any name. Extra spec key <literal>storagetype:provisioning</literal> shall be the exact name listed here. Extra spec value for <literal>storagetype:provisioning</literal> shall be <literal>thick</literal>, <literal>thin</literal>, <literal>deduplicated</literal> or <literal>compressed</literal>. During volume creation, if the driver finds <literal>storagetype:provisioning</literal> in the extra spec of the volume type, it will create the volume with the provisioning type accordingly. Otherwise, the volume will be thick as the default."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:475(title)
msgid "Fully automated storage tiering support"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:476(para)
msgid "VNX supports Fully automated storage tiering which requires the FAST license activated on the VNX. The OpenStack administrator can use the extra spec key <literal>storagetype:tiering</literal> to set the tiering policy of a volume and use the extra spec key <literal>fast_support=True</literal> to let Block Storage scheduler find a volume backend which manages a VNX with FAST license activated. Here are the five supported values for the extra spec key <literal>storagetype:tiering</literal>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:486(para)
msgid "<literal>StartHighThenAuto</literal> (Default option)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:489(literal)
msgid "Auto"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:492(literal)
msgid "HighestAvailable"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:495(literal)
msgid "LowestAvailable"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:498(literal)
msgid "NoMovement"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:501(para)
msgid "Tiering policy can not be set for a deduplicated volume. The user can check storage pool properties on VNX to know the tiering policy of a deduplicated volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:504(para)
msgid "Here is an example about how to create a volume with tiering policy:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:511(title)
msgid "FAST Cache support"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:512(para)
msgid "VNX has FAST Cache feature which requires the FAST Cache license activated on the VNX. The OpenStack administrator can use the extra spec key <literal>fast_cache_enabled</literal> to choose whether to create a volume on the volume backend which manages a pool with FAST Cache enabled. This feature is only supported by pool-based backend (Refer to <xref linkend=\"emc-vnx-direct-multipool\"/>). The value of the extra spec key <literal>fast_cache_enabled</literal> is either <literal>True</literal> or <literal>False</literal>. When creating a volume, if the key <literal>fast_cache_enabled</literal> is set to <literal>True</literal> in the volume type, the volume will be created by a pool-based backend which manages a pool with FAST Cache enabled."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:525(title)
msgid "Storage group automatic deletion"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:526(para)
msgid "For volume attaching, the driver has a storage group on VNX for each compute node hosting the vm instances that are going to consume VNX Block Storage (using the compute node's hostname as the storage group's name). All the volumes attched to the vm instances in a computer node will be put into the corresponding Storage Group. If <literal>destroy_empty_storage_group=True</literal>, the driver will remove the empty storage group when its last volume is detached. For data safety, it does not suggest to set the option <literal>destroy_empty_storage_group=True</literal> unless the VNX is exclusively managed by one Block Storage node because consistent <literal>lock_path</literal> is required for operation synchronization for this behavior."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:540(title)
msgid "EMC storage-assisted volume migration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:541(para)
msgid "<literal>EMC VNX direct driver</literal> supports storage-assisted volume migration, when the user starts migrating with <literal>cinder migrate --force-host-copy False volume_id host</literal> or <literal>cinder migrate volume_id host</literal>, cinder will try to leverage the VNX's native volume migration functionality."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:546(para)
msgid "In the following scenarios, VNX native volume migration will not be triggered:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:550(para)
msgid "Volume migration between backends with different storage protocol, ex, FC and iSCSI."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:554(para)
msgid "Volume migration from pool-based backend to array-based backend."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:558(para)
msgid "Volume is being migrated across arrays."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:563(title)
msgid "Initiator auto registration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:564(para)
msgid "If <literal>initiator_auto_registration=True</literal>, the driver will automatically register iSCSI initiators with all working iSCSI target ports on the VNX array during volume attaching (The driver will skip those initiators that have already been registered)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:568(para)
msgid "If the user wants to register the initiators with some specific ports on VNX but not register with the other ports, this functionality should be disabled."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:573(title)
msgid "Read-only volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:574(para)
msgid "OpenStack supports read-only volumes. Either of the following commands can be used to set a volume to read-only."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:578(para)
msgid "After a volume is marked as read-only, the driver will forward the information when a hypervisor is attaching the volume and the hypervisor will have an implementation-specific way to make sure the volume is not written."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:584(title)
msgid "Multiple pools support"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:585(para)
msgid "Normally a storage pool is configured for a Block Storage backend (named as pool-based backend), so that only that storage pool will be used by that Block Storage backend."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:588(para)
msgid "If <literal>storage_vnx_pool_name</literal> is not given in the configuration file, the driver will allow user to use the extra spec key <literal>storagetype:pool</literal> in the volume type to specify the storage pool for volume creation. If <literal>storagetype:pool</literal> is not specified in the volume type and <literal>storage_vnx_pool_name</literal> is not found in the configuration file, the driver will randomly choose a pool to create the volume. This kind of Block Storage backend is named as array-based backend."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:596(para)
msgid "Here is an example about configuration of array-based backend:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:606(para)
msgid "In this configuration, if the user wants to create a volume on a certain storage pool, a volume type with a extra spec specified the storage pool should be created first, then the user can use this volume type to create the volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:610(para)
msgid "Here is an example about creating the volume type:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:614(para)
msgid "Multiple pool support is still an experimental workaround before blueprint pool-aware-cinder-scheduler is introduced. It is NOT recommended to enable this feature since Juno just supports pool-aware-cinder-scheduler. In later driver update, the driver side change which cooperates with pool-aware-cinder-scheduler will be introduced."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:621(title)
msgid "FC SAN auto zoning"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:622(para)
msgid "EMC direct driver supports FC SAN auto zoning when ZoneManager is configured. Set <literal>zoning_mode</literal> to <literal>fabric</literal> in backend configuration section to enable this feature. For ZoneManager configuration, please refer to <xref linkend=\"section_fc-zoning\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:629(title)
msgid "Multi-backend configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vnx-direct-driver.xml:663(para)
msgid "For more details on multi-backend, see <link href=\"http://docs.openstack.org/admin-guide-cloud/content/multi_backend.html\">OpenStack Cloud Administration Guide</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:5(title)
msgid "IBM Storwize family and SVC volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:6(para)
msgid "The volume management driver for Storwize family and SAN Volume Controller (SVC) provides OpenStack Compute instances with access to IBM Storwize family or SVC storage systems."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:11(title)
msgid "Configure the Storwize family and SVC system"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:13(title) ./doc/config-reference/compute/section_hypervisor_vmware.xml:263(td)
msgid "Network configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:14(para)
msgid "The Storwize family or SVC system must be configured for iSCSI, Fibre Channel, or both."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:16(para)
msgid "If using iSCSI, each Storwize family or SVC node should have at least one iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP address associated with the volume's preferred node (if available) to attach the volume to the instance, otherwise it uses the first available iSCSI IP address of the system. The driver obtains the iSCSI IP address directly from the storage system; you do not need to provide these iSCSI IP addresses directly to the driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:27(para)
msgid "If using iSCSI, ensure that the compute nodes have iSCSI network access to the Storwize family or SVC system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:32(para)
msgid "OpenStack Nova's Grizzly version supports iSCSI multipath. Once this is configured on the Nova host (outside the scope of this documentation), multipath is enabled."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:37(para)
msgid "If using Fibre Channel (FC), each Storwize family or SVC node should have at least one WWPN port configured. If the <literal>storwize_svc_multipath_enabled</literal> flag is set to True in the Cinder configuration file, the driver uses all available WWPNs to attach the volume to the instance (details about the configuration flags appear in the <link linkend=\"ibm-storwize-svc-driver2\"> next section</link>). If the flag is not set, the driver uses the WWPN associated with the volume's preferred node (if available), otherwise it uses the first available WWPN of the system. The driver obtains the WWPNs directly from the storage system; you do not need to provide these WWPNs directly to the driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:54(para)
msgid "If using FC, ensure that the compute nodes have FC connectivity to the Storwize family or SVC system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:60(title)
msgid "iSCSI CHAP authentication"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:75(para)
msgid "Not all OpenStack Compute drivers support CHAP authentication. Please check compatibility before using."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:61(para)
msgid "If using iSCSI for data access and the <literal>storwize_svc_iscsi_chap_enabled</literal> is set to <literal>True</literal>, the driver will associate randomly-generated CHAP secrets with all hosts on the Storwize family system. OpenStack compute nodes use these secrets when creating iSCSI connections. <placeholder-1/><placeholder-2/><placeholder-3/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:87(title)
msgid "Configure storage pools"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:88(para)
msgid "Each instance of the IBM Storwize/SVC driver allocates all volumes in a single pool. The pool should be created in advance and be provided to the driver using the <literal>storwize_svc_volpool_name</literal> configuration flag. Details about the configuration flags and how to provide the flags to the driver appear in the <link linkend=\"ibm-storwize-svc-driver2\"> next section</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:99(title)
msgid "Configure user authentication for the driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:101(para)
msgid "The driver requires access to the Storwize family or SVC system management interface. The driver communicates with the management using SSH. The driver should be provided with the Storwize family or SVC management IP using the <literal>san_ip</literal> flag, and the management port should be provided by the <literal>san_ssh_port</literal> flag. By default, the port value is configured to be port 22 (SSH)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:111(para)
msgid "Make sure the compute node running the <systemitem class=\"service\">cinder-volume</systemitem> management driver has SSH network access to the storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:118(para)
msgid "To allow the driver to communicate with the Storwize family or SVC system, you must provide the driver with a user on the storage system. The driver has two authentication methods: password-based authentication and SSH key pair authentication. The user should have an Administrator role. It is suggested to create a new user for the management driver. Please consult with your storage and security administrator regarding the preferred authentication method and how passwords or SSH keys should be stored in a secure manner."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:129(para)
msgid "When creating a new user on the Storwize or SVC system, make sure the user belongs to the Administrator group or to another group that has an Administrator role."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:134(para)
msgid "If using password authentication, assign a password to the user on the Storwize or SVC system. The driver configuration flags for the user and password are <literal>san_login</literal> and <literal>san_password</literal>, respectively."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:140(para)
msgid "If you are using the SSH key pair authentication, create SSH private and public keys using the instructions below or by any other method. Associate the public key with the user by uploading the public key: select the \"choose file\" option in the Storwize family or SVC management GUI under \"SSH public key\". Alternatively, you may associate the SSH public key using the command line interface; details can be found in the Storwize and SVC documentation. The private key should be provided to the driver using the <literal>san_private_key</literal> configuration flag."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:154(title)
msgid "Create a SSH key pair with OpenSSH"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:155(para)
msgid "You can create an SSH key pair using OpenSSH, by running:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:158(para)
msgid "The command prompts for a file to save the key pair. For example, if you select 'key' as the filename, two files are created: <literal>key</literal> and <literal>key.pub</literal>. The <literal>key</literal> file holds the private SSH key and <literal>key.pub</literal> holds the public SSH key."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:165(para)
msgid "The command also prompts for a pass phrase, which should be empty."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:167(para)
msgid "The private key file should be provided to the driver using the <literal>san_private_key</literal> configuration flag. The public key should be uploaded to the Storwize family or SVC system using the storage management GUI or command line interface."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:173(para)
msgid "Ensure that Cinder has read permissions on the private key file."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:179(title)
msgid "Configure the Storwize family and SVC driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:181(title)
msgid "Enable the Storwize family and SVC driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:182(para)
msgid "Set the volume driver to the Storwize family and SVC driver by setting the <literal>volume_driver</literal> option in <filename>cinder.conf</filename> as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:189(title)
msgid "Storwize family and SVC driver options in cinder.conf"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:191(para)
msgid "The following options specify default values for all volumes. Some can be over-ridden using volume types, which are described below."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:195(caption)
msgid "List of configuration flags for Storwize storage and SVC driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:211(literal)
msgid "san_ip"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:215(para)
msgid "Management IP or host name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:219(literal)
msgid "san_ssh_port"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:222(para)
msgid "22"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:223(para)
msgid "Management port"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:226(literal)
msgid "san_login"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:230(para)
msgid "Management login username"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:234(literal)
msgid "san_password"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:238(para)
msgid "The authentication requires either a password (<literal>san_password</literal>) or SSH private key (<literal>san_private_key</literal>). One must be specified. If both are specified, the driver uses only the SSH private key."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:236(para)
msgid "Required <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:248(para)
msgid "Management login password"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:252(literal)
msgid "san_private_key"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:254(para)
msgid "Required <footnoteref linkend=\"storwize-svc-fn1\"/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:258(para)
msgid "Management login SSH private key"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:264(literal)
msgid "storwize_svc_volpool_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:268(para)
msgid "Default pool name for volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:273(literal)
msgid "storwize_svc_vol_rsize"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:276(para)
msgid "2"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:280(para)
msgid "The driver creates thin-provisioned volumes by default. The <literal>storwize_svc_vol_rsize</literal> flag defines the initial physical allocation percentage for thin-provisioned volumes, or if set to <literal>-1</literal>, the driver creates full allocated volumes. More details about the available options are available in the Storwize family and SVC documentation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:277(para)
msgid "Initial physical allocation (percentage) <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:297(literal)
msgid "storwize_svc_vol_warning"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:300(para)
msgid "0 (disabled)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:301(para)
msgid "Space allocation warning threshold (percentage) <footnoteref linkend=\"storwize-svc-fn3\"/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:308(literal)
msgid "storwize_svc_vol_autoexpand"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:311(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:367(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:427(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:453(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:405(para)
msgid "True"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:314(para)
msgid "Defines whether thin-provisioned volumes can be auto expanded by the storage system, a value of <literal>True</literal> means that auto expansion is enabled, a value of <literal>False</literal> disables auto expansion. Details about this option can be found in the <literal>autoexpand</literal> flag of the Storwize family and SVC command line interface <literal>mkvdisk</literal> command."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:312(para)
msgid "Enable or disable volume auto expand <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:332(literal)
msgid "storwize_svc_vol_grainsize"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:335(para)
msgid "256"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:336(para)
msgid "Volume grain size <footnoteref linkend=\"storwize-svc-fn3\"/> in KB"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:342(literal)
msgid "storwize_svc_vol_compression"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:346(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:437(para) ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:89(replaceable)
msgid "False"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:350(para)
msgid "Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:347(para)
msgid "Enable or disable Real-time Compression <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:364(literal)
msgid "storwize_svc_vol_easytier"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:370(para)
msgid "Defines whether Easy Tier is used for the volumes created with OpenStack. Details on EasyTier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:368(para)
msgid "Enable or disable Easy Tier <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:382(literal)
msgid "storwize_svc_vol_iogrp"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:386(para)
msgid "The I/O group in which to allocate vdisks"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:391(literal)
msgid "storwize_svc_flashcopy_timeout"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:395(para)
msgid "120"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:398(para)
msgid "The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:396(para)
msgid "FlashCopy timeout threshold <placeholder-1/> (seconds)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:412(literal)
msgid "storwize_svc_connection_protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:417(para)
msgid "Connection protocol to use (currently supports 'iSCSI' or 'FC')"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:423(literal)
msgid "storwize_svc_iscsi_chap_enabled"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:428(para)
msgid "Configure CHAP authentication for iSCSI connections"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:433(literal)
msgid "storwize_svc_multipath_enabled"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:440(para)
msgid "Multipath for iSCSI connections requires no storage-side configuration and is enabled if the compute host has multipath configured."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:438(para)
msgid "Enable multipath for FC connections <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:449(literal)
msgid "storwize_svc_multihost_enabled"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:457(para)
msgid "This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:454(para)
msgid "Enable mapping vdisks to multiple hosts <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:478(title)
msgid "Placement with volume types"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:479(para)
msgid "The IBM Storwize/SVC driver exposes capabilities that can be added to the <literal>extra specs</literal> of volume types, and used by the filter scheduler to determine placement of new volumes. Make sure to prefix these keys with <literal>capabilities:</literal> to indicate that the scheduler should use them. The following <literal>extra specs</literal> are supported:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:490(para)
msgid "capabilities:volume_back-end_name - Specify a specific back-end where the volume should be created. The back-end name is a concatenation of the name of the IBM Storwize/SVC storage system as shown in <literal>lssystem</literal>, an underscore, and the name of the pool (mdisk group). For example: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:501(para)
msgid "capabilities:compression_support - Specify a back-end according to compression support. A value of <literal>True</literal> should be used to request a back-end that supports compression, and a value of <literal>False</literal> will request a back-end that does not support compression. If you do not have constraints on compression support, do not set this key. Note that specifying <literal>True</literal> does not enable compression; it only requests that the volume be placed on a back-end that supports compression. Example syntax: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:517(para)
msgid "capabilities:easytier_support - Similar semantics as the <literal>compression_support</literal> key, but for specifying according to support of the Easy Tier feature. Example syntax: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:525(para)
msgid "capabilities:storage_protocol - Specifies the connection protocol used to attach volumes of this type to instances. Legal values are <literal>iSCSI</literal> and <literal>FC</literal>. This <literal>extra specs</literal> value is used for both placement and setting the protocol used for this volume. In the example syntax, note &lt;in&gt; is used as opposed to &lt;is&gt; used in the previous examples. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:540(title)
msgid "Configure per-volume creation options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:541(para)
msgid "Volume types can also be used to pass options to the IBM Storwize/SVC driver, which over-ride the default values set in the configuration file. Contrary to the previous examples where the \"capabilities\" scope was used to pass parameters to the Cinder scheduler, options can be passed to the IBM Storwize/SVC driver with the \"drivers\" scope."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:548(para)
msgid "The following <literal>extra specs</literal> keys are supported by the IBM Storwize/SVC driver:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:552(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:662(para)
msgid "rsize"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:555(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:665(para)
msgid "warning"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:558(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:668(para)
msgid "autoexpand"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:561(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:671(para)
msgid "grainsize"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:564(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:674(para)
msgid "compression"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:567(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:677(para)
msgid "easytier"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:570(para)
msgid "multipath"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:573(para) ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:680(para)
msgid "iogrp"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:576(para)
msgid "These keys have the same semantics as their counterparts in the configuration file. They are set similarly; for example, <literal>rsize=2</literal> or <literal>compression=False</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:582(title)
msgid "Example: Volume types"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:583(para)
msgid "In the following example, we create a volume type to specify a controller that supports iSCSI and compression, to use iSCSI when attaching the volume, and to enable compression:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:589(para)
msgid "We can then create a 50GB volume using this type:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:593(para)
msgid "Volume types can be used, for example, to provide users with different"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:597(para)
msgid "performance levels (such as, allocating entirely on an HDD tier, using Easy Tier for an HDD-SDD mix, or allocating entirely on an SSD tier)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:603(para)
msgid "resiliency levels (such as, allocating volumes in pools with different RAID levels)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:608(para)
msgid "features (such as, enabling/disabling Real-time Compression)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:615(title)
msgid "Operational notes for the Storwize family and SVC driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:618(title)
msgid "Migrate volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:619(para)
msgid "In the context of OpenStack Block Storage's volume migration feature, the IBM Storwize/SVC driver enables the storage's virtualization technology. When migrating a volume from one pool to another, the volume will appear in the destination pool almost immediately, while the storage moves the data in the background."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:627(para)
msgid "To enable this feature, both pools involved in a given volume migration must have the same values for <literal>extent_size</literal>. If the pools have different values for <literal>extent_size</literal>, the data will still be moved directly between the pools (not host-side copy), but the operation will be synchronous."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:638(title)
msgid "Extend volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:639(para)
msgid "The IBM Storwize/SVC driver allows for extending a volume's size, but only for volumes without snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:645(para)
msgid "Snapshots are implemented using FlashCopy with no background copy (space-efficient). Volume clones (volumes created from existing volumes) are implemented with FlashCopy, but with background copy enabled. This means that volume clones are independent, full copies. While this background copy is taking place, attempting to delete or extend the source volume will result in that operation waiting for the copy to complete."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:656(title)
msgid "Volume retype"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:657(para)
msgid "The IBM Storwize/SVC driver enables you to modify volume types. When you modify volume types, you can also change these extra specs properties:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:684(para)
msgid "When you change the <literal>rsize</literal>, <literal>grainsize</literal> or <literal>compression</literal> properties, volume copies are asynchronously synchronized on the array."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml:691(para)
msgid "To change the <literal>iogrp</literal> property, IBM Storwize/SVC firmware version 6.4.0 or later is required."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:5(title)
msgid "NetApp unified driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:6(para)
msgid "The NetApp unified driver is a block storage driver that supports multiple storage families and protocols. A storage family corresponds to storage systems built on different NetApp technologies such as clustered Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. The storage protocol refers to the protocol used to initiate data storage and access operations on those storage systems like iSCSI and NFS. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family using a specified storage protocol. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and protocols."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:21(para)
msgid "With the Juno release of OpenStack, OpenStack Block Storage has introduced the concept of \"storage pools\", in which a single OpenStack Block Storage back end may present one or more logical storage resource pools from which OpenStack Block Storage will select as a storage location when provisioning volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:26(para)
msgid "In releases prior to Juno, the NetApp unified driver contained some \"scheduling\" logic that determined which NetApp storage container (namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for E-Series) that a new OpenStack Block Storage volume would be placed into."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:31(para)
msgid "With the introduction of pools, all scheduling logic is performed completely within the OpenStack Block Storage scheduler, as each NetApp storage container is directly exposed to the OpenStack Block Storage scheduler as a storage pool; whereas previously, the NetApp unified driver presented an aggregated view to the scheduler and made a final placement decision as to which NetApp storage container the OpenStack Block Storage volume would be provisioned into."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:41(title)
msgid "NetApp clustered Data ONTAP storage family"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:42(para)
msgid "The NetApp clustered Data ONTAP storage family represents a configuration group which provides OpenStack compute instances access to clustered Data ONTAP storage systems. At present it can be configured in OpenStack Block Storage to work with iSCSI and NFS storage protocols."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:48(title)
msgid "NetApp iSCSI configuration for clustered Data ONTAP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:50(para)
msgid "The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems for provisioning and managing the SAN block storage entity; that is, a NetApp LUN which can be accessed using the iSCSI protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:55(para)
msgid "The iSCSI configuration for clustered Data ONTAP is a direct interface from OpenStack Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:62(title)
msgid "Configuration options for clustered Data ONTAP family with iSCSI protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:64(para)
msgid "Configure the volume driver, storage family and storage protocol to the NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by setting the <option>volume_driver</option>, <option>netapp_storage_family</option> and <option>netapp_storage_protocol</option> options in <filename>cinder.conf</filename> as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:75(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:139(replaceable)
msgid "openstack-vserver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:76(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:140(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:348(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:405(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:484(replaceable)
msgid "myhostname"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:77(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:141(replaceable)
msgid "port"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:81(para) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:353(para) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:501(para)
msgid "To use the iSCSI protocol, you must override the default value of <option>netapp_storage_protocol</option> with <literal>iscsi</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:91(para) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:155(para)
msgid "If you specify an account in the <option>netapp_login</option> that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:101(para) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:274(para) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:363(para) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:420(para) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:511(para)
msgid "For more information on these options and other deployment and operational scenarios, visit the <link href=\"http://netapp.github.io/openstack-deploy-ops-guide/\"> NetApp OpenStack Deployment and Operations Guide</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:111(title)
msgid "NetApp NFS configuration for clustered Data ONTAP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:113(para)
msgid "The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system for provisioning and managing OpenStack volumes on NFS exports provided by the clustered Data ONTAP system that are accessed using the NFS protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:119(para)
msgid "The NFS configuration for clustered Data ONTAP is a direct interface from OpenStack Block Storage to the clustered Data ONTAP instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:126(title)
msgid "Configuration options for the clustered Data ONTAP family with NFS protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:128(para)
msgid "Configure the volume driver, storage family, and storage protocol to NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the <option>volume_driver</option>, <option>netapp_storage_family</option> and <option>netapp_storage_protocol</option> options in <filename>cinder.conf</filename> as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:144(replaceable) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:409(replaceable)
msgid "/etc/cinder/nfs_shares"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:148(para)
msgid "Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: <xref linkend=\"config_table_cinder_storage_nfs\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:166(title)
msgid "NetApp NFS Copy Offload client"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:167(para)
msgid "A feature was added in the Icehouse release of the NetApp unified driver that enables Image Service images to be efficiently copied to a destination Block Storage volume. When the Block Storage and Image Service are configured to use the NetApp NFS Copy Offload client, a controller-side copy will be attempted before reverting to downloading the image from the Image Service. This improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the host(s) running the Image and Block Storage services. This is due to the copy operation being performed completely within the storage cluster."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:176(para)
msgid "The NetApp NFS Copy Offload client can be used in either of the following scenarios:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:180(para)
msgid "The Image Service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume <emphasis>and</emphasis> the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image Service. Both FlexVols must be located within the same cluster."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:187(para)
msgid "The source image from the Image Service has already been cached in an NFS image cache within a Block Storage backend. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVols must be located within the same cluster."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:193(para)
msgid "To use this feature, you must configure the Image Service, as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:197(para)
msgid "Set the <option>default_store</option> configuration option to <literal>file</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:202(para)
msgid "Set the <option>filesystem_store_datadir</option> configuration option to the path to the Image Service NFS export."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:207(para)
msgid "Set the <option>show_image_direct_url</option> configuration option to <literal>True</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:212(para)
msgid "Set the <option>show_multiple_locations</option> configuration option to <literal>True</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:217(para)
msgid "Set the <option>filesystem_store_metadata_file</option> configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image Service, similar to:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:225(replaceable)
msgid "nfs://192.168.0.1/myGlanceExport"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:226(replaceable)
msgid "/var/lib/glance/images"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:231(para)
msgid "To use this feature, you must configure the Block Storage service, as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:235(para)
msgid "Set the <option>netapp_copyoffload_tool_path</option> configuration option to the path to the NetApp Copy Offload binary."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:240(para)
msgid "Set the <option>glance_api_version</option> configuration option to <literal>2</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:246(para)
msgid "This feature requires that:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:249(para)
msgid "The storage system must have Data ONTAP v8.2 or greater installed."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:253(para)
msgid "The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:260(para)
msgid "To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:267(para)
msgid "To download the NetApp copy offload binary to be utilized in conjunction with the <option>netapp_copyoffload_tool_path</option> configuration option, please visit the Utility Toolchest page at the <link href=\"http://mysupport.netapp.com/NOW/download/tools/ntap_openstack_nfs/\"> NetApp Support portal</link> (login is required)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:284(title)
msgid "NetApp-supported extra specs for clustered Data ONTAP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:286(para)
msgid "Extra specs enable vendors to specify extra filter criteria that the Block Storage scheduler uses when it determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with a clustered Data ONTAP storage system, you can leverage extra specs with OpenStack Block Storage volume types to ensure that OpenStack Block Storage volumes are created on storage back ends that have certain properties. For example, when you configure QoS, mirroring, or compression for a storage back end."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:296(para)
msgid "Extra specs are associated with OpenStack Block Storage volume types, so that when users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements. For example, the back ends have the available space or extra specs. You can use the specs in the following table when you define OpenStack Block Storage volume types by using the <placeholder-1/> command."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:309(title)
msgid "NetApp Data ONTAP operating in 7-Mode storage family"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:311(para)
msgid "The NetApp Data ONTAP operating in 7-Mode storage family represents a configuration group which provides OpenStack compute instances access to 7-Mode storage systems. At present it can be configured in OpenStack Block Storage to work with iSCSI and NFS storage protocols."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:317(title)
msgid "NetApp iSCSI configuration for Data ONTAP operating in 7-Mode"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:319(para)
msgid "The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for provisioning and managing the SAN block storage entity, that is, a LUN which can be accessed using iSCSI protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:325(para)
msgid "The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack to Data ONTAP operating in 7-Mode storage system and it does not require additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:333(title)
msgid "Configuration options for the Data ONTAP operating in 7-Mode storage family with iSCSI protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:336(para)
msgid "Configure the volume driver, storage family and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by setting the <option>volume_driver</option>, <option>netapp_storage_family</option> and <option>netapp_storage_protocol</option> options in <filename>cinder.conf</filename> as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:373(title)
msgid "NetApp NFS configuration for Data ONTAP operating in 7-Mode"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:375(para)
msgid "The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to Data ONTAP operating in 7-Mode storage system for provisioning and managing OpenStack volumes on NFS exports provided by the Data ONTAP operating in 7-Mode storage system which can then be accessed using NFS protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:382(para)
msgid "The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack Block Storage to the Data ONTAP operating in 7-Mode instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:390(title)
msgid "Configuration options for the Data ONTAP operating in 7-Mode family with NFS protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:393(para)
msgid "Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting the <option>volume_driver</option>, <option>netapp_storage_family</option> and <option>netapp_storage_protocol</option> options in <filename>cinder.conf</filename> as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:413(para)
msgid "Additional NetApp NFS configuration options are shared with the generic NFS driver. For a description of these, see <xref linkend=\"config_table_cinder_storage_nfs\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:431(title)
msgid "NetApp E-Series storage family"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:432(para)
msgid "The NetApp E-Series storage family represents a configuration group which provides OpenStack compute instances access to E-Series storage systems. At present it can be configured in OpenStack Block Storage to work with the iSCSI storage protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:438(title)
msgid "NetApp iSCSI configuration for E-Series"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:439(para)
msgid "The NetApp iSCSI configuration for E-Series is an interface from OpenStack to E-Series storage systems for provisioning and managing the SAN block storage entity; that is, a NetApp LUN which can be accessed using the iSCSI protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:444(para)
msgid "The iSCSI configuration for E-Series is an interface from OpenStack Block Storage to the E-Series proxy instance and as such requires the deployment of the proxy instance in order to achieve the desired functionality. The driver uses REST APIs to interact with the E-Series proxy instance, which in turn interacts directly with the E-Series controllers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:450(para)
msgid "The use of multipath and DM-MP are required when using the OpenStack Block Storage driver for E-Series. In order for OpenStack Block Storage and OpenStack Compute to take advantage of multiple paths, the following configuration options must be correctly configured:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:457(para)
msgid "The <option>use_multipath_for_image_xfer</option> option should be set to <literal>True</literal> in the <filename>cinder.conf</filename> file within the driver-specific stanza (for example, <literal>[<replaceable>myDriver</replaceable>]</literal>)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:464(para)
msgid "The <option>iscsi_use_multipath</option> option should be set to <literal>True</literal> in the <filename>nova.conf</filename> file within the <literal>[libvirt]</literal> stanza."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:471(title)
msgid "Configuration options for E-Series storage family with iSCSI protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:473(para)
msgid "Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, E-Series, and iSCSI respectively by setting the <option>volume_driver</option>, <option>netapp_storage_family</option> and <option>netapp_storage_protocol</option> options in <filename>cinder.conf</filename> as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:488(replaceable)
msgid "1.2.3.4,5.6.7.8"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:489(replaceable)
msgid "arrayPassword"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:490(replaceable)
msgid "pool1,pool2"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:493(para)
msgid "To use the E-Series driver, you must override the default value of <option>netapp_storage_family</option> with <literal>eseries</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:522(title)
msgid "Upgrading prior NetApp drivers to the NetApp unified driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:524(para)
msgid "NetApp introduced a new unified block storage driver in Havana for configuring different storage families and storage protocols. This requires defining upgrade path for NetApp drivers which existed in releases prior to Havana. This section covers the upgrade configuration for NetApp drivers to the new unified configuration and a list of deprecated NetApp drivers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:532(title)
msgid "Upgraded NetApp drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:533(para)
msgid "This section describes how to update OpenStack Block Storage configuration from a pre-Havana release to the unified driver format."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:537(title)
msgid "Driver upgrade configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:540(para)
msgid "NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:543(para) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:553(para)
msgid "NetApp unified driver configuration."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:550(para)
msgid "NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:560(para)
msgid "NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:564(para) ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:575(para)
msgid "NetApp unified driver configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:571(para)
msgid "NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:586(title)
msgid "Deprecated NetApp drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:587(para)
msgid "This section lists the NetApp drivers in earlier releases that are deprecated in Havana."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:591(para)
msgid "NetApp iSCSI driver for clustered Data ONTAP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:596(para)
msgid "NetApp NFS driver for clustered Data ONTAP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:601(para)
msgid "NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:606(para)
msgid "NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml:612(para)
msgid "For support information on deprecated NetApp drivers in the Havana release, visit the <link href=\"http://netapp.github.io/openstack-deploy-ops-guide/\"> NetApp OpenStack Deployment and Operations Guide</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/lvm-volume-driver.xml:6(title)
msgid "LVM"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/lvm-volume-driver.xml:7(para)
msgid "The default volume back-end uses local volumes managed by LVM."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/lvm-volume-driver.xml:8(para)
msgid "This driver supports different transport protocols to attach volumes, currently iSCSI and iSER."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/lvm-volume-driver.xml:10(para)
msgid "Set the following in your <filename>cinder.conf</filename>, and use the following options to configure for iSCSI transport:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/lvm-volume-driver.xml:16(para)
msgid "and for the iSER transport:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:4(title)
msgid "XenAPI Storage Manager volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:5(para)
msgid "The Xen Storage Manager volume driver (xensm) is a XenAPI hypervisor specific volume driver, and can be used to provide basic storage functionality, including volume creation and destruction, on a number of different storage back-ends. It also enables the capability of using more sophisticated storage back-ends for operations like cloning/snapshots, and so on. Some of the storage plug-ins that are already supported in Citrix XenServer and Xen Cloud Platform (XCP) are:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:15(para)
msgid "NFS VHD: Storage repository (SR) plug-in that stores disks as Virtual Hard Disk (VHD) files on a remote Network File System (NFS)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:20(para)
msgid "Local VHD on LVM: SR plug-in that represents disks as VHD disks on Logical Volumes (LVM) within a locally-attached Volume Group."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:25(para)
msgid "HBA LUN-per-VDI driver: SR plug-in that represents Logical Units (LUs) as Virtual Disk Images (VDIs) sourced by host bus adapters (HBAs). For example, hardware-based iSCSI or FC support."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:31(para)
msgid "NetApp: SR driver for mapping of LUNs to VDIs on a NETAPP server, providing use of fast snapshot and clone features on the filer."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:36(para)
msgid "LVHD over FC: SR plug-in that represents disks as VHDs on Logical Volumes within a Volume Group created on an HBA LUN. For example, hardware-based iSCSI or FC support."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:42(para)
msgid "iSCSI: Base ISCSI SR driver, provides a LUN-per-VDI. Does not support creation of VDIs but accesses existing LUNs on a target."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:47(para)
msgid "LVHD over iSCSI: SR plug-in that represents disks as Logical Volumes within a Volume Group created on an iSCSI LUN."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:52(para)
msgid "EqualLogic: SR driver for mapping of LUNs to VDIs on a EQUALLOGIC array group, providing use of fast snapshot and clone features on the array."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:58(title)
msgid "Design and operation"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:60(title)
msgid "Definitions"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:63(para)
msgid "<emphasis role=\"bold\">Back-end:</emphasis> A term for a particular storage back-end. This could be iSCSI, NFS, NetApp, and so on."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:68(para)
msgid "<emphasis role=\"bold\">Back-end-config:</emphasis> All the parameters required to connect to a specific back-end. For example, for NFS, this would be the server, path, and so on."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:75(para)
msgid "<emphasis role=\"bold\">Flavor:</emphasis> This term is equivalent to volume \"types\". A user friendly term to specify some notion of quality of service. For example, \"gold\" might mean that the volumes use a back-end where backups are possible. A flavor can be associated with multiple back-ends. The volume scheduler, with the help of the driver, decides which back-end is used to create a volume of a particular flavor. Currently, the driver uses a simple \"first-fit\" policy, where the first back-end that can successfully create this volume is the one that is used."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:93(title)
msgid "Operation"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:94(para)
msgid "The admin uses the nova-manage command detailed below to add flavors and back-ends."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:96(para)
msgid "One or more <systemitem class=\"service\">cinder-volume</systemitem> service instances are deployed for each availability zone. When an instance is started, it creates storage repositories (SRs) to connect to the back-ends available within that zone. All <systemitem class=\"service\">cinder-volume</systemitem> instances within a zone can see all the available back-ends. These instances are completely symmetric and hence should be able to service any <literal>create_volume</literal> request within the zone."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:108(title)
msgid "On XenServer, PV guests required"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:109(para)
msgid "Note that when using XenServer you can only attach a volume to a PV guest."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:115(title)
msgid "Configure XenAPI Storage Manager"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:117(title) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:331(title)
msgid "Prerequisites"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:120(para)
msgid "xensm requires that you use either Citrix XenServer or XCP as the hypervisor. The NetApp and EqualLogic back-ends are not supported on XCP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:126(para)
msgid "Ensure all <emphasis role=\"bold\">hosts</emphasis> running volume and Compute services have connectivity to the storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:141(systemitem) ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:181(systemitem) ./doc/config-reference/compute/section_nova-log-files.xml:66(systemitem)
msgid "nova-compute"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:138(emphasis)
msgid "Set the following configuration options for the nova volume service: (<placeholder-1/> also requires the volume_driver configuration option.)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:152(emphasis)
msgid "You must create the back-end configurations that the volume driver uses before you start the volume service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:161(para)
msgid "SR type and configuration connection parameters are in keeping with the <link href=\"http://support.citrix.com/article/CTX124887\">XenAPI Command Line Interface</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:168(para)
msgid "Example: For the NFS storage manager plug-in, run these commands:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:177(emphasis)
msgid "Start <placeholder-1/> and <placeholder-2/> with the new configuration options."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:188(title)
msgid "Create and access the volumes from VMs"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:189(para)
msgid "Currently, the flavors have not been tied to the volume types API. As a result, we simply end up creating volumes in a \"first fit\" order on the given back-ends."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml:193(para)
msgid "Use the standard <placeholder-1/> or OpenStack API commands (such as volume extensions) to create, destroy, attach, or detach volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-msa-driver.xml:4(title)
msgid "HP MSA Fibre Channel driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-msa-driver.xml:5(para)
msgid "The HP MSA fiber channel driver runs volume operations on the storage array over HTTP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-msa-driver.xml:7(para)
msgid "A VDisk must be created on the HP MSA array first. This can be done using the web interface or the command-line interface of the array."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-msa-driver.xml:9(para)
msgid "The following options must be defined in the <systemitem>cinder-volume</systemitem> configuration file (<filename>/etc/cinder/cinder.conf</filename>):"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-msa-driver.xml:14(para)
msgid "Set the <option>volume_driver</option> option to <literal>cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-msa-driver.xml:19(para)
msgid "Set the <option>san_ip</option> option to the hostname or IP address of your HP MSA array."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-msa-driver.xml:23(para)
msgid "Set the <option>san_login</option> option to the login of an existing user of the HP MSA array."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-msa-driver.xml:28(para)
msgid "Set the <option>san_password</option> option to the password for this user."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:6(title)
msgid "Dell EqualLogic volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:7(para)
msgid "The Dell EqualLogic volume driver interacts with configured EqualLogic arrays and supports various operations."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:21(para)
msgid "The OpenStack Block Storage service supports:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:23(para)
msgid "Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group Storage Pools and multiple pools on a single array."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:26(para)
msgid "Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group Storage Pools or multiple pools on a single array."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:30(para)
msgid "The Dell EqualLogic volume driver's ability to access the EqualLogic Group is dependent upon the generic block storage driver's SSH settings in the <filename>/etc/cinder/cinder.conf</filename> file (see <xref linkend=\"section_block-storage-sample-configuration-files\"/> for reference)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:35(para)
msgid "The following sample <filename>/etc/cinder/cinder.conf</filename> configuration lists the relevant settings for a typical Block Storage service using a single Dell EqualLogic Group:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:38(title)
msgid "Default (single-instance) configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:43(replaceable) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:66(term)
msgid "IP_EQLX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:44(replaceable) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:73(term)
msgid "SAN_UNAME"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:45(replaceable) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:80(term)
msgid "SAN_PW"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:46(replaceable) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:88(term)
msgid "EQLX_GROUP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:47(replaceable) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:95(term)
msgid "EQLX_POOL"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:51(replaceable) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:52(replaceable)
msgid "true|false"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:53(replaceable) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:104(term)
msgid "EQLX_UNAME"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:54(replaceable) ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:112(term)
msgid "EQLX_PW"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:59(replaceable)
msgid "SAN_KEY_PATH"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:63(para)
msgid "In this example, replace the following variables accordingly:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:68(para)
msgid "The IP address used to reach the Dell EqualLogic Group through SSH. This field has no default value."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:75(para)
msgid "The user name to login to the Group manager via SSH at the <parameter>san_ip</parameter>. Default user name is <literal>grpadmin</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:82(para)
msgid "The corresponding password of <replaceable>SAN_UNAME</replaceable>. Not used when <parameter>san_private_key</parameter> is set. Default password is <literal>password</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:90(para)
msgid "The group to be used for a pool where the Block Storage service will create volumes and snapshots. Default group is <literal>group-0</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:97(para)
msgid "The pool where the Block Storage service will create volumes and snapshots. Default pool is <literal>default</literal>. This option cannot be used for multiple pools utilized by the Block Storage service on a single Dell EqualLogic Group."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:106(para)
msgid "The CHAP login account for each volume in a pool, if <parameter>eqlx_use_chap</parameter> is set to <literal>true</literal>. Default account name is <literal>chapadmin</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:114(para)
msgid "The corresponding password of <replaceable>EQLX_UNAME</replaceable>. The default password is randomly generated in hexadecimal, so you must set this password manually."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:120(term)
msgid "SAN_KEY_PATH (optional)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml:122(para)
msgid "The filename of the private key used for SSH authentication. This provides password-less login to the EqualLogic Group. Not used when <parameter>san_password</parameter> is set. There is no default value."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:7(title)
msgid "EMC XtremIO OpenStack Block Storage driver guide"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:9(para)
msgid "The high performance XtremIO All Flash Array (<acronym>AFA</acronym>) offers Block Storage services to OpenStack. Using the driver, OpenStack Block Storage hosts can connect to an XtermIO Storage cluster."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:11(para)
msgid "This section explains how to configure and connect an OpenStack Block Storage host to an XtremIO Storage Cluster"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:13(title)
msgid "Support matrix"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:16(para)
msgid "Xtremapp: Version 3.0 and above"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:24(para)
msgid "Create, delete, clone, attach, and detach volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:27(para)
msgid "Create and delete volume snapshots"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:30(para)
msgid "Create a volume from a snapshot"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:33(para)
msgid "Copy an image to a volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:36(para)
msgid "Copy a volume to an image"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:39(para)
msgid "Extend a volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:44(title)
msgid "Driver installation and configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:45(para)
msgid "The following sections describe the installation and configuration of the EMC XtremIO OpenStack Block Storage driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:46(para)
msgid "The driver should be installed on the Block Storage host that has the cinder-volume component."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:49(title)
msgid "Installation"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:51(title)
msgid "To install the EMC XtremIO Block Storage driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:53(para)
msgid "Configure the XtremIO Block Storage driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:56(para)
msgid "Restart cinder."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:59(para)
msgid "When CHAP initiator authentication is required, set the Cluster CHAP authentication mode to initiator."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:64(title)
msgid "Configuring the XtremIO Block Storage driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:65(para)
msgid "Edit the <filename>cinder.conf</filename> file by adding the configuration below under the <literal>[DEFAULT]</literal> section of the file in case of a single back end or under a separate section in case of multiple back ends (for example [<acronym>XTREMIO</acronym>]). The configuration file is usually located under the following path <filename>/etc/cinder/cinder.conf</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:69(para)
msgid "For a configuration example, refer to the configuration <link linkend=\"emc-xtremio-configuration-example\">example</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:72(title)
msgid "XtremIO driver name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:73(para)
msgid "Configure the driver name by adding the following parameter:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:76(para)
msgid "For iSCSI <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:79(para)
msgid "For Fibre Channel <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:84(title)
msgid "XtremIO management IP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:85(para)
msgid "To retrieve the management IP, use the<placeholder-1/>CLI command."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:87(replaceable)
msgid "XMS Management IP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:86(para)
msgid "Configure the management IP by adding the following parameter <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:90(title)
msgid "XtremIO user credentials"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:91(para)
msgid "OpenStack Block Storage requires an XtremIO XMS user with administrative privileges. XtremIO recommends creating a dedicated OpenStack user account that holds an administrative user role."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:93(para)
msgid "Refer to the <citetitle>XtremIO User Guide</citetitle> for details on user account management"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:94(para)
msgid "Create an XMS account using either the XMS GUI or the <placeholder-1/>CLI command."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:95(para)
msgid "Configure the user credentials by adding the following parameters:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:96(replaceable)
msgid "XMS username"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:96(code)
msgid "san_login = <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:97(replaceable)
msgid "XMS username password"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:97(code)
msgid "san_password = <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:100(title)
msgid "Multiple back ends"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:101(para)
msgid "Configuring multiple storage back ends enables you to create several back-end storage solutions that serve the same OpenStack Compute resources."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:102(para)
msgid "When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:105(title)
msgid "To enable multiple storage back ends:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:107(para)
msgid "Add the back end name to the XtremIO configuration group section as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:108(replaceable)
msgid "XtremIO back end name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:108(code) ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:125(code)
msgid "volume_backend_name = <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:111(para)
msgid "Add the configuration group name to the <code>enabled_backends</code>flag in the <literal>[DEFAULT]</literal> section of the <filename>cinder.conf</filename>file. This flag defines the names (separated by commas) of the configuration groups for different back ends. Each name is associated to one configuration group for a back end:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:115(replaceable)
msgid "back end name1, back end name2"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:115(code)
msgid "enabled_backends = <placeholder-1/>..."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:118(para)
msgid "Define a volume type (for example<code>gold</code>) as Block Storage by running the following command:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:122(para)
msgid "Create an extra-specification (for example XtremIOAFA) to link the volume type you defined to a back end name, by running the following command:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:125(replaceable) ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:179(replaceable)
msgid "XtremIOAFA"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:128(para)
msgid "When you create a volume (for example Vol1), specify the volume type. The volume type extra-specifications are used to determine the relevant back end."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:135(title)
msgid "Setting thin provisioning and multipathing parameters"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:136(para)
msgid "To support thin provisioning and multipathing in the XtremIO Array, the following parameters from the Nova and Cinder configuration files should be modified as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:139(para)
msgid "Thin Provisioning"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:140(para)
msgid "The <code>use_cow_images</code> parameter in the<filename>nova.conf</filename>file should be set to False as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:142(code)
msgid "use_cow_images = false"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:144(para)
msgid "Multipathing"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:145(para)
msgid "The <code>use_multipath_for_image_xfer</code> parameter in the<filename>cinder.conf</filename> file should be set to True as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:147(code)
msgid "use_multipath_for_image_xfer = true"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:152(title)
msgid "Restarting OpenStack Block Storage"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:153(para)
msgid "Save the<filename>cinder.conf</filename>file and restart cinder by running the following command:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:157(title)
msgid "Configuring CHAP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:158(para)
msgid "The XtremIO Block Storage driver supports CHAP initiator authentication. If CHAP initiator authentication is required, set the CHAP Authentication mode to initiator."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:160(para)
msgid "To set the CHAP initiator mode using CLI, run the following CLI command:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:162(para)
msgid "The CHAP initiator mode can also be set via the XMS GUI"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:163(para)
msgid "Refer to <citetitle>XtremIO User Guide</citetitle> for details on CHAP configuration via GUI and CLI."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:164(para)
msgid "The CHAP initiator authentication credentials (username and password) are generated automatically by the Block Storage driver. Therefore, there is no need to configure the initial CHAP credentials manually in XMS."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:169(title)
msgid "Configuration example"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:170(subtitle)
msgid "cinder.conf example file"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:171(para)
msgid "You can update the<filename>cinder.conf</filename>file by editing the necessary parameters as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:172(literal)
msgid "[Default]"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:176(replaceable)
msgid "10.10.10.20"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:177(replaceable)
msgid "admin"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-xtremio-driver.xml:178(replaceable)
msgid "223344"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:299(None)
msgid "@@image: '../../../common/figures/coraid/Repository_Creation_Plan_screen.png'; md5=83038804978648c2db4001a46c11f8ba"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:10(title)
msgid "Coraid AoE driver configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:11(para)
msgid "Coraid storage appliances can provide block-level storage to OpenStack instances. Coraid storage appliances use the low-latency ATA-over-Ethernet (ATA) protocol to provide high-bandwidth data transfer between hosts and data on the network."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:39(para)
msgid "This document describes how to configure the OpenStack Block Storage service for use with Coraid storage appliances."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:42(title) ./doc/config-reference/compute/section_introduction-to-xen.xml:14(title)
msgid "Terminology"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:43(para)
msgid "These terms are used in this section:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:47(th)
msgid "Term"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:48(th)
msgid "Definition"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:53(td)
msgid "AoE"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:54(td)
msgid "ATA-over-Ethernet protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:57(td)
msgid "EtherCloud Storage Manager (ESM)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:58(td)
msgid "ESM provides live monitoring and management of EtherDrive appliances that use the AoE protocol, such as the SRX and VSX."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:63(td)
msgid "Fully-Qualified Repository Name (FQRN)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:66(replaceable)
msgid "performance_class"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:66(replaceable)
msgid "availability_class"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:66(replaceable)
msgid "profile_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:66(replaceable)
msgid "repository_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:64(td)
msgid "The FQRN is the full identifier of a storage profile. FQRN syntax is: <placeholder-1/><placeholder-2/><placeholder-3/><placeholder-4/><placeholder-5/><placeholder-6/><placeholder-7/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:69(td)
msgid "SAN"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:70(td)
msgid "Storage Area Network"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:73(td)
msgid "SRX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:74(td)
msgid "Coraid EtherDrive SRX block storage appliance"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:77(td)
msgid "VSX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:78(td)
msgid "Coraid EtherDrive VSX storage virtualization appliance"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:86(para)
msgid "To support the OpenStack Block Storage service, your SAN must include an SRX for physical storage, a VSX running at least CorOS v2.0.6 for snapshot support, and an ESM running at least v2.1.1 for storage repository orchestration. Ensure that all storage appliances are installed and connected to your network before you configure OpenStack volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:92(para)
msgid "In order for the node to communicate with the SAN, you must install the Coraid AoE Linux driver on each Compute node on the network that runs an OpenStack instance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:97(title)
msgid "Overview"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:98(para)
msgid "To configure the OpenStack Block Storage for use with Coraid storage appliances, perform the following procedures:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:102(para)
msgid "<link linkend=\"coraid_installing_aoe_driver\">Download and install the Coraid Linux AoE driver</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:106(para)
msgid "<link linkend=\"coraid_creating_storage_profile\">Create a storage profile by using the Coraid ESM GUI</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:110(para)
msgid "<link linkend=\"coraid_creating_storage_repository\">Create a storage repository by using the ESM GUI and record the FQRN</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:115(para)
msgid "<link linkend=\"coraid_configuring_cinder.conf\">Configure the <filename>cinder.conf</filename> file</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:119(para)
msgid "<link linkend=\"coraid_creating_associating_volume_type\">Create and associate a block storage volume type</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:126(title)
msgid "Install the Coraid AoE driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:127(para)
msgid "Install the Coraid AoE driver on every compute node that will require access to block storage."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:129(para)
msgid "The latest AoE drivers will always be located at <link href=\"http://support.coraid.com/support/linux/\">http://support.coraid.com/support/linux/</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:132(para)
msgid "To download and install the AoE driver, follow the instructions below, replacing “aoeXXX” with the AoE driver file name:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:137(para)
msgid "Download the latest Coraid AoE driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:143(para)
msgid "Unpack the AoE driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:146(para)
msgid "Install the AoE driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:152(para)
msgid "Initialize the AoE driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:158(para)
msgid "Optionally, specify the Ethernet interfaces that the node can use to communicate with the SAN."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:160(para)
msgid "The AoE driver may use every Ethernet interface available to the node unless limited with the <literal>aoe_iflist</literal> parameter. For more information about the <literal>aoe_iflist</literal> parameter, see the <filename>aoe readme</filename> file included with the AoE driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:167(replaceable)
msgid "eth1 eth2 ..."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:173(title)
msgid "Create a storage profile"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:174(para)
msgid "To create a storage profile using the ESM GUI:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:177(para)
msgid "Log in to the ESM."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:180(para)
msgid "Click <guibutton>Storage Profiles</guibutton> in the <guilabel>SAN Domain</guilabel> pane."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:184(para)
msgid "Choose <guimenuitem>Menu &gt; Create Storage Profile</guimenuitem>. If the option is unavailable, you might not have appropriate permissions. Make sure you are logged in to the ESM as the SAN administrator."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:190(para)
msgid "Use the storage class selector to select a storage class."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:192(para)
msgid "Each storage class includes performance and availability criteria (see the Storage Classes topic in the ESM Online Help for information on the different options)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:197(para)
msgid "Select a RAID type (if more than one is available) for the selected profile type."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:201(para)
msgid "Type a <guilabel>Storage Profile</guilabel> name."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:202(para)
msgid "The name is restricted to alphanumeric characters, underscore (_), and hyphen (-), and cannot exceed 32 characters."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:207(para)
msgid "Select the drive size from the drop-down menu."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:210(para)
msgid "Select the number of drives to be initialized for each RAID (LUN) from the drop-down menu (if the selected RAID type requires multiple drives)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:215(para)
msgid "Type the number of RAID sets (LUNs) you want to create in the repository by using this profile."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:219(para)
msgid "Click <guibutton>Next</guibutton>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:224(title)
msgid "Create a storage repository and get the FQRN"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:225(para)
msgid "Create a storage repository and get its fully qualified repository name (FQRN):"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:229(para)
msgid "Access the <guilabel>Create Storage Repository</guilabel> dialog box."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:233(para)
msgid "Type a Storage Repository name."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:234(para)
msgid "The name is restricted to alphanumeric characters, underscore (_), hyphen (-), and cannot exceed 32 characters."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:239(para)
msgid "Click <guibutton>Limited</guibutton> or <guibutton>Unlimited</guibutton> to indicate the maximum repository size."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:242(para)
msgid "<guibutton>Limited</guibutton> sets the amount of space that can be allocated to the repository. Specify the size in TB, GB, or MB."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:245(para)
msgid "When the difference between the reserved space and the space already allocated to LUNs is less than is required by a LUN allocation request, the reserved space is increased until the repository limit is reached."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:250(para)
msgid "The reserved space does not include space used for parity or space used for mirrors. If parity and/or mirrors are required, the actual space allocated to the repository from the SAN is greater than that specified in reserved space."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:256(para)
msgid "<emphasis role=\"bold\">Unlimited</emphasis>Unlimited means that the amount of space allocated to the repository is unlimited and additional space is allocated to the repository automatically when space is required and available."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:262(para)
msgid "Drives specified in the associated Storage Profile must be available on the SAN in order to allocate additional resources."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:268(para)
msgid "Check the <guibutton>Resizeable LUN</guibutton> box."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:270(para)
msgid "This is required for OpenStack volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:272(para)
msgid "If the Storage Profile associated with the repository has platinum availability, the Resizeable LUN box is automatically checked."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:278(para)
msgid "Check the <guibutton>Show Allocation Plan API calls</guibutton> box. Click <guibutton>Next</guibutton>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:283(para)
msgid "Record the FQRN and click <guibutton>Finish</guibutton>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:285(para)
msgid "The FQRN is located in the first line of output following the <literal>Plan</literal> keyword in the <guilabel>Repository Creation Plan</guilabel> window. The FQRN syntax is <replaceable>performance_class</replaceable><placeholder-1/><replaceable>availability_class</replaceable><placeholder-2/><replaceable>profile_name</replaceable><placeholder-3/><replaceable>repository_name</replaceable>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:290(para)
msgid "In this example, the FQRN is <literal>Bronze-Platinum:BP1000:OSTest</literal>, and is highlighted."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:294(title)
msgid "Repository Creation Plan screen"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:303(para)
msgid "Record the FQRN; it is a required parameter later in the configuration procedure."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:309(title)
msgid "Configure options in the cinder.conf file"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:310(para)
msgid "Edit or add the following lines to the file<filename> /etc/cinder/cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:313(replaceable)
msgid "ESM_IP_address"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:315(replaceable)
msgid "Access_Control_Group_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:317(replaceable) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:365(replaceable) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:383(replaceable)
msgid "coraid_repository_key"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:319(para)
msgid "Access to storage devices and storage repositories can be controlled using Access Control Groups configured in ESM. Configuring <filename>cinder.conf</filename> to log on to ESM as the SAN administrator (user name <literal>admin</literal>), will grant full access to the devices and repositories configured in ESM."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:325(para)
msgid "Optionally, you can configure an ESM Access Control Group and user. Then, use the <filename>cinder.conf</filename> file to configure access to the ESM through that group, and user limits access from the OpenStack instance to devices and storage repositories that are defined in the group."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:330(para)
msgid "To manage access to the SAN by using Access Control Groups, you must enable the Use Access Control setting in the <emphasis role=\"bold\">ESM System Setup</emphasis> &gt;<emphasis role=\"bold\"> Security</emphasis> screen."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:334(para)
msgid "For more information, see the ESM Online Help."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:337(title)
msgid "Create and associate a volume type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:338(para)
msgid "Create and associate a volume with the ESM storage repository."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:342(para)
msgid "Restart Cinder."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:348(para)
msgid "Create a volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:349(replaceable)
msgid "volume_type_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:350(para)
msgid "where <replaceable>volume_type_name</replaceable> is the name you assign the volume. You will see output similar to the following:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:358(para)
msgid "Record the value in the ID field; you use this value in the next step."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:362(para)
msgid "Associate the volume type with the Storage Repository."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:365(replaceable) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:376(replaceable)
msgid "UUID"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:365(replaceable) ./doc/config-reference/block-storage/drivers/coraid-driver.xml:391(replaceable)
msgid "FQRN"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:370(th)
msgid "Variable"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:377(td)
msgid "The ID returned from the <placeholder-1/> command. You can use the <placeholder-2/> command to recover the ID."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:388(literal)
msgid "coraid_repository"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:384(td)
msgid "The key name used to associate the Cinder volume type with the ESM in the <placeholder-1/> file. If no key name was defined, this is default value for <placeholder-2/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml:392(td)
msgid "The FQRN recorded during the Create Storage Repository process."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:8(title)
msgid "EMC VMAX iSCSI and FC drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:9(para)
msgid "The EMC VMAX drivers, <literal>EMCVMAXISCSIDriver</literal> and <literal>EMCVMAXFCDriver</literal>, support the use of EMC VMAX storage arrays under OpenStack Block Storage. They both provide equivalent functions and differ only in support for their respective host attachment methods."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:14(para)
msgid "The drivers perform volume operations by communicating with the backend VMAX storage. It uses a CIM client in Python called PyWBEM to perform CIM operations over HTTP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:17(para)
msgid "The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back-end for VMAX storage operations."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:21(para)
msgid "The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports the VMAX storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:26(para)
msgid "EMC SMI-S Provider V4.6.2.8 and higher is required. You can download SMI-S from the <link href=\"https://support.emc.com\">EMC's support </link> web site (login is required). See the EMC SMI-S Provider release notes for installation instructions."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:32(para)
msgid "EMC storage VMAX Family is supported."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:36(para)
msgid "VMAX drivers support these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:63(para)
msgid "VMAX drivers also support the following features:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:66(para)
msgid "FAST automated storage tiering policy."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:69(para)
msgid "Dynamic masking view creation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:72(para)
msgid "Striped volume creation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:78(title)
msgid "Set up the VMAX drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:80(title)
msgid "To set up the EMC VMAX drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:82(para)
msgid "Install the <package>python-pywbem</package> package for your distribution. See <xref linkend=\"install-pywbem\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:87(para)
msgid "Download SMI-S from PowerLink and install it. Add your VMAX arrays to SMI-S."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:89(para)
msgid "For information, see <xref linkend=\"setup-smi-s\"/> and the SMI-S release notes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:94(para)
msgid "Change configuration files. See <xref linkend=\"emc-config-file\"/> and <xref linkend=\"emc-config-file-2\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:99(para)
msgid "Configure connectivity. For FC driver, see <xref linkend=\"configuring-connectivity-fc\"/>. For iSCSI driver, see <xref linkend=\"configuring-connectivity-iscsi\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:126(title)
msgid "Set up SMI-S"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:127(para)
msgid "You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S is ESX only. See the EMC SMI-S Provider release notes for more information on supported platforms and installation instructions."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:137(para)
msgid "You must discover storage arrays on the SMI-S server before you can use the VMAX drivers. Follow instructions in the SMI-S release notes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:142(para)
msgid "SMI-S is usually installed at <filename>/opt/emc/ECIM/ECOM/bin</filename> on Linux and <filename>C:\\Program Files\\EMC\\ECIM\\ECOM\\bin</filename> on Windows. After you install and configure SMI-S, go to that directory and type <placeholder-1/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:149(para)
msgid "Use <placeholder-1/> in <placeholder-2/> to add an array. Use <placeholder-3/> and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:157(title)
msgid "<filename>cinder.conf</filename> configuration file"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:161(para)
msgid "Add the following entries, where <literal>10.10.61.45</literal> is the IP address of the VMAX iSCSI target:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:174(para)
msgid "In this example, two backend configuration groups are enabled: <literal>CONF_GROUP_ISCSI</literal> and <literal>CONF_GROUP_FC</literal>. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format <filename> /etc/cinder/cinder_emc_config_[confGroup].xml</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:184(para)
msgid "Once the <filename>cinder.conf</filename> and EMC-specific configuration files have been created, cinder commands need to be issued in order to create and associate OpenStack volume types with the declared volume_backend_names:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:193(para)
msgid "By issuing these commands, the Block Storage volume type <literal>VMAX_ISCSI</literal> is associated with the ISCSI_backend, and the type <literal>VMAX_FC</literal> is associated with the FC_backend."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:197(para)
msgid "Restart the <systemitem class=\"service\"> cinder-volume</systemitem> service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:201(title)
msgid "<filename>cinder_emc_config_CONF_GROUP_ISCSI.xml </filename> configuration file"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:203(para)
msgid "Create the <filename> /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml </filename> file. You do not need to restart the service for this change."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:225(para)
msgid "<systemitem>EcomServerIp</systemitem> and <systemitem>EcomServerPort</systemitem> are the IP address and port number of the ECOM server which is packaged with SMI-S."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:231(para)
msgid "<systemitem>EcomUserName</systemitem> and <systemitem>EcomPassword</systemitem> are credentials for the ECOM server."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:236(para)
msgid "<systemitem>PortGroups</systemitem> supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have sufficient number and distribution of ports (across directors and switches) as to ensure adequate bandwidth and failure protection for the volume connections. <systemitem>PortGroups</systemitem> can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the <systemitem>PortGroup</systemitem> list, to evenly distribute load across the set of groups provided. Make sure that the <systemitem>PortGroups</systemitem> set contains either all FC or all iSCSI port groups (for a given backend), as appropriate for the configured driver (iSCSI or FC)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:258(para)
msgid "The <systemitem>Array</systemitem> tag holds the unique VMAX array serial number."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:262(para)
msgid "The <systemitem>Pool</systemitem> tag holds the unique pool name within a given array. For backends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For backends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:271(para)
msgid "The <systemitem>FastPolicy</systemitem> tag conveys the name of the FAST Policy to be used. By including this tag, volumes managed by this backend are treated as under FAST control. Omitting the <systemitem>FastPolicy</systemitem> tag means FAST is not enabled on the provided storage pool."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:282(title)
msgid "FC Zoning with VMAX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:283(para)
msgid "Zone Manager is recommended when using the VMAX FC driver, especially for larger configurations where pre-zoning would be too complex and open-zoning would raise security concerns."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:289(title)
msgid "iSCSI with VMAX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:292(para)
msgid "Make sure the <package>iscsi-initiator-utils </package> package is installed on the host (use apt-get, zypper, or yum, depending on Linux flavor)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:298(para)
msgid "Verify host is able to ping VMAX iSCSI target ports."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:305(title)
msgid "VMAX masking view and group naming info"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:307(title)
msgid "Masking view names"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:308(para)
msgid "Masking views are dynamically created by the VMAX FC and iSCSI drivers using the following naming conventions:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:315(title)
msgid "Initiator group names"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:316(para)
msgid "For each host that is attached to VMAX volumes using the drivers, an initiator group is created or re-used (per attachment type). All initiators of the appropriate type known for that host are included in the group. At each new attach volume operation, the VMAX driver retrieves the initiators (either WWNNs or IQNs) from OpenStack and adds or updates the contents of the Initiator Group as required. Names are of the following format:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:327(para)
msgid "Hosts attaching to VMAX storage managed by the OpenStack environment cannot also be attached to storage on the same VMAX not being managed by OpenStack. This is due to limitations on VMAX Initiator Group membership."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:335(title)
msgid "FA port groups"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:336(para)
msgid "VMAX array FA ports to be used in a new masking view are chosen from the list provided in the EMC configuration file."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:341(title)
msgid "Storage group names"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:342(para)
msgid "As volumes are attached to a host, they are either added to an existing storage group (if it exists) or a new storage group is created and the volume is then added. Storage groups contain volumes created from a pool (either single-pool or FAST-controlled), attached to a single host, over a single connection type (iSCSI or FC). Names are formed:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:354(title)
msgid "Concatenated or striped volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:355(para)
msgid "In order to support later expansion of created volumes, the VMAX Block Storage drivers create concatenated volumes as the default layout. If later expansion is not required, users can opt to create striped volumes in order to optimize I/O performance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-vmax-driver.xml:360(para)
msgid "Below is an example of how to create striped volumes. First, create a volume type. Then define the extra spec for the volume type <literal>storagetype:stripecount</literal> representing the number of meta members in the striped volume. The example below means that each volume created under the <literal>GoldStriped</literal> volume type will be striped and made up of 4 meta members."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:6(title)
msgid "VMware VMDK driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:8(para)
msgid "Use the VMware VMDK driver to enable management of the OpenStack Block Storage volumes on vCenter-managed data stores. Volumes are backed by VMDK files on data stores that use any VMware-compatible storage technology such as NFS, iSCSI, FiberChannel, and vSAN."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:14(para)
msgid "The VMware ESX VMDK driver is deprecated as of the Icehouse release and might be removed in Juno or a subsequent release. The VMware vCenter VMDK driver continues to be fully supported."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:20(title)
msgid "Functional context"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:21(para)
msgid "The VMware VMDK driver connects to vCenter, through which it can dynamically access all the data stores visible from the ESX hosts in the managed cluster."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:24(para)
msgid "When you create a volume, the VMDK driver creates a VMDK file on demand. The VMDK file creation completes only when the volume is subsequently attached to an instance, because the set of data stores visible to the instance determines where to place the volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:29(para)
msgid "The running vSphere VM is automatically reconfigured to attach the VMDK file as an extra disk. Once attached, you can log in to the running vSphere VM to rescan and discover this extra disk."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:36(para)
msgid "The recommended volume driver for OpenStack Block Storage is the VMware vCenter VMDK driver. When you configure the driver, you must match it with the appropriate OpenStack Compute driver from VMware and both drivers must point to the same server."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:41(para)
msgid "In the <filename>nova.conf</filename> file, use this option to define the Compute driver:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:44(para)
msgid "In the <filename>cinder.conf</filename> file, use this option to define the volume driver:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:47(para)
msgid "The following table lists various options that the drivers support for the OpenStack Block Storage configuration (<filename>cinder.conf</filename>):"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:53(title) ./doc/config-reference/compute/section_hypervisor_vmware.xml:660(th)
msgid "VMDK disk type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:54(para)
msgid "The VMware VMDK drivers support the creation of VMDK disk files of type <literal>thin</literal>, <literal>lazyZeroedThick</literal>, or <literal>eagerZeroedThick</literal>. Use the <code>vmware:vmdk_type</code> extra spec key with the appropriate value to specify the VMDK disk file type. The following table captures the mapping between the extra spec entry and the VMDK disk file type:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:63(caption)
msgid "Extra spec entry to VMDK disk file type mapping"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:67(td)
msgid "Disk file type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:68(td) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:114(td)
msgid "Extra spec key"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:69(td) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:115(td)
msgid "Extra spec value"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:74(td) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:76(td) ./doc/config-reference/compute/section_hypervisor_vmware.xml:671(td)
msgid "thin"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:75(td) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:80(td) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:85(td)
msgid "vmware:vmdk_type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:79(td)
msgid "lazyZeroedThick"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:81(td)
msgid "thick"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:84(td) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:86(td)
msgid "eagerZeroedThick"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:90(para)
msgid "If you do not specify a <code>vmdk_type</code> extra spec entry, the default disk file type is <literal>thin</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:93(para)
msgid "The following example shows how to create a <code>lazyZeroedThick</code> VMDK volume by using the appropriate <code>vmdk_type</code>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:101(title) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:113(td)
msgid "Clone type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:102(para)
msgid "With the VMware VMDK drivers, you can create a volume from another source volume or a snapshot point. The VMware vCenter VMDK driver supports the <literal>full</literal> and <literal>linked/fast</literal> clone types. Use the <code>vmware:clone_type</code> extra spec key to specify the clone type. The following table captures the mapping for clone types:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:110(caption)
msgid "Extra spec entry to clone type mapping"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:120(td) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:122(td)
msgid "full"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:121(td) ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:126(td)
msgid "vmware:clone_type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:125(td)
msgid "linked/fast"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:127(td)
msgid "linked"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:131(para)
msgid "If you do not specify the clone type, the default is <literal>full</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:133(para)
msgid "The following example shows linked cloning from another source volume:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:139(para)
msgid "The VMware ESX VMDK driver ignores the extra spec entry and always creates a <literal>full</literal> clone."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:145(title)
msgid "Use vCenter storage policies to specify back-end data stores"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:148(para)
msgid "This section describes how to configure back-end data stores using storage policies. In vCenter, you can create one or more storage policies and expose them as a Block Storage volume-type to a vmdk volume. The storage policies are exposed to the vmdk driver through the extra spec property with the <literal>vmware:storage_profile</literal> key."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:155(para)
msgid "For example, assume a storage policy in vCenter named <literal>gold_policy.</literal> and a Block Storage volume type named <literal>vol1</literal> with the extra spec key <literal>vmware:storage_profile</literal> set to the value <literal>gold_policy</literal>. Any Block Storage volume creation that uses the <literal>vol1</literal> volume type places the volume only in data stores that match the <literal>gold_policy</literal> storage policy."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:164(para)
msgid "The Block Storage back-end configuration for vSphere data stores is automatically determined based on the vCenter configuration. If you configure a connection to connect to vCenter version 5.5 or later in the <filename>cinder.conf</filename> file, the use of storage policies to configure back-end data stores is automatically supported."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:172(para)
msgid "You must configure any data stores that you configure for the Block Storage service for the Compute service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:177(title)
msgid "To configure back-end data stores by using storage policies"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:180(para)
msgid "In vCenter, tag the data stores to be used for the back end."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:182(para)
msgid "OpenStack also supports policies that are created by using vendor-specific capabilities; for example vSAN-specific storage policies."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:186(para)
msgid "The tag value serves as the policy. For details, see <xref linkend=\"vmware-spbm\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:192(para)
msgid "Set the extra spec key <literal>vmware:storage_profile</literal> in the desired Block Storage volume types to the policy name that you created in the previous step."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:199(para)
msgid "Optionally, for the <parameter>vmware_host_version</parameter> parameter, enter the version number of your vSphere platform. For example, <placeholder-1/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:204(para)
msgid "This setting overrides the default location for the corresponding WSDL file. Among other scenarios, you can use this setting to prevent WSDL error messages during the development phase or to work with a newer version of vCenter."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:211(para) ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:118(para) ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:181(para)
msgid "Complete the other vCenter configuration parameters as appropriate."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:216(para)
msgid "The following considerations apply to configuring SPBM for the Block Storage service:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:220(para)
msgid "Any volume that is created without an associated policy (that is to say, without an associated volume type that specifies <literal>vmware:storage_profile</literal> extra spec), there is no policy-based placement for that volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:232(para)
msgid "The VMware vCenter and ESX VMDK drivers support these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:238(para)
msgid "When a volume is attached to an instance, a reconfigure operation is performed on the instance to add the volume's VMDK to it. The user must manually rescan and mount the device from within the guest operating system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:247(para)
msgid "Allowed only if volume is not attached to an instance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:257(para)
msgid "Only images in <literal>vmdk</literal> disk format with <literal>bare</literal> container format are supported. The <option>vmware_disktype</option> property of the image can be <literal>preallocated</literal>, <literal>sparse</literal>, <literal>streamOptimized</literal> or <literal>thin</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:272(para)
msgid "Allowed only if the volume is not attached to an instance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:276(para)
msgid "This operation creates a <literal>streamOptimized</literal> disk image."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:285(para)
msgid "Supported only if the source volume is not attached to an instance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:290(para)
msgid "Backup a volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:292(para)
msgid "This operation creates a backup of the volume in <literal>streamOptimized</literal> disk format."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:297(para)
msgid "Restore backup to new or existing volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:299(para)
msgid "Supported only if the existing volume doesn't contain snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:304(para)
msgid "Change the type of a volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:306(para)
msgid "This operation is supported only if the volume state is <literal>available</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:313(para)
msgid "Although the VMware ESX VMDK driver supports these operations, it has not been extensively tested."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:319(title)
msgid "Storage policy-based configuration in vCenter"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:320(para)
msgid "You can configure Storage Policy-Based Management (SPBM) profiles for vCenter data stores supporting the Compute, Image Service, and Block Storage components of an OpenStack implementation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:324(para)
msgid "In a vSphere OpenStack deployment, SPBM enables you to delegate several data stores for storage, which reduces the risk of running out of storage space. The policy logic selects the data store based on accessibility and available storage space."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:334(para)
msgid "Determine the data stores to be used by the SPBM policy."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:338(para)
msgid "Determine the tag that identifies the data stores in the OpenStack component configuration."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:343(para)
msgid "Create separate policies or sets of data stores for separate OpenStack components."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:349(title)
msgid "Create storage policies in vCenter"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:351(title)
msgid "To create storage policies in vCenter"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:353(para)
msgid "In vCenter, create the tag that identifies the data stores:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:357(para)
msgid "From the Home screen, click <guimenuitem>Tags</guimenuitem>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:361(para)
msgid "Specify a name for the tag."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:364(para)
msgid "Specify a tag category. For example, <filename>spbm-cinder</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:370(para) ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:133(para)
msgid "Apply the tag to the data stores to be used by the SPBM policy."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:373(para) ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:143(para)
msgid "For details about creating tags in vSphere, see the <link href=\"http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-379F40D3-8CD6-449E-89CB-79C4E2683221.html\">vSphere documentation</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:380(para)
msgid "In vCenter, create a tag-based storage policy that uses one or more tags to identify a set of data stores."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:384(para)
msgid "You use this tag name and category when you configure the <filename>*.conf</filename> file for the OpenStack component. For details about creating tags in vSphere, see the <link href=\"http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.storage.doc/GUID-89091D59-D844-46B2-94C2-35A3961D23E7.html\">vSphere documentation</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:395(title)
msgid "Data store selection"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:396(para)
msgid "If storage policy is enabled, the driver initially selects all the data stores that match the associated storage policy."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:399(para)
msgid "If two or more data stores match the storage policy, the driver chooses a data store that is connected to the maximum number of hosts."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:402(para)
msgid "In case of ties, the driver chooses the data store with lowest space utilization, where space utilization is defined by the <literal>(1-freespace/totalspace)</literal> metric."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:407(para)
msgid "These actions reduce the number of volume migrations while attaching the volume to instances."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml:409(para)
msgid "The volume must be migrated if the ESX host for the instance cannot access the data store that contains the volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:9(title)
msgid "HDS HNAS iSCSI and NFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:11(para)
msgid "This Block Storage volume driver provides iSCSI and NFS support for <link href=\"http://www.hds.com/products/file-and-content/network-attached-storage/\">HNAS (Hitachi Network-attached Storage)</link> arrays such as, HNAS 3000 and 4000 family."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:17(para)
msgid "Use the HDS <placeholder-1/> command to communicate with an HNAS array. This utility package is available in the physical media distributed with the hardware or it can be copied from the SMU (<filename>/usr/local/bin/ssc</filename>)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:25(para)
msgid "The base NFS driver combined with the HNAS driver extensions support these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:58(para)
msgid "The HDS driver supports the concept of differentiated services (also referred to as quality of service) by mapping volume types to services provided through HNAS. HNAS supports a variety of storage options and file system capabilities which are selected through volume typing and the use of multiple back-ends. The HDS driver maps up to 4 volume types into separate exports/filesystems, and can support any number using multiple back-ends."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:65(para)
msgid "Configuration is read from an XML-formatted file (one per backend). Examples are shown for single and multi back-end cases."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:77(para)
msgid "The <literal>default</literal> volume type needs to be set in configuration file. If there is no <literal>default</literal> volume type, only matching volume types will work."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:86(title)
msgid "HNAS setup"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:87(para)
msgid "Before using iSCSI and NFS services, use the HNAS Web Interface to create storage pool(s), filesystem(s), and assign an EVS. For NFS, NFS exports should be created. For iSCSI, a SCSI Domain needs to be set."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:93(para)
msgid "In a single back-end deployment, only one OpenStack Block Storage instance runs on the OpenStack Block Storage server and controls one HNAS array: this deployment requires these configuration files:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:99(para)
msgid "Set the <option>hds_hnas_iscsi_config_file</option> option in the <filename>/etc/cinder/cinder.conf</filename> file to use the HNAS iSCSI volume driver. Or <option>hds_hnas_nfs_config_file</option> to use HNAS NFS driver. This option points to a configuration file.<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:111(para)
msgid "For HNAS iSCSI driver:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:114(para)
msgid "For HNAS NFS driver:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:119(para)
msgid "For HNAS iSCSI, configure <option>hds_hnas_iscsi_config_file</option> at the location specified previously. For example, <filename>/opt/hds/hnas/cinder_iscsi_conf.xml</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:137(para)
msgid "For HNAS NFS, configure <option>hds_hnas_nfs_config_file</option> at the location specified previously. For example, <filename>/opt/hds/hnas/cinder_nfs_conf.xml</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:156(para)
msgid "Up to 4 service stanzas can be included in the XML file; named <literal>svc_0</literal>, <literal>svc_1</literal>, <literal>svc_2</literal> and <literal>svc_3</literal>. Additional services can be enabled using multi-backend as described below."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:164(para)
msgid "In a multi back-end deployment, more than one OpenStack Block Storage instance runs on the same server. In this example, two HNAS arrays are used, possibly providing different storage performance:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:170(para)
msgid "For HNAS iSCSI, configure <filename>/etc/cinder/cinder.conf</filename>: the <literal>hnas1</literal> and <literal>hnas2</literal> configuration blocks are created. Set the <option>hds_hnas_iscsi_config_file</option> option to point to an unique configuration file for each block. Set the <option>volume_driver</option> option for each back-end to <literal>cinder.volume.drivers.hds.iscsi.HDSISCSIDriver</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:194(para)
msgid "Configure the <filename>/opt/hds/hnas/cinder_iscsi1_conf.xml</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:212(para)
msgid "Configure the <filename>/opt/hds/hnas/cinder_iscsi2_conf.xml</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:232(para)
msgid "For NFS, configure <filename>/etc/cinder/cinder.conf</filename>: the <literal>hnas1</literal> and <literal>hnas2</literal> configuration blocks are created. Set the <option>hds_hnas_nfs_config_file</option> option to point to an unique configuration file for each block. Set the <option>volume_driver</option> option for each back-end to <literal>cinder.volume.drivers.hds.nfs.HDSNFSDriver</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:256(para)
msgid "Configure the <filename>/opt/hds/hnas/cinder_nfs1_conf.xml</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:273(para)
msgid "Configure the <filename>/opt/hds/hnas/cinder_nfs2_conf.xml</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:294(para)
msgid "If you use volume types, you must configure them in the configuration file and set the <option>volume_backend_name</option> option to the appropriate back-end. In the previous multi back-end example, the <literal>platinum</literal> volume type is served by hnas-2, and the <literal>regular</literal> volume type is served by hnas-1."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:305(title)
msgid "Non-differentiated deployment of HNAS arrays"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:306(para)
msgid "You can deploy multiple OpenStack HNAS drivers instances that each control a separate HNAS array. Each instance does not need to have a volume type associated with it. The OpenStack Block Storage filtering algorithm selects the HNAS array with the largest available free space. In each configuration file, you must define the <literal>default</literal> volume type in the service labels."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:316(title)
msgid "HDS HNAS volume driver configuration options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:323(para)
msgid "There is no relative precedence or weight among these four labels."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:317(para)
msgid "These details apply to the XML format configuration file that is read by HDS volume driver. These differentiated service labels are predefined: <literal>svc_0</literal>, <literal>svc_1</literal>, <literal>svc_2</literal> and <literal>svc_3</literal><placeholder-1/>. Each respective service label associates with these parameters and tags:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:329(para)
msgid "A create_volume call with a certain volume type shall be matched up with this tag. The value <literal>default</literal> is special in that any service associated with this type is used to create volume when no other labels match. Other labels are case sensitive and should exactly match. If no configured volume types match the incoming requested type, an error occurs in volume creation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:341(para)
msgid "(iSCSI only) Virtual filesystem label associated with the service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:343(para)
msgid "(NFS only) Path to the volume <literal>&lt;ip_address&gt;:/&lt;path&gt;</literal> associated with the service. Additionally, this entry must be added in the file used to list available NFS shares. This file is located, by default, in <filename>/etc/cinder/nfs_shares</filename> or you can specify the location in the <option>nfs_shares_config</option> option in the cinder configuration file."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:353(para)
msgid "(iSCSI only) An iSCSI IP address dedicated to the service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:363(para)
msgid "The <code>get_volume_stats()</code> function always provides the available capacity based on the combined sum of all the HDPs that are used in these services labels."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:357(para)
msgid "Typically a OpenStack Block Storage volume instance has only one such service label. For example, any <literal>svc_0</literal>, <literal>svc_1</literal>, <literal>svc_2</literal> or <literal>svc_3</literal> can be associated with it. But any mix of these service labels can be used in the same instance <placeholder-1/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:387(para)
msgid "Management Port 0 IP address. Should be the IP address of the 'Admin' EVS."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:392(option)
msgid "hnas_cmd"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:397(para)
msgid "<option>hnas_cmd</option> is a command to communicate to HNAS array."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:402(option)
msgid "chap_enabled"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:407(para)
msgid "(iSCSI only) <option>chap_enabled</option> is a boolean tag used to enable CHAP authentication protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:415(para) ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:424(para)
msgid "supervisor"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:417(para)
msgid "Username is always required on HNAS."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:426(para)
msgid "Password is always required on HNAS."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:438(para)
msgid "Service labels: these four predefined names help four different sets of configuration options. Each can specify HDP and a unique volume type."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:451(para)
msgid "volume_type tag is used to match volume type. <literal>default</literal> meets any type of volume type, or if it is not specified. Any other volume type is selected if exactly matched during volume creation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:467(para)
msgid "(iSCSI only) iSCSI IP address where volume attaches for this volume type."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-hnas-driver.xml:478(para)
msgid "HDP, for HNAS iSCSI is the virtual filesystem label or the path (for HNAS NFS) where volume, or snapshot should be created."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/smbfs-volume-driver.xml:6(title)
msgid "SambaFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/smbfs-volume-driver.xml:7(para)
msgid "There is a volume back-end for Samba filesystems. Set the following in your <filename>cinder.conf</filename>, and use the following options to configure it."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zadara-volume-driver.xml:6(title)
msgid "Zadara"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zadara-volume-driver.xml:7(para)
msgid "There is a volume back-end for Zadara. Set the following in your <filename>cinder.conf</filename>, and use the following options to configure it."
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml:6(title)
msgid "IBM Tivoli Storage Manager backup driver"
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml:7(para)
msgid "The IBM Tivoli Storage Manager (TSM) backup driver enables performing volume backups to a TSM server."
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml:10(para)
msgid "The TSM client should be installed and configured on the machine running the <systemitem class=\"service\">cinder-backup </systemitem> service. See the <citetitle>IBM Tivoli Storage Manager Backup-Archive Client Installation and User's Guide</citetitle> for details on installing the TSM client."
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml:17(para)
msgid "To enable the IBM TSM backup driver, include the following option in <filename>cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml:20(para)
msgid "The following configuration options are available for the TSM backup driver."
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml:23(para)
msgid "This example shows the default options for the TSM backup driver."
msgstr ""
#: ./doc/config-reference/block-storage/backup/swift-backup-driver.xml:5(title)
msgid "Swift backup driver"
msgstr ""
#: ./doc/config-reference/block-storage/backup/swift-backup-driver.xml:6(para)
msgid "The backup driver for Swift back-end performs a volume backup to a Swift object storage system."
msgstr ""
#: ./doc/config-reference/block-storage/backup/swift-backup-driver.xml:8(para)
msgid "To enable the Swift backup driver, include the following option in the <filename>cinder.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/backup/swift-backup-driver.xml:12(para)
msgid "The following configuration options are available for the Swift back-end backup driver."
msgstr ""
#: ./doc/config-reference/block-storage/backup/swift-backup-driver.xml:16(para)
msgid "This example shows the default options for the Swift back-end backup driver."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:5(title)
msgid "Ceph backup driver"
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:6(para)
msgid "The Ceph backup driver backs up volumes of any type to a Ceph back-end store. The driver can also detect whether the volume to be backed up is a Ceph RBD volume, and if so, it tries to perform incremental and differential backups."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:11(para)
msgid "For source Ceph RBD volumes, you can perform backups within the same Ceph pool (not recommended). You can also perform backups between different Ceph pools and between different Ceph clusters."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:15(para)
msgid "At the time of writing, differential backup support in Ceph/librbd was quite new. This driver attempts a differential backup in the first instance. If the differential backup fails, the driver falls back to full backup/copy."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:20(para)
msgid "If incremental backups are used, multiple backups of the same volume are stored as snapshots so that minimal space is consumed in the backup store. It takes far less time to restore a volume than to take a full copy."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:26(para)
msgid "Block Storage enables you to:"
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:29(para)
msgid "Restore to a new volume, which is the default and recommended action."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:34(para)
msgid "Restore to the original volume from which the backup was taken. The restore action takes a full copy because this is the safest action."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:42(para)
msgid "To enable the Ceph backup driver, include the following option in the <filename>cinder.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:46(para)
msgid "The following configuration options are available for the Ceph backup driver."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml:50(para)
msgid "This example shows the default options for the Ceph backup driver."
msgstr ""
#: ./doc/config-reference/database-service/section-databaseservice-db.xml:7(title)
msgid "Configure the database"
msgstr ""
#: ./doc/config-reference/database-service/section-databaseservice-db.xml:9(para)
msgid "Use the options to configure the used databases:"
msgstr ""
#: ./doc/config-reference/database-service/section-databaseservice-rpc.xml:7(title) ./doc/config-reference/orchestration/section_orchestration-rpc.xml:7(title) ./doc/config-reference/image-service/section_image-service-rpc.xml:6(title)
msgid "Configure the RPC messaging system"
msgstr ""
#: ./doc/config-reference/database-service/section-databaseservice-rpc.xml:8(para)
msgid "OpenStack projects use an open standard for messaging middleware known as AMQP. This messaging middleware enables the OpenStack services that run on multiple servers to talk to each other. OpenStack Trove RPC supports three implementations of AMQP: <application>RabbitMQ</application>, <application>Qpid</application>, and <application>ZeroMQ</application>."
msgstr ""
#: ./doc/config-reference/database-service/section-databaseservice-rpc.xml:18(para)
msgid "Use these options to configure the <application>RabbitMQ</application> messaging system:"
msgstr ""
#: ./doc/config-reference/database-service/section-databaseservice-rpc.xml:25(para)
msgid "Use these options to configure the <application>Qpid</application> messaging system:"
msgstr ""
#: ./doc/config-reference/database-service/section-databaseservice-rpc.xml:31(title)
msgid "Configure ZeroMq"
msgstr ""
#: ./doc/config-reference/database-service/section-databaseservice-rpc.xml:32(para)
msgid "Use these options to configure the <application>ZeroMq</application> messaging system:"
msgstr ""
#: ./doc/config-reference/orchestration/section_orchestration-rpc.xml:19(para)
msgid "OpenStack Oslo RPC uses <application>RabbitMQ</application> by default. Use these options to configure the <application>RabbitMQ</application> message system. The <option>rpc_backend</option> option is optional as long as <application>RabbitMQ</application> is the default messaging system. However, if it is included in the configuration, you must set it to <literal>heat.openstack.common.rpc.impl_kombu</literal>."
msgstr ""
#: ./doc/config-reference/orchestration/section_orchestration-rpc.xml:31(para)
msgid "Use these options to configure the <application>RabbitMQ</application> messaging system. You can configure messaging communication for different installation scenarios, tune retries for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the <option>notification_driver</option> option to <literal>heat.openstack.common.notifier.rpc_notifier</literal> in the <filename>heat.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/orchestration/section_orchestration-rpc.xml:43(para)
msgid "Use these options to configure the <application>Qpid</application> messaging system for OpenStack Oslo RPC. <application>Qpid</application> is not the default messaging system, so you must enable it by setting the <option>rpc_backend</option> option in the <filename>heat.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/orchestration/section_orchestration-rpc.xml:50(para)
msgid "This critical option points the compute nodes to the <application>Qpid</application> broker (server). Set the <option>qpid_hostname</option> option to the host name where the broker runs in the <filename>heat.conf</filename> file."
msgstr ""
#: ./doc/config-reference/orchestration/section_orchestration-rpc.xml:56(para)
msgid "The <option>qpid_hostname</option> option accepts a host name or IP address value."
msgstr ""
#: ./doc/config-reference/orchestration/section_orchestration-rpc.xml:88(para)
msgid "Use these options to configure the <application>ZeroMQ</application> messaging system for OpenStack Oslo RPC. <application>ZeroMQ</application> is not the default messaging system, so you must enable it by setting the <option>rpc_backend</option> option in the <filename>heat.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/orchestration/section_orchestration-api.xml:7(title)
msgid "Configure APIs"
msgstr ""
#: ./doc/config-reference/orchestration/section_orchestration-api.xml:8(para)
msgid "The following options allow configuration of the APIs that Orchestration supports. Currently this includes compatibility APIs for CloudFormation and CloudWatch and a native API."
msgstr ""
#: ./doc/config-reference/orchestration/section_orchestration-clients.xml:7(title)
msgid "Configure Clients"
msgstr ""
#: ./doc/config-reference/orchestration/section_orchestration-clients.xml:8(para)
msgid "The following options allow configuration of the clients that Orchestration uses to talk to other services."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:6(title)
msgid "Image Service sample configuration files"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:7(para)
msgid "You can find the files that are described in this section in the <filename class=\"directory\">/etc/glance/</filename> directory."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:11(title)
msgid "glance-api.conf"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:12(para)
msgid "The configuration file for the Image Service API is found in the <filename>glance-api.conf</filename> file."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:18(title)
msgid "glance-registry.conf"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:19(para)
msgid "Configuration for the Image Service's registry, which stores the metadata about images, is found in the <filename>glance-registry.conf</filename> file."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:26(title)
msgid "glance-api-paste.ini"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:27(para)
msgid "Configuration for the Image Service's API middleware pipeline is found in the <filename>glance-api-paste.ini</filename> file."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:33(title)
msgid "glance-registry-paste.ini"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:34(para)
msgid "The Image Service's middleware pipeline for its registry is found in the <filename>glance-registry-paste.ini</filename> file."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:39(title)
msgid "glance-scrubber.conf"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:40(para)
msgid "<placeholder-1/> is a utility for the Image Service that cleans up images that have been deleted; its configuration is stored in the <filename>glance-scrubber.conf</filename> file."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:43(para)
msgid "Multiple instances of <systemitem>glance-scrubber</systemitem> can be run in a single deployment, but only one of them can be designated as the <systemitem>cleanup_scrubber</systemitem> in the <filename>glance-scrubber.conf</filename> file. The <systemitem>cleanup_scrubber</systemitem> coordinates other <systemitem>glance-scrubber</systemitem> instances by maintaining the master queue of images that need to be removed."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-sample-configuration-files.xml:54(para)
msgid "The <filename>/etc/glance/policy.json</filename> file defines additional access controls that apply to the Image Service."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-rpc.xml:7(para)
msgid "OpenStack projects use an open standard for messaging middleware known as AMQP. This messaging middleware enables the OpenStack services that run on multiple servers to talk to each other. The OpenStack common library project, oslo, supports three implementations of AMQP: <application>RabbitMQ</application>, <application>Qpid</application>, and <application>ZeroMQ</application>."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-rpc.xml:14(para)
msgid "The following tables contain settings to configure the messaging middleware for the Image Service:"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:7(title)
msgid "Configure vCenter data stores for the Image Service back end"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:10(para)
msgid "To use vCenter data stores for the Image Service back end, you must update the <filename>glance-api.conf</filename> file, as follows:"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:15(para)
msgid "Add data store parameters to the <literal>VMware Datastore Store Options</literal> section."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:19(para)
msgid "Specify vSphere as the back end."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:23(para)
msgid "You must configure any configured Image Service data stores for the Compute service."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:30(para)
msgid "If you intend to use multiple data stores for the back end, use the SPBM feature."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:26(para)
msgid "You can specify vCenter data stores directly by using the data store name or Storage Policy Based Management (SPBM), which requires vCenter Server 5.5 or later. For details, see <xref linkend=\"glance-backend-DS\"/>. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:33(para)
msgid "In the <literal>DEFAULT</literal> section, set the <parameter>default_store</parameter> parameter to <placeholder-1/>, as shown in this code sample:"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:43(para)
msgid "The following table describes the parameters in the <literal>VMware Datastore Store Options</literal> section:"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:47(para)
msgid "The following block of text shows a sample configuration:"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:56(replaceable)
msgid "ADMINISTRATOR"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:64(replaceable)
msgid "DATACENTER"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:67(replaceable)
msgid "datastore1"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:82(replaceable)
msgid "5"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:86(replaceable)
msgid "/openstack_glance"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:91(title)
msgid "Configure vCenter data stores for the back end"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:92(para)
msgid "You can specify a vCenter data store for the back end by setting the <parameter>vmware_datastore_name</parameter> parameter value to the vCenter name of the data store. This configuration limits the back end to a single data store."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:97(para)
msgid "Alternatively, you can specify a SPBM policy, which can comprise multiple vCenter data stores. Both approaches are described."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:101(para)
msgid "SPBM requires vCenter Server 5.5 or later."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:104(title)
msgid "To configure a single data store"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:106(para)
msgid "If present, comment or delete the <parameter>vmware_pbm_wsdl_location</parameter> and <parameter>vmware_pbm_policy</parameter> parameters."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:112(para)
msgid "Uncomment and define the <parameter>vmware_datastore_name</parameter> parameter with the name of the vCenter data store."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:123(title)
msgid "To configure multiple data stores using SPBM"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:126(para)
msgid "In vCenter, use tagging to identify the data stores and define a storage policy:"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:130(para)
msgid "Create the tag."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:137(para)
msgid "Create a tag-based storage policy that uses one or more tags to identify a set of data stores."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:147(para)
msgid "For details about storage policies in vSphere, see the <link href=\"http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.storage.doc/GUID-A8BA9141-31F1-4555-A554-4B5B04D75E54.html\">vSphere documentation</link>."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:154(para)
msgid "Return to the <filename>glance-api.conf</filename> file."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:159(para)
msgid "Comment or delete the <parameter>vmware_datastore_name</parameter> parameter."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:164(para)
msgid "Uncomment and define the <parameter>vmware_pbm_policy</parameter> parameter by entering the same value as the tag you defined and applied to the data stores in vCenter."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:171(para)
msgid "Uncomment and define the <parameter>vmware_pbm_wsdl_location</parameter> parameter by entering the location of the PBM service WSDL file. For example, <filename>file:///opt/SDK/spbm/wsdl/pbmService.wsdl</filename>."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backend-vmware.xml:177(para)
msgid "If you do not set this parameter, the storage policy cannot be used to place images in the data store."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-api.xml:6(title)
msgid "Configure the API"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-api.xml:7(para)
msgid "The Image Service has two APIs: the user-facing API, and the registry API, which is for internal requests that require access to the database."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-api.xml:10(para)
msgid "Both of the APIs currently have two major versions, v1 and v2. It is possible to run either or both versions, by setting appropriate values of <literal>enable_v1_api</literal>, <literal>enable_v2_api</literal>, <literal>enable_v1_registry</literal> and <literal>enable_v2_registry</literal>. If the v2 API is used, running <systemitem class=\"service\">glance-registry</systemitem> is optional, as v2 of <systemitem class=\"service\">glance-api</systemitem> can connect directly to the database."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-api.xml:18(para)
msgid "Tables of all the options used to configure the APIs, including enabling SSL and modifying WSGI settings are found below."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backends.xml:7(title)
msgid "Configure back ends"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backends.xml:11(para)
msgid "OpenStack Block Storage (cinder)"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backends.xml:14(para)
msgid "A directory on a local file system"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backends.xml:17(para)
msgid "GridFS"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backends.xml:20(para)
msgid "Ceph RBD"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backends.xml:23(para)
msgid "Amazon S3"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backends.xml:26(para)
msgid "Sheepdog"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backends.xml:29(para)
msgid "OpenStack Object Storage (swift)"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backends.xml:32(para)
msgid "VMware ESX"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-backends.xml:8(para)
msgid "The Image Service supports several back ends for storing virtual machine images:<placeholder-1/> The following tables detail the options available for each."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-ISO-support.xml:7(title)
msgid "Support for ISO images"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-ISO-support.xml:8(para)
msgid "You can load ISO images into the Image Service. You can subsequently boot an ISO image using Compute."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-ISO-support.xml:11(title)
msgid "To load an ISO image to an Image Service data store"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-ISO-support.xml:14(para)
msgid "Obtain the ISO image. For example, <filename>ubuntu-13.04-server-amd64.iso</filename>."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-ISO-support.xml:18(para)
msgid "In the Image Service, run the following command:"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-ISO-support.xml:23(para)
msgid "In this command, <literal>ubuntu.iso</literal> is the name for the ISO image after it is loaded to the Image Service, and <literal>ubuntu-13.04-server-amd64.iso</literal> is the name of the source ISO image."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-ISO-support.xml:30(para)
msgid "Optionally, confirm the upload in Compute."
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-ISO-support.xml:31(para) ./doc/config-reference/image-service/section_image-service-ISO-support.xml:38(para)
msgid "Run this command:"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-ISO-support.xml:36(title)
msgid "To boot an instance from an ISO image"
msgstr ""
#: ./doc/config-reference/image-service/section_image-service-ISO-support.xml:41(para)
msgid "In this command, <literal>ubuntu.iso</literal> is the ISO image, and <literal>instance_name</literal> is the name of the new instance."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-backing-storage.xml:6(title)
msgid "Configure Compute backing storage"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-backing-storage.xml:7(para)
msgid "Backing Storage is the storage used to provide the expanded operating system image, and any ephemeral storage. Inside the virtual machine, this is normally presented as two virtual hard disks (for example, <filename>/dev/vda</filename> and <filename>/dev/vdb</filename> respectively). However, inside OpenStack, this can be derived from one of three methods: LVM, QCOW or RAW, chosen using the <literal>images_type</literal> option in <filename>nova.conf</filename> on the compute node."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-backing-storage.xml:17(para)
msgid "QCOW is the default backing store. It uses a copy-on-write philosophy to delay allocation of storage until it is actually needed. This means that the space required for the backing of an image can be significantly less on the real disk than what seems available in the virtual machine operating system."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-backing-storage.xml:24(para)
msgid "RAW creates files without any sort of file formatting, effectively creating files with the plain binary one would normally see on a real disk. This can increase performance, but means that the entire size of the virtual disk is reserved on the physical disk."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-backing-storage.xml:30(para)
msgid "Local <link href=\"http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)\">LVM volumes</link> can also be used. Set <literal>images_volume_group = nova_local</literal> where <literal>nova_local</literal> is the name of the LVM group you have created."
msgstr ""
#: ./doc/config-reference/compute/section_xapi-ami-setup.xml:8(title)
msgid "Prepare for AMI type images"
msgstr ""
#: ./doc/config-reference/compute/section_xapi-ami-setup.xml:9(para)
msgid "To support AMI type images in your OpenStack installation, you must create the <filename>/boot/guest</filename> directory on dom0. One of the OpenStack XAPI plugins will extract the kernel and ramdisk from AKI and ARI images and put them to that directory."
msgstr ""
#: ./doc/config-reference/compute/section_xapi-ami-setup.xml:15(para)
msgid "OpenStack maintains the contents of this directory and its size should not increase during normal operation. However, in case of power failures or accidental shutdowns, some files might be left over. To prevent these files from filling up dom0's filesystem, set up this directory as a symlink that points to a subdirectory of the local SR."
msgstr ""
#: ./doc/config-reference/compute/section_xapi-ami-setup.xml:22(para)
msgid "Run these commands in dom0 to achieve this setup:"
msgstr ""
#: ./doc/config-reference/compute/section_rpc.xml:9(para)
msgid "OpenStack projects use AMQP, an open standard for messaging middleware. OpenStack services that run on multiple servers to talk to each other. OpenStack Oslo RPC supports three implementations of AMQP: <application>RabbitMQ</application>, <application>Qpid</application>, and <application>ZeroMQ</application>."
msgstr ""
#: ./doc/config-reference/compute/section_rpc.xml:17(para)
msgid "OpenStack Oslo RPC uses <application>RabbitMQ</application> by default. Use these options to configure the <application>RabbitMQ</application> message system. The <literal>rpc_backend</literal> option is not required as long as <application>RabbitMQ</application> is the default messaging system. However, if it is included the configuration, you must set it to <literal>nova.openstack.common.rpc.impl_kombu</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_rpc.xml:26(para)
msgid "You can use these additional options to configure the <application>RabbitMQ</application> messaging system. You can configure messaging communication for different installation scenarios, tune retries for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the <option>notification_driver</option> option to <literal>nova.openstack.common.notifier.rpc_notifier</literal> in the <filename>nova.conf</filename> file. The default for sending usage data is sixty seconds plus a random number of seconds from zero to sixty."
msgstr ""
#: ./doc/config-reference/compute/section_rpc.xml:40(para)
msgid "Use these options to configure the <application>Qpid</application> messaging system for OpenStack Oslo RPC. <application>Qpid</application> is not the default messaging system, so you must enable it by setting the <option>rpc_backend</option> option in the <filename>nova.conf</filename> file."
msgstr ""
#: ./doc/config-reference/compute/section_rpc.xml:47(para)
msgid "This critical option points the compute nodes to the <application>Qpid</application> broker (server). Set <option>qpid_hostname</option> to the host name where the broker runs in the <filename>nova.conf</filename> file."
msgstr ""
#: ./doc/config-reference/compute/section_rpc.xml:69(para)
msgid "This table lists additional options that you use to configure the Qpid messaging driver for OpenStack Oslo RPC. These options are used infrequently."
msgstr ""
#: ./doc/config-reference/compute/section_rpc.xml:76(para)
msgid "Use these options to configure the <application>ZeroMQ</application> messaging system for OpenStack Oslo RPC. <application>ZeroMQ</application> is not the default messaging system, so you must enable it by setting the <option>rpc_backend</option> option in the <filename>nova.conf</filename> file."
msgstr ""
#: ./doc/config-reference/compute/section_rpc.xml:86(para)
msgid "Use these options to configure the <application>RabbitMQ</application> and <application>Qpid</application> messaging drivers."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-db.xml:6(title)
msgid "Database configuration"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-db.xml:7(para)
msgid "You can configure OpenStack Compute to use any SQLAlchemy-compatible database. The database name is <literal>nova</literal>. The <systemitem class=\"service\">nova-conductor</systemitem> service is the only service that writes to the database. The other Compute services access the database through the <systemitem class=\"service\">nova-conductor</systemitem> service."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-db.xml:14(para)
msgid "To ensure that the database schema is current, run the following command:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-db.xml:16(para)
msgid "If <systemitem class=\"service\">nova-conductor</systemitem> is not used, entries to the database are mostly written by the <systemitem class=\"service\">nova-scheduler</systemitem> service, although all services must be able to update entries in the database."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-db.xml:21(para)
msgid "In either case, use the configuration option settings documented in <xref linkend=\"config_table_nova_database\"/> to configure the connection string for the nova database."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:7(title)
msgid "Overview of nova.conf"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:8(para)
msgid "The <filename>nova.conf</filename> configuration file is an <link href=\"https://en.wikipedia.org/wiki/INI_file\">INI file format</link> as explained in <xref linkend=\"config_format\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:12(para)
msgid "You can use a particular configuration option file by using the <literal>option</literal> (<filename>nova.conf</filename>) parameter when you run one of the <literal>nova-*</literal> services. This parameter inserts configuration option definitions from the specified configuration file name, which might be useful for debugging or performance tuning."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:18(para)
msgid "For a list of configuration options, see the tables in this guide."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:20(para)
msgid "To learn more about the <filename>nova.conf</filename> configuration file, review the general purpose configuration options documented in <xref linkend=\"config_table_nova_common\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:24(para)
msgid "Do not specify quotes around Nova options."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:27(title)
msgid "Sections"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:32(literal) ./doc/config-reference/compute/section_compute-scheduler.xml:776(td) ./doc/config-reference/compute/section_compute-scheduler.xml:786(td) ./doc/config-reference/compute/section_compute-scheduler.xml:798(td)
msgid "[DEFAULT]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:34(para)
msgid "Contains most configuration options. If the documentation for a configuration option does not specify its section, assume that it appears in this section."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:42(literal)
msgid "[baremetal]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:44(para)
msgid "Configures the baremetal hypervisor driver."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:49(literal) ./doc/config-reference/compute/section_compute-scheduler.xml:880(td) ./doc/config-reference/compute/section_compute-scheduler.xml:888(td) ./doc/config-reference/compute/section_compute-scheduler.xml:895(td) ./doc/config-reference/compute/section_compute-scheduler.xml:902(td) ./doc/config-reference/compute/section_compute-scheduler.xml:912(td)
msgid "[cells]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:51(para)
msgid "Configures cells functionality. For details, see <xref linkend=\"section_compute-cells\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:59(literal)
msgid "[conductor]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:61(para)
msgid "Configures the <systemitem class=\"service\">nova-conductor</systemitem> service."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:68(literal)
msgid "[database]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:70(para)
msgid "Configures the database that Compute uses."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:76(literal)
msgid "[glance]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:78(para)
msgid "Configures how to access the Image Service."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:84(literal)
msgid "[hyperv]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:86(para)
msgid "Configures the Hyper-V hypervisor driver."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:92(literal)
msgid "[image_file_url]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:94(para)
msgid "Configures additional filesystems to access the Image Service."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:101(literal)
msgid "[keymgr]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:103(para)
msgid "Configures the key manager."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:109(literal)
msgid "[keystone_authtoken]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:111(para)
msgid "Configures authorization via Identity service."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:117(literal)
msgid "[libvirt]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:119(para)
msgid "Configures the hypervisor drivers using the Libvirt library: KVM, LXC, Qemu, UML, Xen."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:126(literal)
msgid "[matchmaker_redis]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:128(para)
msgid "Configures a Redis server."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:134(literal)
msgid "[matchmaker_ring]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:136(para)
msgid "Configures a matchmaker ring."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:142(literal) ./doc/config-reference/compute/section_compute-scheduler.xml:807(td) ./doc/config-reference/compute/section_compute-scheduler.xml:813(td) ./doc/config-reference/compute/section_compute-scheduler.xml:823(td) ./doc/config-reference/compute/section_compute-scheduler.xml:845(td)
msgid "[metrics]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:144(para)
msgid "Configures weights for the metrics weighter."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:150(literal)
msgid "[neutron]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:152(para)
msgid "Configures Networking specific options."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:158(literal)
msgid "[osapi_v3]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:160(para)
msgid "Configures the OpenStack Compute API v3."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:166(literal)
msgid "[rdp]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:168(para)
msgid "Configures RDP proxying."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:174(literal)
msgid "[serial_console]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:176(para)
msgid "Configures serial console."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:182(literal)
msgid "[spice]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:184(para)
msgid "Configures virtual consoles using SPICE."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:190(literal)
msgid "[ssl]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:192(para)
msgid "Configures certificate authority using SSL."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:198(literal)
msgid "[trusted_computing]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:200(para)
msgid "Configures the trusted computing pools functionality and how to connect to a remote attestation service."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:207(literal)
msgid "[upgrade_levels]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:209(para)
msgid "Configures version locking on the RPC (message queue) communications between the various Compute services to allow live upgrading an OpenStack installation."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:218(literal)
msgid "[vmware]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:220(para)
msgid "Configures the VMware hypervisor driver."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:226(literal)
msgid "[xenserver]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:228(para)
msgid "Configures the XenServer hypervisor driver."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:234(literal)
msgid "[zookeeper]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:236(para)
msgid "Configures the ZooKeeper ServiceGroup driver."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml:28(para)
msgid "Configuration options are grouped by section. The Compute configuration file supports the following sections: <placeholder-1/>"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_compute-config-samples.xml:41(None)
msgid "@@image: '../../common/figures/SCH_5004_V00_NUAC-Network_mode_KVM_Flat_OpenStack.png'; md5=1e883ef27e5912b5c516d153b8844a28"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_compute-config-samples.xml:80(None)
msgid "@@image: '../../common/figures/SCH_5005_V00_NUAC-Network_mode_XEN_Flat_OpenStack.png'; md5=3b151435a0fda3702d4fac5a964fac83"
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml:6(title)
msgid "Example <filename>nova.conf</filename> configuration files"
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml:8(para)
msgid "The following sections describe the configuration options in the <filename>nova.conf</filename> file. You must copy the <filename>nova.conf</filename> file to each compute node. The sample <filename>nova.conf</filename> files show examples of specific configurations."
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml:14(title)
msgid "Small, private cloud"
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml:15(para)
msgid "This example <filename>nova.conf</filename> file configures a small private cloud with cloud controller services, database server, and messaging server on the same server. In this case, CONTROLLER_IP represents the IP address of a central server, BRIDGE_INTERFACE represents the bridge such as br100, the NETWORK_INTERFACE represents an interface to your VLAN setup, and passwords are represented as DB_PASSWORD_COMPUTE for your Compute (nova) database password, and RABBIT PASSWORD represents the password to your message queue installation."
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml:28(title) ./doc/config-reference/compute/section_compute-config-samples.xml:35(title) ./doc/config-reference/compute/section_compute-config-samples.xml:74(title)
msgid "KVM, Flat, MySQL, and Glance, OpenStack or EC2 API"
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml:30(para)
msgid "This example <filename>nova.conf</filename> file, from an internal Rackspace test system, is used for demonstrations."
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml:47(title)
msgid "XenServer, Flat networking, MySQL, and Glance, OpenStack API"
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml:49(para)
msgid "This example <filename>nova.conf</filename> file is from an internal Rackspace test system."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:12(title)
msgid "Hyper-V virtualization platform"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:21(emphasis)
msgid "Windows Server 2008 R2"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:22(para)
msgid "Both Server and Server Core with the Hyper-V role enabled (Shared Nothing Live migration is not supported using 2008 R2)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:26(emphasis)
msgid "Windows Server 2012 and Windows Server 2012 R2"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:27(para)
msgid "Server and Core (with the Hyper-V role enabled), and Hyper-V Server"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:13(para)
msgid "It is possible to use Hyper-V as a compute node within an OpenStack Deployment. The <systemitem class=\"service\">nova-compute</systemitem> service runs as \"openstack-compute,\" a 32-bit service directly upon the Windows platform with the Hyper-V role enabled. The necessary Python components as well as the <systemitem class=\"service\">nova-compute</systemitem> service are installed directly onto the Windows platform. Windows Clustering Services are not needed for functionality within the OpenStack infrastructure. The use of the Windows Server 2012 platform is recommend for the best experience and is the platform for active development. The following Windows platforms have been tested as compute nodes:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:32(title)
msgid "Hyper-V configuration"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:33(para)
msgid "The only OpenStack services required on a Hyper-V node are <systemitem class=\"service\">nova-compute</systemitem> and <systemitem class=\"service\">neutron-hyperv-agent</systemitem>. Regarding the resources needed for this host you have to consider that Hyper-V will require 16GB - 20GB of disk space for the OS itself, including updates. Two NICs are required, one connected to the management network and one to the guest data network."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:41(para)
msgid "The following sections discuss how to prepare the Windows Hyper-V node for operation as an OpenStack compute node. Unless stated otherwise, any configuration information should work for the Windows 2008 R2, 2012 and 2012 R2 platforms."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:47(title)
msgid "Local storage considerations"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:48(para)
msgid "The Hyper-V compute node needs to have ample storage for storing the virtual machine images running on the compute nodes. You may use a single volume for all, or partition it into an OS volume and VM volume. It is up to the individual deploying to decide."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:54(title)
msgid "Configure NTP"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:55(para)
msgid "Network time services must be configured to ensure proper operation of the OpenStack nodes. To set network time on your Windows host you must run the following commands:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:59(para)
msgid "Keep in mind that the node will have to be time synchronized with the other nodes of your OpenStack environment, so it is important to use the same NTP server. Note that in case of an Active Directory environment, you may do this only for the AD Domain Controller."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:65(title)
msgid "Configure Hyper-V virtual switching"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:66(para)
msgid "Information regarding the Hyper-V virtual Switch can be located here: <link href=\"http://technet.microsoft.com/en-us/library/hh831823.aspx\">http://technet.microsoft.com/en-us/library/hh831823.aspx</link>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:70(para)
msgid "To quickly enable an interface to be used as a Virtual Interface the following PowerShell may be used:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:73(replaceable)
msgid "YOUR_BRIDGE_NAME"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:73(option)
msgid "-AllowManagementOS"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:74(para)
msgid "It is very important to make sure that when you are using an Hyper-V node with only 1 NIC the -AllowManagementOS option is set on <literal>True</literal>, otherwise you will lose connectivity to the Hyper-V node."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:79(title)
msgid "Enable iSCSI initiator service"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:80(para)
msgid "To prepare the Hyper-V node to be able to attach to volumes provided by cinder you must first make sure the Windows iSCSI initiator service is running and started automatically."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:87(title)
msgid "Configure shared nothing live migration"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:88(para)
msgid "Detailed information on the configuration of live migration can be found here: <link href=\"http://technet.microsoft.com/en-us/library/jj134199.aspx\">http://technet.microsoft.com/en-us/library/jj134199.aspx</link>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:91(para)
msgid "The following outlines the steps of shared nothing live migration."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:94(para)
msgid "The target hosts ensures that live migration is enabled and properly configured in Hyper-V."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:98(para)
msgid "The target hosts checks if the image to be migrated requires a base VHD and pulls it from the Image Service if not already available on the target host."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:104(para)
msgid "The source hosts ensures that live migration is enabled and properly configured in Hyper-V."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:108(para)
msgid "The source hosts initiates a Hyper-V live migration."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:111(para)
msgid "The source hosts communicates to the manager the outcome of the operation."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:119(literal)
msgid "instances_shared_storage = False"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:120(para)
msgid "This needed to support \"shared nothing\" Hyper-V live migrations. It is used in nova/compute/manager.py"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:124(literal)
msgid "limit_cpu_features = True"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:125(para)
msgid "This flag is needed to support live migration to hosts with different CPU features. This flag is checked during instance creation in order to limit the CPU features used by the VM."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:131(literal)
msgid "instances_path = DRIVELETTER:\\PATH\\TO\\YOUR\\INSTANCES"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:115(para)
msgid "The following two configuration options/flags are needed in order to support Hyper-V live migration and must be added to your <filename>nova.conf</filename> on the Hyper-V compute node:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:135(para)
msgid "Additional Requirements:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:138(para)
msgid "Hyper-V 2012 R2 or Windows Server 2012 R2 with Hyper-V role enabled"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:141(para)
msgid "A Windows domain controller with the Hyper-V compute nodes as domain members"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:145(para)
msgid "The instances_path command-line option/flag needs to be the same on all hosts."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:149(para)
msgid "The <systemitem class=\"service\">openstack-compute</systemitem> service deployed with the setup must run with domain credentials. You can set the service credentials with:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:157(emphasis)
msgid "How to setup live migration on Hyper-V"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:158(para)
msgid "To enable 'shared nothing live' migration, run the 3 PowerShell instructions below on each Hyper-V host:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:164(para)
msgid "Please replace the <replaceable>IP_ADDRESS</replaceable> with the address of the interface which will provide the virtual switching for nova-network."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:167(emphasis)
msgid "Additional Reading"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:168(para)
msgid "Here's an article that clarifies the various live migration options in Hyper-V:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:172(link)
msgid "http://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:177(title)
msgid "Install nova-compute using OpenStack Hyper-V installer"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:179(para)
msgid "In case you want to avoid all the manual setup, you can use Cloudbase Solutions' installer. You can find it here:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:183(link)
msgid "https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:185(para)
msgid "It installs an independent Python environment, in order to avoid conflicts with existing applications, generates dynamically a <filename>nova.conf</filename> file based on the parameters provided by you."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:187(para)
msgid "The installer can also be used for an automated and unattended mode for deployments on a massive number of servers. More details about how to use the installer and its features can be found here:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:190(link)
msgid "https://www.cloudbase.it"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:195(title)
msgid "Python"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:196(para)
msgid "Python 2.7 32bit must be installed as most of the libraries are not working properly on the 64bit version."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:199(title)
msgid "Setting up Python prerequisites"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:201(para)
msgid "Download and then install it using the MSI installer from here:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:204(link)
msgid "http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:213(para)
msgid "Make sure that the <filename>Python</filename> and <filename>Python\\Scripts</filename> paths are set up in the <envar>PATH</envar> environment variable."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:223(title)
msgid "Python dependencies"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:224(para)
msgid "The following packages need to be downloaded and manually installed:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:228(package)
msgid "setuptools"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:231(link)
msgid "http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exel"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:236(package)
msgid "pip"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:239(link)
msgid "http://pip.readthedocs.org/en/latest/installing.html"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:244(package)
msgid "MySQL-python"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:246(link)
msgid "http://codegood.com/download/10/"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:251(package)
msgid "PyWin32"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:254(link)
msgid "http://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:259(package)
msgid "Greenlet"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:262(link)
msgid "http://www.lfd.uci.edu/~gohlke/pythonlibs/#greenlet"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:267(package)
msgid "PyCryto"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:270(link)
msgid "http://www.voidspace.org.uk/downloads/pycrypto26/pycrypto-2.6.win32-py2.7.exe"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:275(para)
msgid "The following packages must be installed with pip:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:278(package)
msgid "ecdsa"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:281(package)
msgid "amqp"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:284(package)
msgid "wmi"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:292(title)
msgid "Other dependencies"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:293(para)
msgid "<literal>qemu-img</literal> is required for some of the image related operations. You can get it from here: <link href=\"http://qemu.weilnetz.de/\">http://qemu.weilnetz.de/</link>. You must make sure that the <literal>qemu-img</literal> path is set in the PATH environment variable."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:298(para)
msgid "Some Python packages need to be compiled, so you may use MinGW or Visual Studio. You can get MinGW from here: <link href=\"http://sourceforge.net/projects/mingw/\"> http://sourceforge.net/projects/mingw/</link>. You must configure which compiler to be used for this purpose by using the <filename>distutils.cfg</filename> file in <filename>$Python27\\Lib\\distutils</filename>, which can contain:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:307(para)
msgid "As a last step for setting up MinGW, make sure that the MinGW binaries' directories are set up in PATH."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:312(title)
msgid "Install Nova-compute"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:314(title)
msgid "Download the nova code"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:317(para)
msgid "Use Git to download the necessary source code. The installer to run Git on Windows can be downloaded here:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:324(para)
msgid "Download the installer. Once the download is complete, run the installer and follow the prompts in the installation wizard. The default should be acceptable for the needs of the document."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:333(para)
msgid "Run the following to clone the Nova code."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:339(title)
msgid "Install nova-compute service"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:340(para)
msgid "To install <systemitem class=\"service\">Nova-compute</systemitem>, run:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:348(title)
msgid "Configure nova-compute"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:349(para)
msgid "The <filename>nova.conf</filename> file must be placed in <filename>C:\\etc\\nova</filename> for running OpenStack on Hyper-V. Below is a sample <filename>nova.conf</filename> for Windows:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:389(replaceable)
msgid "IP_ADDRESS:35357"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:401(para)
msgid "<xref linkend=\"config_table_nova_hyperv\"/> contains a reference of all options for hyper-v."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:405(title)
msgid "Prepare images for use with Hyper-V"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:406(para)
msgid "Hyper-V currently supports only the VHD and VHDX file format for virtual machine instances. Detailed instructions for installing virtual machines on Hyper-V can be found here:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:410(link)
msgid "http://technet.microsoft.com/en-us/library/cc772480.aspx"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:411(para)
msgid "Once you have successfully created a virtual machine, you can then upload the image to glance using the native glance-client:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:413(replaceable)
msgid "VM_IMAGE_NAME"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:415(para)
msgid "VHD and VHDX files sizes can be bigger than their maximum internal size, as such you need to boot instances using a flavor with a slightly bigger disk size than the internal size of the disk file. To create VHDs, use the following PowerShell cmdlet:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:417(replaceable)
msgid "DISK_NAME.vhd"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:417(replaceable)
msgid "VHD_SIZE"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:421(title)
msgid "Run Compute with Hyper-V"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:422(para)
msgid "To start the <systemitem class=\"service\">nova-compute</systemitem> service, run this command from a console in the Windows server:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:428(title)
msgid "Troubleshoot Hyper-V configuration"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:432(para)
msgid "I ran the <placeholder-1/> command from my controller; however, I'm not seeing smiley faces for Hyper-V compute nodes, what do I do?"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:436(emphasis)
msgid "Verify that you are synchronized with a network time source. For instructions about how to configure NTP on your Hyper-V compute node, see <xref linkend=\"configure-ntp-windows\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:443(para)
msgid "How do I restart the compute service?"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml:450(para)
msgid "How do I restart the iSCSI initiator service?"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml:7(title)
msgid "Baremetal driver"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml:8(para)
msgid "The baremetal driver is a hypervisor driver for OpenStack Nova Compute. Within the OpenStack framework, it has the same role as the drivers for other hypervisors (libvirt, xen, etc), and yet it is presently unique in that the hardware is not virtualized - there is no hypervisor between the tenants and the physical hardware. It exposes hardware through the OpenStack APIs, using pluggable sub-drivers to deliver machine imaging (PXE) and power control (IPMI). With this, provisioning and management of physical hardware is accomplished by using common cloud APIs and tools, such as the Orchestration module (heat) or salt-cloud. However, due to this unique situation, using the baremetal driver requires some additional preparation of its environment, the details of which are beyond the scope of this guide."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml:21(para)
msgid "Some OpenStack Compute features are not implemented by the baremetal hypervisor driver. See the <link href=\"http://wiki.openstack.org/HypervisorSupportMatrix\"> hypervisor support matrix</link> for details."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml:25(para)
msgid "For the Baremetal driver to be loaded and function properly, ensure that the following options are set in <filename>/etc/nova/nova.conf</filename> on your <systemitem class=\"service\">nova-compute</systemitem> hosts."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml:35(para)
msgid "Many configuration options are specific to the Baremetal driver. Also, some additional steps are required, such as building the baremetal deploy ramdisk. See the <link href=\"https://wiki.openstack.org/wiki/Baremetal\">main wiki page</link> for details and implementation suggestions."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml:41(para)
msgid "To customize the Baremetal driver, use the configuration option settings documented in <xref linkend=\"config_table_nova_baremetal\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml:7(title)
msgid "LXC (Linux containers)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml:8(para)
msgid "LXC (also known as Linux containers) is a virtualization technology that works at the operating system level. This is different from hardware virtualization, the approach used by other hypervisors such as KVM, Xen, and VMware. LXC (as currently implemented using libvirt in the Compute service) is not a secure virtualization technology for multi-tenant environments (specifically, containers may affect resource quotas for other containers hosted on the same machine). Additional containment technologies, such as AppArmor, may be used to provide better isolation between containers, although this is not the case by default. For all these reasons, the choice of this virtualization technology is not recommended in production."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml:16(para)
msgid "If your compute hosts do not have hardware support for virtualization, LXC will likely provide better performance than QEMU. In addition, if your guests must access specialized hardware, such as GPUs, this might be easier to achieve with LXC than other hypervisors."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml:19(para)
msgid "Some OpenStack Compute features might be missing when running with LXC as the hypervisor. See the <link href=\"http://wiki.openstack.org/HypervisorSupportMatrix\">hypervisor support matrix</link> for details."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml:22(para)
msgid "To enable LXC, ensure the following options are set in <filename>/etc/nova/nova.conf</filename> on all hosts running the <systemitem class=\"service\">nova-compute</systemitem> service.<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml:29(para)
msgid "On Ubuntu, enable LXC support in OpenStack by installing the <literal>nova-compute-lxc</literal> package."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:122(None)
msgid "@@image: '../../common/figures/xenserver_architecture.png'; md5=99792432daf7f0302672fb8f03cb63bb"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:8(title)
msgid "Xen, XAPI, XenServer"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:9(para)
msgid "This section describes XAPI managed hypervisors, and how to use them with OpenStack."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:16(title)
msgid "Xen"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:17(para)
msgid "A hypervisor that provides the fundamental isolation between virtual machines. Xen is open source (GPLv2) and is managed by Xen.org, a cross-industry organization and a Linux Foundation Collaborative project."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:23(para)
msgid "Xen is a component of many different products and projects. The hypervisor itself is very similar across all these projects, but the way that it is managed can be different, which can cause confusion if you're not clear which toolstack you are using. Make sure you know what toolstack you want before you get started."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:33(title)
msgid "XAPI"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:34(para)
msgid "XAPI is one of the toolstacks that could control a Xen based hypervisor. XAPI's role is similar to libvirt's in the KVM world. The API provided by XAPI is called XenAPI. To learn more about the provided interface, look at <link href=\"http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/sdk.html#object_model_overview\"> XenAPI Object Model Overview </link> for definitions of XAPI specific terms such as SR, VDI, VIF and PIF."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:46(para)
msgid "OpenStack has a compute driver which talks to XAPI, therefore all XAPI managed servers could be used with OpenStack."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:52(title)
msgid "XenAPI"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:53(para)
msgid "XenAPI is the API provided by XAPI. This name is also used by the python library that is a client for XAPI."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:60(para)
msgid "An Open Source virtualization software which includes the Xen hypervisor and XAPI for the management. For more information and product downloads, visit <link href=\"http://xenserver.org/\"> xenserver.org </link>."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:71(title)
msgid "Privileged and unprivileged domains"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:72(para)
msgid "A Xen host runs a number of virtual machines, VMs, or domains (the terms are synonymous on Xen). One of these is in charge of running the rest of the system, and is known as domain 0, or dom0. It is the first domain to boot after Xen, and owns the storage and networking hardware, the device drivers, and the primary control software. Any other VM is unprivileged, and is known as a domU or guest. All customer VMs are unprivileged, but you should note that on Xen, the OpenStack Compute service (<systemitem class=\"service\">nova-compute</systemitem>) also runs in a domU. This gives a level of security isolation between the privileged system software and the OpenStack software (much of which is customer-facing). This architecture is described in more detail later."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:89(title)
msgid "Paravirtualized versus hardware virtualized domains"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:90(para)
msgid "A Xen virtual machine can be paravirtualized (PV) or hardware virtualized (HVM). This refers to the interaction between Xen, domain 0, and the guest VM's kernel. PV guests are aware of the fact that they are virtualized and will co-operate with Xen and domain 0; this gives them better performance characteristics. HVM guests are not aware of their environment, and the hardware has to pretend that they are running on an unvirtualized machine. HVM guests do not need to modify the guest operating system, which is essential when running Windows."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:102(para)
msgid "In OpenStack, customer VMs may run in either PV or HVM mode. However, the OpenStack domU (that's the one running <systemitem class=\"service\">nova-compute</systemitem>) must be running in PV mode."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:111(title)
msgid "XenAPI deployment architecture"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:114(para)
msgid "A basic OpenStack deployment on a XAPI-managed server, assuming that the network provider is nova-network, looks like this: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:131(para)
msgid "The hypervisor: Xen"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:136(para)
msgid "Domain 0: runs XAPI and some small pieces from OpenStack, the XAPI plug-ins."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:142(para)
msgid "OpenStack VM: The <systemitem class=\"service\">Compute</systemitem> service runs in a paravirtualized virtual machine, on the host under management. Each host runs a local instance of <systemitem class=\"service\">Compute</systemitem>. It is also running an instance of nova-network."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:153(para)
msgid "OpenStack Compute uses the XenAPI Python library to talk to XAPI, and it uses the Management Network to reach from the OpenStack VM to Domain 0."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:127(para)
msgid "Key things to note: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:165(para)
msgid "The above diagram assumes FlatDHCP networking."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:174(para)
msgid "Management network: RabbitMQ, MySQL, inter-host communication, and compute-XAPI communication. Please note that the VM images are downloaded by the XenAPI plug-ins, so make sure that the OpenStack Image Service is accessible through this network. It usually means binding those services to the management interface."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:186(para)
msgid "Tenant network: controlled by nova-network, this is used for tenant traffic."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:193(para)
msgid "Public network: floating IPs, public API endpoints."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:170(para)
msgid "There are three main OpenStack networks: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:202(para)
msgid "The networks shown here must be connected to the corresponding physical networks within the data center. In the simplest case, three individual physical network cards could be used. It is also possible to use VLANs to separate these networks. Please note, that the selected configuration must be in line with the networking model selected for the cloud. (In case of VLAN networking, the physical channels have to be able to forward the tagged traffic.)"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:161(para)
msgid "Some notes on the networking: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:218(title)
msgid "Further reading"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:224(para)
msgid "Citrix XenServer official documentation: <link href=\"http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/\"> http://docs.vmd.citrix.com/XenServer </link>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:233(para)
msgid "What is Xen? by Xen.org: <link href=\"http://xen.org/files/Marketing/WhatisXen.pdf\"> http://xen.org/files/Marketing/WhatisXen.pdf </link>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:242(para)
msgid "Xen Hypervisor project: <link href=\"http://www.xenproject.org/developers/teams/hypervisor.html\"> http://www.xenproject.org/developers/teams/hypervisor.html </link>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:251(para)
msgid "Xapi project: <link href=\"http://www.xenproject.org/developers/teams/xapi.html\"> http://www.xenproject.org/developers/teams/xapi.html </link>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:260(para)
msgid "Further XenServer and OpenStack information: <link href=\"http://wiki.openstack.org/XenServer\"> http://wiki.openstack.org/XenServer </link>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml:219(para)
msgid "Here are some of the resources available to learn more about Xen: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:6(title)
msgid "Cells"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:8(para)
msgid "<emphasis role=\"italic\">Cells</emphasis> functionality enables you to scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. It supports very large deployments."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:13(para)
msgid "When this functionality is enabled, the hosts in an OpenStack Compute cloud are partitioned into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs a <systemitem class=\"service\">nova-api</systemitem> service, but no <systemitem class=\"service\">nova-compute</systemitem> services. Each child cell should run all of the typical <systemitem class=\"service\">nova-*</systemitem> services in a regular Compute cloud except for <systemitem class=\"service\">nova-api</systemitem>. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:25(para)
msgid "The <systemitem class=\"service\">nova-cells</systemitem> service handles communication between cells and selects cells for new instances. This service is required for every cell. Communication between cells is pluggable, and currently the only option is communication through RPC."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:30(para)
msgid "Cells scheduling is separate from host scheduling. <systemitem class=\"service\">nova-cells</systemitem> first picks a cell. Once a cell is selected and the new build request reaches its <systemitem class=\"service\">nova-cells</systemitem> service, it is sent over to the host scheduler in that cell and the build proceeds as it would have without cells."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:38(para)
msgid "Cell functionality is currently considered experimental."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:42(title)
msgid "Cell configuration options"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:49(option)
msgid "enable"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:51(para)
msgid "Set to <literal>True</literal> to turn on cell functionality. Default is <literal>false</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:57(option)
msgid "name"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:59(para)
msgid "Name of the current cell. Must be unique for each cell."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:64(option)
msgid "capabilities"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:66(para)
msgid "List of arbitrary <literal><replaceable>key</replaceable>=<replaceable>value</replaceable></literal> pairs defining capabilities of the current cell. Values include <literal>hypervisor=xenserver;kvm,os=linux;windows</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:74(option)
msgid "call_timeout"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:76(para)
msgid "How long in seconds to wait for replies from calls between cells."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:81(option) ./doc/config-reference/compute/section_compute-cells.xml:208(option)
msgid "scheduler_filter_classes"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:83(para)
msgid "Filter classes that the cells scheduler should use. By default, uses \"<literal>nova.cells.filters.all_filters</literal>\" to map to all cells filters included with Compute."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:91(option) ./doc/config-reference/compute/section_compute-cells.xml:219(option) ./doc/config-reference/compute/section_compute-scheduler.xml:799(literal) ./doc/config-reference/compute/section_compute-scheduler.xml:913(literal)
msgid "scheduler_weight_classes"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:93(para)
msgid "Weight classes that the scheduler for cells uses. By default, uses <literal>nova.cells.weights.all_weighers</literal> to map to all cells weight algorithms included with Compute."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:100(option) ./doc/config-reference/compute/section_compute-scheduler.xml:777(literal) ./doc/config-reference/compute/section_compute-scheduler.xml:780(option) ./doc/config-reference/compute/section_compute-scheduler.xml:903(literal) ./doc/config-reference/compute/section_compute-scheduler.xml:906(option)
msgid "ram_weight_multiplier"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:102(para)
msgid "Multiplier used to weight RAM. Negative numbers indicate that Compute should stack VMs on one host instead of spreading out new VMs to more hosts in the cell. The default value is 10.0."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:43(para)
msgid "Cells are disabled by default. All cell-related configuration options appear in the <literal>[cells]</literal> section in <filename>nova.conf</filename>. The following cell-related options are currently supported:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:112(title)
msgid "Configure the API (top-level) cell"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:113(para)
msgid "The compute API class must be changed in the API cell so that requests can be proxied through nova-cells down to the correct cell properly. Add the following line to <filename>nova.conf</filename> in the API cell:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:126(title)
msgid "Configure the child cells"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:137(replaceable)
msgid "cell1"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:127(para)
msgid "Add the following lines to <filename>nova.conf</filename> in the child cells, replacing <replaceable>cell1</replaceable> with the name of each cell:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:140(title)
msgid "Configure the database in each cell"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:141(para)
msgid "Before bringing the services online, the database in each cell needs to be configured with information about related cells. In particular, the API cell needs to know about its immediate children, and the child cells must know about their immediate agents. The information needed is the <application>RabbitMQ</application> server credentials for the particular cell."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:148(para)
msgid "Use the <placeholder-1/> command to add this information to the database in each cell:<placeholder-2/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:169(para)
msgid "As an example, assume an API cell named <literal>api</literal> and a child cell named <literal>cell1</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:172(para)
msgid "Within the <literal>api</literal> cell, specify the following RabbitMQ server information:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:179(para)
msgid "Within the <literal>cell1</literal> child cell, specify the following RabbitMQ server information:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:186(para)
msgid "You can run this in the API cell as root:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:190(para)
msgid "Repeat the previous steps for all child cells."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:191(para)
msgid "In the child cell, run the following, as root:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:195(para)
msgid "To customize the Compute cells, use the configuration option settings documented in <xref linkend=\"config_table_nova_cells\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:200(title)
msgid "Cell scheduling configuration"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:201(para)
msgid "To determine the best cell to use to launch a new instance, Compute uses a set of filters and weights defined in the <filename>/etc/nova/nova.conf</filename> file. The following options are available to prioritize cells for scheduling:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:210(para)
msgid "List of filter classes. By default <option>nova.cells.filters.all_filters</option> is specified, which maps to all cells filters included with Compute (see <xref linkend=\"scheduler-filters\"/>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:221(para)
msgid "List of weight classes. By default <option>nova.cells.weights.all_weighers</option> is specified, which maps to all cell weight algorithms included with Compute. The following modules are available:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:229(para)
msgid "<literal>mute_child</literal>. Downgrades the likelihood of child cells being chosen for scheduling requests, which haven't sent capacity or capability updates in a while. Options include <option>mute_weight_multiplier</option> (multiplier for mute children; value should be negative) and <option>mute_weight_value</option> (assigned to mute children; should be a positive value)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:243(para)
msgid "<literal>ram_by_instance_type</literal>. Select cells with the most RAM capacity for the instance type being requested. Because higher weights win, Compute returns the number of available units for the instance type requested. The <option>ram_weight_multiplier</option> option defaults to 10.0 that adds to the weight by a factor of 10. Use a negative number to stack VMs on one host instead of spreading out new VMs to more hosts in the cell."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:258(para)
msgid "<literal>weight_offset</literal>. Allows modifying the database to weight a particular cell. You can use this when you want to disable a cell (for example, '0'), or to set a default cell by making its weight_offset very high (for example, '999999999999999'). The highest weight will be the first cell to be scheduled for launching an instance."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:273(para)
msgid "Additionally, the following options are available for the cell scheduler:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:277(option)
msgid "scheduler_retries"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:279(para)
msgid "Specifies how many times the scheduler tries to launch a new instance when no cells are available (default=10)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:285(option)
msgid "scheduler_retry_delay"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:287(para)
msgid "Specifies the delay (in seconds) between retries (default=2)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:292(para)
msgid "As an admin user, you can also add a filter that directs builds to a particular cell. The <filename>policy.json</filename> file must have a line with <literal>\"cells_scheduler_filter:TargetCellFilter\" : \"is_admin:True\"</literal> to let an admin user specify a scheduler hint to direct a build to a particular cell."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:301(title)
msgid "Optional cell configuration"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:302(para)
msgid "Cells store all inter-cell communication data, including user names and passwords, in the database. Because the cells data is not updated very frequently, use the <option>[cells]cells_config</option> option to specify a JSON file to store cells data. With this configuration, the database is no longer consulted when reloading the cells data. The file must have columns present in the Cell model (excluding common database fields and the <option>id</option> column). You must specify the queue connection information through a <option>transport_url</option> field, instead of <option>username</option>, <option>password</option>, and so on. The <option>transport_url</option> has the following form:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:316(replaceable)
msgid "HOSTNAME"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:316(replaceable)
msgid "VIRTUAL_HOST"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml:317(para)
msgid "The scheme can be either <literal>qpid</literal> or <literal>rabbit</literal>, as shown previously. The following sample shows this optional configuration:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:8(title)
msgid "QEMU"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:9(para)
msgid "From the perspective of the Compute service, the QEMU hypervisor is very similar to the KVM hypervisor. Both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:16(para)
msgid "Running on older hardware that lacks virtualization support."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:20(para)
msgid "Running the Compute service inside of a virtual machine for development or testing purposes, where the hypervisor does not support native virtualization for guests."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:14(para)
msgid "The typical uses cases for QEMU are<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:26(para)
msgid "To enable QEMU, add these settings to <filename>nova.conf</filename>:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:31(para)
msgid "For some operations you may also have to install the <placeholder-1/> utility:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:33(para)
msgid "On Ubuntu: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:36(para)
msgid "On Red Hat Enterprise Linux, Fedora, or CentOS: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:39(para)
msgid "On openSUSE: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:42(para)
msgid "The QEMU hypervisor supports the following virtual machine image formats:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:45(para) ./doc/config-reference/compute/section_hypervisor_kvm.xml:27(para)
msgid "Raw"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:48(para) ./doc/config-reference/compute/section_hypervisor_kvm.xml:30(para)
msgid "QEMU Copy-on-write (qcow2)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:51(para) ./doc/config-reference/compute/section_hypervisor_kvm.xml:36(para)
msgid "VMware virtual machine disk format (vmdk)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:55(title)
msgid "Tips and fixes for QEMU on RHEL"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:56(para)
msgid "If you are testing OpenStack in a virtual machine, you must configure Compute to use qemu without KVM and hardware virtualization. The second command relaxes SELinux rules to allow this mode of operation (<link href=\"https://bugzilla.redhat.com/show_bug.cgi?id=753589\"> https://bugzilla.redhat.com/show_bug.cgi?id=753589</link>). The last two commands here work around a libvirt issue fixed in Red Hat Enterprise Linux 6.4. Nested virtualization will be the much slower TCG variety, and you should provide lots of memory to the top-level guest, because the OpenStack-created guests default to 2GM RAM with no overcommit."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml:65(para)
msgid "The second command, <placeholder-1/>, may take a while."
msgstr ""
#: ./doc/config-reference/compute/section_xapi-resize-setup.xml:8(title)
msgid "Modify dom0 for resize/migration support"
msgstr ""
#: ./doc/config-reference/compute/section_xapi-resize-setup.xml:9(para)
msgid "To resize servers with XenServer you must:"
msgstr ""
#: ./doc/config-reference/compute/section_xapi-resize-setup.xml:12(para)
msgid "Establish a root trust between all hypervisor nodes of your deployment:"
msgstr ""
#: ./doc/config-reference/compute/section_xapi-resize-setup.xml:16(para)
msgid "To do so, generate an ssh key-pair with the <placeholder-1/> command. Ensure that each of your dom0's <filename>authorized_keys</filename> file (located in <filename>/root/.ssh/authorized_keys</filename>) contains the public key fingerprint (located in <filename>/root/.ssh/id_rsa.pub</filename>)."
msgstr ""
#: ./doc/config-reference/compute/section_xapi-resize-setup.xml:26(para)
msgid "Provide a <filename>/images</filename> mount point to the dom0 for your hypervisor:"
msgstr ""
#: ./doc/config-reference/compute/section_xapi-resize-setup.xml:30(para)
msgid "dom0 space is at a premium so creating a directory in dom0 is potentially dangerous and likely to fail especially when you resize large servers. The least you can do is to symlink <filename>/images</filename> to your local storage SR. The following instructions work for an English-based installation of XenServer and in the case of ext3-based SR (with which the resize functionality is known to work correctly)."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:42(None)
msgid "@@image: '../../common/figures/vmware-nova-driver-architecture.jpg'; md5=d95084ce963cffbe3e86307c87d804c1"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:11(title)
msgid "VMware vSphere"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:14(title)
msgid "Introduction"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:15(para)
msgid "OpenStack Compute supports the VMware vSphere product family and enables access to advanced features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS)."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:18(para)
msgid "This section describes how to configure VMware-based virtual machine images for launch. vSphere versions 4.1 and later are supported."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:21(para)
msgid "The VMware vCenter driver enables the <systemitem class=\"service\">nova-compute</systemitem> service to communicate with a VMware vCenter server that manages one or more ESX host clusters. The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can use all vSphere features."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:31(para)
msgid "The following sections describe how to configure the VMware vCenter driver."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:35(title)
msgid "High-level architecture"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:36(para)
msgid "The following diagram shows a high-level view of the VMware driver architecture:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:39(title)
msgid "VMware driver architecture"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:46(para)
msgid "As the figure shows, the OpenStack Compute Scheduler sees three hypervisors that each correspond to a cluster in vCenter. <systemitem class=\"service\">Nova-compute</systemitem> contains the VMware driver. You can run with multiple <systemitem class=\"service\">nova-compute</systemitem> services. While Compute schedules at the granularity of a cluster, the VMware driver inside <systemitem class=\"service\">nova-compute</systemitem> interacts with the vCenter APIs to select an appropriate ESX host within the cluster. Internally, vCenter uses DRS for placement."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:54(para)
msgid "The VMware vCenter driver also interacts with the OpenStack Image Service to copy VMDK images from the Image Service back end store. The dotted line in the figure represents VMDK images being copied from the OpenStack Image Service to the vSphere data store. VMDK images are cached in the data store so the copy operation is only required the first time that the VMDK image is used."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:61(para)
msgid "After OpenStack boots a VM into a vSphere cluster, the VM becomes visible in vCenter and can access vSphere advanced features. At the same time, the VM is visible in the OpenStack dashboard and you can manage it as you would any other OpenStack VM. You can perform advanced vSphere operations in vCenter while you configure OpenStack resources such as VMs through the OpenStack dashboard."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:68(para)
msgid "The figure does not show how networking fits into the architecture. Both <systemitem class=\"service\">nova-network</systemitem> and the OpenStack Networking Service are supported. For details, see <xref linkend=\"VMware_networking\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:74(title)
msgid "Configuration overview"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:75(para)
msgid "To get started with the VMware vCenter driver, complete the following high-level steps:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:79(para)
msgid "Configure vCenter. See <xref linkend=\"vmware-prereqs\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:82(para)
msgid "Configure the VMware vCenter driver in the <filename>nova.conf</filename> file. See <xref linkend=\"VMwareVCDriver_details\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:86(para)
msgid "Load desired VMDK images into the OpenStack Image Service. See <xref linkend=\"VMware_images\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:90(para)
msgid "Configure networking with either <systemitem class=\"service\">nova-network</systemitem> or the OpenStack Networking Service. See <xref linkend=\"VMware_networking\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:96(title)
msgid "Prerequisites and limitations"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:97(para)
msgid "Use the following list to prepare a vSphere environment that runs with the VMware vCenter driver:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:101(para)
msgid "<emphasis role=\"bold\">Copying VMDK files (vSphere 5.1 only).</emphasis> In vSphere 5.1, copying large image files (for example, 12GB and greater) from Glance can take a long time. To improve performance, VMware recommends that you upgrade to VMware vCenter Server 5.1 Update 1 or later. For more information, see the Release Notes."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:109(para)
msgid "<emphasis role=\"bold\">DRS</emphasis>. For any cluster that contains multiple ESX hosts, enable DRS and enable fully automated placement."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:114(para)
msgid "<emphasis role=\"bold\">Shared storage</emphasis>. Only shared storage is supported and data stores must be shared among all hosts in a cluster. It is recommended to remove data stores not intended for OpenStack from clusters being configured for OpenStack."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:121(para)
msgid "<emphasis role=\"bold\">Clusters and data stores</emphasis>. Do not use OpenStack clusters and data stores for other purposes. If you do, OpenStack displays incorrect usage information."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:127(para)
msgid "<emphasis role=\"bold\">Networking</emphasis>. The networking configuration depends on the desired networking model. See <xref linkend=\"VMware_networking\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:132(para)
msgid "<emphasis role=\"bold\">Security groups</emphasis>. If you use the VMware driver with OpenStack Networking and the NSX plug-in, security groups are supported. If you use <systemitem class=\"service\">nova-network</systemitem>, security groups are not supported."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:137(para)
msgid "The NSX plug-in is the only plug-in that is validated for vSphere."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:141(para)
msgid "<emphasis role=\"bold\">VNC</emphasis>. The port range 5900 - 6105 (inclusive) is automatically enabled for VNC connections on every ESX host in all clusters under OpenStack control. For more information about using a VNC client to connect to virtual machine, see <link href=\"http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=1246\">http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=1246</link>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:146(para)
msgid "In addition to the default VNC port numbers (5900 to 6000) specified in the above document, the following ports are also used: 6101, 6102, and 6105."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:149(para)
msgid "You must modify the ESXi firewall configuration to allow the VNC ports. Additionally, for the firewall modifications to persist after a reboot, you must create a custom vSphere Installation Bundle (VIB) which is then installed onto the running ESXi host or added to a custom image profile used to install ESXi hosts. For details about how to create a VIB for persisting the firewall configuration modifications, see <link href=\"http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=2007381\"> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=2007381</link>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:158(para)
msgid "The VIB can be downloaded from <link href=\"https://github.com/openstack-vmwareapi-team/Tools\"> https://github.com/openstack-vmwareapi-team/Tools</link>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:163(para)
msgid "<emphasis role=\"bold\">Ephemeral Disks</emphasis>. Ephemeral disks are not supported. A future major release will address this limitation."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:168(para)
msgid "Injection of SSH keys into compute instances hosted by vCenter is not currently supported."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:173(para)
msgid "To use multiple vCenter installations with OpenStack, each vCenter must be assigned to a separate availability zone. This is required as the OpenStack Block Storage VMDK driver does not currently work across multiple vCenter installations."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:182(title)
msgid "VMware vCenter service account"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:183(para)
msgid "OpenStack integration requires a vCenter service account with the following minimum permissions. Apply the permissions to the <systemitem>Datacenter</systemitem> root object, and select the <guibutton>Propagate to Child Objects</guibutton> option."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:187(caption)
msgid "vCenter permissions tree"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:194(td)
msgid "All Privileges"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:201(td)
msgid "Datastore"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:208(td)
msgid "Allocate space"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:214(td)
msgid "Browse datastore"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:220(td)
msgid "Low level file operation"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:226(td)
msgid "Remove file"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:231(td)
msgid "Folder"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:238(td)
msgid "Create folder"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:243(td)
msgid "Host"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:257(td)
msgid "Maintenance"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:269(td)
msgid "Storage partition configuration"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:273(td)
msgid "Network"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:280(td)
msgid "Assign network"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:285(td)
msgid "Resource"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:292(td)
msgid "Assign virtual machine to resource pool"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:298(td)
msgid "Migrate powered off virtual machine"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:304(td)
msgid "Migrate powered on virtual machine"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:309(td)
msgid "Virtual Machine"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:323(td)
msgid "Add existing disk"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:329(td)
msgid "Add new disk"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:335(td)
msgid "Add or remove device"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:341(td)
msgid "Advanced"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:347(td)
msgid "CPU count"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:353(td)
msgid "Disk change tracking"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:359(td)
msgid "Host USB device"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:365(td)
msgid "Memory"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:371(td)
msgid "Raw device"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:377(td)
msgid "Remove disk"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:383(td)
msgid "Rename"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:389(td)
msgid "Swapfile placement"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:394(td)
msgid "Interaction"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:401(td)
msgid "Configure CD media"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:407(td)
msgid "Power Off"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:413(td)
msgid "Power On"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:419(td)
msgid "Reset"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:425(td)
msgid "Suspend"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:430(td)
msgid "Inventory"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:437(td)
msgid "Create from existing"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:443(td)
msgid "Create new"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:449(td)
msgid "Move"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:455(td)
msgid "Remove"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:461(td)
msgid "Unregister"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:466(td)
msgid "Provisioning"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:473(td)
msgid "Clone virtual machine"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:479(td)
msgid "Customize"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:484(td)
msgid "Sessions"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:491(td)
msgid "Validate session"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:497(td)
msgid "View and stop sessions"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:502(td)
msgid "Snapshot management"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:509(td)
msgid "Create snapshot"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:515(td)
msgid "Remove snapshot"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:519(td)
msgid "vApp"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:526(td)
msgid "Export"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:532(td)
msgid "Import"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:539(title)
msgid "VMware vCenter driver"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:540(para)
msgid "Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute with vCenter. This recommended configuration enables access through vCenter to advanced vSphere features like vMotion, High Availability, and Dynamic Resource Scheduling (DRS)."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:546(title)
msgid "VMwareVCDriver configuration options"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:547(para)
msgid "When you use the VMwareVCDriver (vCenter versions 5.1 and later) with OpenStack Compute, add the following VMware-specific configuration options to the <filename>nova.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:563(para)
msgid "vSphere vCenter versions 5.0 and earlier: You must specify the location of the WSDL files by adding the <code>wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl</code> setting to the above configuration. For more information, see <link linkend=\"VMware_additional_config\">vSphere 5.0 and earlier additional set up</link>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:572(para)
msgid "Clusters: The vCenter driver can support multiple clusters. To use more than one cluster, simply add multiple <option>cluster_name</option> lines in <filename>nova.conf</filename> with the appropriate cluster name. Clusters and data stores used by the vCenter driver should not contain any VMs other than those created by the driver."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:581(para)
msgid "Data stores: The <option>datastore_regex</option> setting specifies the data stores to use with Compute. For example, <option>datastore_regex=\"nas.*\"</option> selects all the data stores that have a name starting with \"nas\". If this line is omitted, Compute uses the first data store returned by the vSphere API. It is recommended not to use this field and instead remove data stores that are not intended for OpenStack."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:591(para)
msgid "Reserved host memory: The <option>reserved_host_memory_mb</option> option value is 512MB by default. However, VMware recommends that you set this option to 0MB because the vCenter driver reports the effective memory available to the virtual machines."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:600(para)
msgid "A <systemitem class=\"service\">nova-compute</systemitem> service can control one or more clusters containing multiple ESX hosts, making <systemitem class=\"service\">nova-compute</systemitem> a critical service from a high availability perspective. Because the host that runs <systemitem class=\"service\">nova-compute</systemitem> can fail while the vCenter and ESX still run, you must protect the <systemitem class=\"service\">nova-compute</systemitem> service against host failures."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:609(para)
msgid "Many <filename>nova.conf</filename> options are relevant to libvirt but do not apply to this driver."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:612(para)
msgid "You must complete additional configuration for environments that use vSphere 5.0 and earlier. See <xref linkend=\"VMware_additional_config\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:617(title)
msgid "Images with VMware vSphere"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:618(para)
msgid "The vCenter driver supports images in the VMDK format. Disks in this format can be obtained from VMware Fusion or from an ESX environment. It is also possible to convert other formats, such as qcow2, to the VMDK format using the <option>qemu-img</option> utility. After a VMDK disk is available, load it into the OpenStack Image Service. Then, you can use it with the VMware vCenter driver. The following sections provide additional details on the supported disks and the commands used for conversion and upload."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:628(title)
msgid "Supported image types"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:629(para)
msgid "Upload images to the OpenStack Image Service in VMDK format. The following VMDK disk types are supported:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:633(para)
msgid "<emphasis role=\"italic\">VMFS Flat Disks</emphasis> (includes thin, thick, zeroedthick, and eagerzeroedthick). Note that once a VMFS thin disk is exported from VMFS to a non-VMFS location, like the OpenStack Image Service, it becomes a preallocated flat disk. This impacts the transfer time from the OpenStack Image Service to the data store when the full preallocated flat disk, rather than the thin disk, must be transferred."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:643(para)
msgid "<emphasis role=\"italic\">Monolithic Sparse disks</emphasis>. Sparse disks get imported from the OpenStack Image Service into ESX as thin provisioned disks. Monolithic Sparse disks can be obtained from VMware Fusion or can be created by converting from other virtual disk formats using the <code>qemu-img</code> utility."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:652(para)
msgid "The following table shows the <option>vmware_disktype</option> property that applies to each of the supported VMDK disk types:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:656(caption)
msgid "OpenStack Image Service disk type settings"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:659(th)
msgid "vmware_disktype property"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:665(td)
msgid "sparse"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:667(para)
msgid "Monolithic Sparse"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:673(para)
msgid "VMFS flat, thin provisioned"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:677(td)
msgid "preallocated (default)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:679(para)
msgid "VMFS flat, thick/zeroedthick/eagerzeroedthick"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:685(para)
msgid "The <option>vmware_disktype</option> property is set when an image is loaded into the OpenStack Image Service. For example, the following command creates a Monolithic Sparse image by setting <option>vmware_disktype</option> to <literal>sparse</literal>:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:694(para)
msgid "Specifying <literal>thin</literal> does not provide any advantage over <literal>preallocated</literal> with the current version of the driver. Future versions might restore the thin properties of the disk after it is downloaded to a vSphere data store."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:701(title)
msgid "Convert and load images"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:702(para)
msgid "Using the <code>qemu-img</code> utility, disk images in several formats (such as, qcow2) can be converted to the VMDK format."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:705(para)
msgid "For example, the following command can be used to convert a <link href=\"http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img\">qcow2 Ubuntu Trusty cloud image</link>:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:710(para)
msgid "VMDK disks converted through <code>qemu-img</code> are <emphasis role=\"italic\">always</emphasis> monolithic sparse VMDK disks with an IDE adapter type. Using the previous example of the Ubuntu Trusty image after the <code>qemu-img</code> conversion, the command to upload the VMDK disk should be something like:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:721(para)
msgid "Note that the <option>vmware_disktype</option> is set to <emphasis role=\"italic\">sparse</emphasis> and the <code>vmware_adaptertype</code> is set to <emphasis role=\"italic\">ide</emphasis> in the previous command."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:724(para)
msgid "If the image did not come from the <code>qemu-img</code> utility, the <code>vmware_disktype</code> and <code>vmware_adaptertype</code> might be different. To determine the image adapter type from an image file, use the following command and look for the <option>ddb.adapterType=</option> line:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:731(para)
msgid "Assuming a preallocated disk type and an iSCSI lsiLogic adapter type, the following command uploads the VMDK disk:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:739(para)
msgid "Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a virtual SCSI controller and likewise disks with one of the SCSI adapter types (such as, busLogic, lsiLogic) cannot be attached to the IDE controller. Therefore, as the previous examples show, it is important to set the <option>vmware_adaptertype</option> property correctly. The default adapter type is lsiLogic, which is SCSI, so you can omit the <parameter>vmware_adaptertype</parameter> property if you are certain that the image adapter type is lsiLogic."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:751(title)
msgid "Tag VMware images"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:752(para)
msgid "In a mixed hypervisor environment, OpenStack Compute uses the <option>hypervisor_type</option> tag to match images to the correct hypervisor type. For VMware images, set the hypervisor type to <literal>vmware</literal>. Other valid hypervisor types include: <literal>xen</literal>, <literal>qemu</literal>, <literal>lxc</literal>, <literal>uml</literal>, and <literal>hyperv</literal>. Note that <literal>qemu</literal> is used for both QEMU and KVM hypervisor types."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:769(title)
msgid "Optimize images"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:770(para)
msgid "Monolithic Sparse disks are considerably faster to download but have the overhead of an additional conversion step. When imported into ESX, sparse disks get converted to VMFS flat thin provisioned disks. The download and conversion steps only affect the first launched instance that uses the sparse disk image. The converted disk image is cached, so subsequent instances that use this disk image can simply use the cached version."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:778(para)
msgid "To avoid the conversion step (at the cost of longer download times) consider converting sparse disks to thin provisioned or preallocated disks before loading them into the OpenStack Image Service."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:782(para)
msgid "Use one of the following tools to pre-convert sparse disks."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:786(emphasis)
msgid "vSphere CLI tools"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:789(para)
msgid "Sometimes called the remote CLI or rCLI."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:790(para)
msgid "Assuming that the sparse disk is made available on a data store accessible by an ESX host, the following command converts it to preallocated format:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:794(para)
msgid "Note that the vifs tool from the same CLI package can be used to upload the disk to be converted. The vifs tool can also be used to download the converted disk if necessary."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:800(emphasis)
msgid "vmkfstools directly on the ESX host"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:802(para)
msgid "If the SSH service is enabled on an ESX host, the sparse disk can be uploaded to the ESX data store through scp and the vmkfstools local to the ESX host can use used to perform the conversion. After you log in to the host through ssh, run this command:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:809(emphasis)
msgid "vmware-vdiskmanager"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:810(para)
msgid "<code>vmware-vdiskmanager</code> is a utility that comes bundled with VMware Fusion and VMware Workstation. The following example converts a sparse disk to preallocated format:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:817(para)
msgid "In the previous cases, the converted vmdk is actually a pair of files:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:819(para)
msgid "The descriptor file <emphasis role=\"italic\">converted.vmdk</emphasis>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:823(para)
msgid "The actual virtual disk data file <emphasis role=\"italic\">converted-flat.vmdk</emphasis>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:826(para)
msgid "The file to be uploaded to the OpenStack Image Service is <emphasis role=\"italic\">converted-flat.vmdk</emphasis>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:831(title)
msgid "Image handling"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:832(para)
msgid "The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. As a result, the vCenter OpenStack Compute driver must download the VMDK via HTTP from the OpenStack Image Service to a data store that is visible to the hypervisor. To optimize this process, the first time a VMDK file is used, it gets cached in the data store. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy the file again from the OpenStack Image Service."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:841(para)
msgid "Even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared data store. To avoid this copy, boot the image in linked_clone mode. To learn how to enable this mode, see <xref linkend=\"VMware_config\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:845(para)
msgid "You can also use the <code>vmware_linked_clone</code> property in the OpenStack Image Service to override the linked_clone mode on a per-image basis."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:849(para)
msgid "You can automatically purge unused images after a specified period of time. To configure this action, set these options in the <literal>DEFAULT</literal> section in the <filename>nova.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:854(parameter)
msgid "remove_unused_base_images"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:856(para)
msgid "Set this parameter to <placeholder-1/> to specify that unused images should be removed after the duration specified in the <parameter>remove_unused_original_minimum_age_seconds</parameter> parameter. The default is <placeholder-2/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:862(parameter)
msgid "remove_unused_original_minimum_age_seconds"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:863(para)
msgid "Specifies the duration in seconds after which an unused image is purged from the cache. The default is <placeholder-1/> (24 hours)."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:871(title)
msgid "Networking with VMware vSphere"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:872(para)
msgid "The VMware driver supports networking with the <systemitem class=\"service\">nova-network</systemitem> service or the OpenStack Networking Service. Depending on your installation, complete these configuration steps before you provision VMs:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:878(para)
msgid "<emphasis role=\"bold\"> The <systemitem class=\"service\">nova-network</systemitem> service with the FlatManager or FlatDHCPManager</emphasis>. Create a port group with the same name as the <literal>flat_network_bridge</literal> value in the <filename>nova.conf</filename> file. The default value is <literal>br100</literal>. If you specify another value, the new value must be a valid Linux bridge identifier that adheres to Linux bridge naming conventions."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:887(para)
msgid "All VM NICs are attached to this port group."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:888(para)
msgid "Ensure that the flat interface of the node that runs the <systemitem class=\"service\">nova-network</systemitem> service has a path to this network."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:892(para)
msgid "When configuring the port binding for this port group in vCenter, specify <literal>ephemeral</literal> for the port binding type. For more information, see <link href=\"http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=1022312\">Choosing a port binding type in ESX/ESXi</link> in the VMware Knowledge Base."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:900(para)
msgid "<emphasis role=\"bold\">The <systemitem class=\"service\">nova-network</systemitem> service with the VlanManager</emphasis>. Set the <literal>vlan_interface</literal> configuration option to match the ESX host interface that handles VLAN-tagged VM traffic."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:905(para)
msgid "OpenStack Compute automatically creates the corresponding port groups."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:909(para)
msgid "If you are using the OpenStack Networking Service: Before provisioning VMs, create a port group with the same name as the <literal>vmware.integration_bridge</literal> value in <filename>nova.conf</filename> (default is <literal>br-int</literal>). All VM NICs are attached to this port group for management by the OpenStack Networking plug-in."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:920(title)
msgid "Volumes with VMware vSphere"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:921(para)
msgid "The VMware driver supports attaching volumes from the OpenStack Block Storage service. The VMware VMDK driver for OpenStack Block Storage is recommended and should be used for managing volumes based on vSphere data stores. For more information about the VMware VMDK driver, see <link href=\"http://docs.openstack.org/trunk/config-reference/content/vmware-vmdk-driver.html\">VMware VMDK Driver</link>. Also an iSCSI volume driver provides limited support and can be used only for attachments."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:930(title)
msgid "vSphere 5.0 and earlier additional set up"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:931(para)
msgid "Users of vSphere 5.0 or earlier must host their WSDL files locally. These steps are applicable for vCenter 5.0 or ESXi 5.0 and you can either mirror the WSDL from the vCenter or ESXi server that you intend to use or you can download the SDK directly from VMware. These workaround steps fix a <link href=\"http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&amp;externalId=2010507\">known issue</link> with the WSDL that was resolved in later versions."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:937(para)
msgid "When setting the VMwareVCDriver configuration options, you must include the <code>wsdl_location</code> option. For more information, see <link linkend=\"VMwareVCDriver_configuration_options\">VMwareVCDriver configuration options</link> above."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:943(title)
msgid "To mirror WSDL from vCenter (or ESXi)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:945(para)
msgid "Set the <code>VMWAREAPI_IP</code> shell variable to the IP address for your vCenter or ESXi host from where you plan to mirror files. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:951(para)
msgid "Create a local file system directory to hold the WSDL files:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:956(para)
msgid "Change into the new directory. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:960(para)
msgid "Use your OS-specific tools to install a command-line tool that can download files like <placeholder-1/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:965(para)
msgid "Download the files to the local file cache:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:975(para)
msgid "Because the <filename>reflect-types.xsd</filename> and <filename>reflect-messagetypes.xsd</filename> files do not fetch properly, you must stub out these files. Use the following XML listing to replace the missing file content. The XML parser underneath Python can be very particular and if you put a space in the wrong place, it can break the parser. Copy the following contents and formatting carefully."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:992(para)
msgid "Now that the files are locally present, tell the driver to look for the SOAP service WSDLs in the local file system and not on the remote vSphere server. Add the following setting to the <filename>nova.conf</filename> file for your <systemitem class=\"service\">nova-compute</systemitem> node:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:1002(para)
msgid "Alternatively, download the version appropriate SDK from <link href=\"http://www.vmware.com/support/developer/vc-sdk/\">http://www.vmware.com/support/developer/vc-sdk/</link> and copy it to the <filename>/opt/stack/vmware</filename> file. Make sure that the WSDL is available, in for example <filename>/opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl</filename>. You must point <filename>nova.conf</filename> to fetch this WSDL file from the local file system by using a URL."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:1009(para)
msgid "When using the VMwareVCDriver (vCenter) with OpenStack Compute with vSphere version 5.0 or earlier, <filename>nova.conf</filename> must include the following extra config option:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:1017(title) ./doc/config-reference/compute/section_compute-scheduler.xml:943(title)
msgid "Configuration reference"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml:1018(para)
msgid "To customize the VMware driver, use the configuration option settings documented in <xref linkend=\"config_table_nova_vmware\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-options-reference.xml:7(title)
msgid "Compute sample configuration files"
msgstr ""
#: ./doc/config-reference/compute/section_compute-options-reference.xml:9(title)
msgid "nova.conf - configuration options"
msgstr ""
#: ./doc/config-reference/compute/section_compute-options-reference.xml:10(para)
msgid "For a complete list of all available configuration options for each OpenStack Compute service, run bin/nova-&lt;servicename&gt; --help."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:9(para)
msgid "KVM is configured as the default hypervisor for Compute."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:11(para)
msgid "This document contains several sections about hypervisor selection. If you are reading this document linearly, you do not want to load the KVM module before you install <systemitem class=\"service\">nova-compute</systemitem>. The <systemitem class=\"service\">nova-compute</systemitem> service depends on qemu-kvm, which installs <filename>/lib/udev/rules.d/45-qemu-kvm.rules</filename>, which sets the correct permissions on the /dev/kvm device node."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:18(para)
msgid "To enable KVM explicitly, add the following configuration options to the <filename>/etc/nova/nova.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:24(para)
msgid "The KVM hypervisor supports the following virtual machine image formats:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:33(para)
msgid "QED Qemu Enhanced Disk"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:39(para)
msgid "This section describes how to enable KVM on your system. For more information, see the following distribution-specific documentation:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:43(para)
msgid "<link href=\"http://fedoraproject.org/wiki/Getting_started_with_virtualization\">Fedora: Getting started with virtualization</link> from the Fedora project wiki."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:49(para)
msgid "<link href=\"https://help.ubuntu.com/community/KVM/Installation\">Ubuntu: KVM/Installation</link> from the Community Ubuntu documentation."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:53(para)
msgid "<link href=\"http://static.debian-handbook.info/browse/stable/sect.virtualization.html#idp11279352\">Debian: Virtualization with KVM</link> from the Debian handbook."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:58(para)
msgid "<link href=\"http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/sect-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Host_Installation-Installing_KVM_packages_on_an_existing_Red_Hat_Enterprise_Linux_system.html\">Red Hat Enterprise Linux: Installing virtualization packages on an existing Red Hat Enterprise Linux system</link> from the <citetitle>Red Hat Enterprise Linux Virtualization Host Configuration and Guest Installation Guide</citetitle>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:66(para)
msgid "<link href=\"http://doc.opensuse.org/documentation/html/openSUSE/opensuse-kvm/cha.kvm.requires.html#sec.kvm.requires.install\">openSUSE: Installing KVM</link> from the openSUSE Virtualization with KVM manual."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:72(para)
msgid "<link href=\"http://doc.opensuse.org/products/draft/SLES/SLES-kvm_sd_draft/cha.kvm.requires.html#sec.kvm.requires.install\">SLES: Installing KVM</link> from the SUSE Linux Enterprise Server Virtualization with KVM manual."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:82(title)
msgid "Specify the CPU model of KVM guests"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:83(para)
msgid "The Compute service enables you to control the guest CPU model that is exposed to KVM virtual machines. Use cases include:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:87(para)
msgid "To maximize performance of virtual machines by exposing new host CPU features to the guest"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:91(para)
msgid "To ensure a consistent default CPU across all machines, removing reliance of variable QEMU defaults"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:95(para)
msgid "In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard CPU model names. These models are defined in the <filename>/usr/share/libvirt/cpu_map.xml</filename> file. Check this file to determine which models are supported by your local installation."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:101(para)
msgid "Two Compute configuration options in the <literal>[libvirt]</literal> group of <filename>nova.conf</filename> define which type of CPU model is exposed to the hypervisor when using KVM: <literal>cpu_mode</literal> and <literal>cpu_model</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:105(para)
msgid "The <literal>cpu_mode</literal> option can take one of the following values: <literal>none</literal>, <literal>host-passthrough</literal>, <literal>host-model</literal>, and <literal>custom</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:109(title)
msgid "Host model (default for KVM &amp; QEMU)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:110(para)
msgid "If your <filename>nova.conf</filename> file contains <literal>cpu_mode=host-model</literal>, libvirt identifies the CPU model in <filename>/usr/share/libvirt/cpu_map.xml</filename> file that most closely matches the host, and requests additional CPU flags to complete the match. This configuration provides the maximum functionality and performance and maintains good reliability and compatibility if the guest is migrated to another host with slightly different host CPUs."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:119(title)
msgid "Host pass through"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:120(para)
msgid "If your <filename>nova.conf</filename> file contains <literal>cpu_mode=host-passthrough</literal>, libvirt tells KVM to pass through the host CPU with no modifications. The difference to host-model, instead of just matching feature flags, every last detail of the host CPU is matched. This gives the best performance, and can be important to some apps which check low level CPU details, but it comes at a cost with respect to migration. The guest can only be migrated to a matching host CPU."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:129(title)
msgid "Custom"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:130(para)
msgid "If your <filename>nova.conf</filename> file contains <literal>cpu_mode=custom</literal>, you can explicitly specify one of the supported named models using the cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your <filename>nova.conf</filename> file should contain:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:140(title)
msgid "None (default for all libvirt-driven hypervisors other than KVM &amp; QEMU)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:142(para)
msgid "If your <filename>nova.conf</filename> file contains <literal>cpu_mode=none</literal>, libvirt does not specify a CPU model. Instead, the hypervisor chooses the default model."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:148(title)
msgid "Guest agent support"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:149(para)
msgid "Use guest agents to enable optional access between compute nodes and guests through a socket, using the QMP protocol."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:151(para)
msgid "To enable this feature, you must set <literal>hw_qemu_guest_agent=yes</literal> as a metadata parameter on the image you wish to use to create the guest-agent-capable instances from. You can explicitly disable the feature by setting <literal>hw_qemu_guest_agent=no</literal> in the image metadata."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:157(title)
msgid "KVM performance tweaks"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:158(para)
msgid "The <link href=\"http://www.linux-kvm.org/page/VhostNet\">VHostNet</link> kernel module improves network performance. To load the kernel module, run the following command as root:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:164(title)
msgid "Troubleshoot KVM"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:165(para)
msgid "Trying to launch a new virtual machine instance fails with the <literal>ERROR</literal>state, and the following error appears in the <filename>/var/log/nova/nova-compute.log</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:169(para)
msgid "This message indicates that the KVM kernel modules were not loaded."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:170(para)
msgid "If you cannot start VMs after installation without rebooting, the permissions might not be set correctly. This can happen if you load the KVM module before you install <systemitem class=\"service\">nova-compute</systemitem>. To check whether the group is set to <systemitem>kvm</systemitem>, run:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml:175(para)
msgid "If it is not set to <systemitem>kvm</systemitem>, run:"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:7(title)
msgid "Compute log files"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:8(para)
msgid "The corresponding log file of each Compute service is stored in the <filename>/var/log/nova/</filename> directory of the host on which each service runs."
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:12(caption)
msgid "Log files used by Compute services"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:21(td)
msgid "Service name (CentOS/Fedora/openSUSE/Red Hat Enterprise Linux/SUSE Linux Enterprise)"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:25(td)
msgid "Service name (Ubuntu/Debian)"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:35(td)
msgid "openstack-nova-api"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:44(filename)
msgid "cert.log"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:46(para)
msgid "The X509 certificate service (<systemitem>openstack-nova-cert</systemitem>/<systemitem>nova-cert</systemitem>) is only required by the EC2 API to the Compute service."
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:52(systemitem)
msgid "openstack-nova-cert"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:55(systemitem)
msgid "nova-cert"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:60(filename)
msgid "compute.log"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:63(systemitem)
msgid "openstack-nova-compute"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:71(filename)
msgid "conductor.log"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:74(systemitem)
msgid "openstack-nova-conductor"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:77(systemitem)
msgid "nova-conductor"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:82(filename)
msgid "consoleauth.log"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:85(systemitem)
msgid "openstack-nova-consoleauth"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:88(systemitem)
msgid "nova-consoleauth"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:93(filename)
msgid "network.log"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:94(para)
msgid "The <systemitem>nova</systemitem> network service (<systemitem>openstack-nova-network</systemitem>/<systemitem>nova-network</systemitem>) only runs in deployments that are not configured to use the Networking service (<systemitem>neutron</systemitem>)."
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:101(systemitem)
msgid "openstack-nova-network"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:104(systemitem)
msgid "nova-network"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:109(filename)
msgid "nova-manage.log"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:112(systemitem) ./doc/config-reference/compute/section_nova-log-files.xml:115(systemitem)
msgid "nova-manage"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:123(systemitem)
msgid "openstack-nova-scheduler"
msgstr ""
#: ./doc/config-reference/compute/section_nova-log-files.xml:126(systemitem)
msgid "nova-scheduler"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_compute-scheduler.xml:125(None)
msgid "@@image: '../../common/figures/filteringWorkflow1.png'; md5=c144af5cbdee1bd17a7bde0bea5b5fe7"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_compute-scheduler.xml:753(None)
msgid "@@image: '../../common/figures/nova-weighting-hosts.png'; md5=000eab4cf0deb1da2e692e023065a6ae"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:12(title)
msgid "Scheduling"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:13(para)
msgid "Compute uses the <systemitem class=\"service\">nova-scheduler</systemitem> service to determine how to dispatch compute and volume requests. For example, the <systemitem class=\"service\">nova-scheduler</systemitem> service determines on which host a VM should launch. In the context of filters, the term <firstterm>host</firstterm> means a physical node that has a <systemitem class=\"service\">nova-compute</systemitem> service running on it. You can configure the scheduler through a variety of options."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:22(para)
msgid "Compute is configured with the following default scheduler options in the <filename>/etc/nova/nova.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:30(para)
msgid "By default, the <parameter>scheduler_driver</parameter> is configured as a filter scheduler, as described in the next section. In the default configuration, this scheduler considers hosts that meet all the following criteria:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:36(para)
msgid "Have not been attempted for scheduling purposes (<literal>RetryFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:40(para)
msgid "Are in the requested availability zone (<literal>AvailabilityZoneFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:44(para)
msgid "Have sufficient RAM available (<literal>RamFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:48(para)
msgid "Can service the request (<literal>ComputeFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:52(para)
msgid "Satisfy the extra specs associated with the instance type (<literal>ComputeCapabilitiesFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:57(para)
msgid "Satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties (<literal>ImagePropertiesFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:63(para)
msgid "Are on a different host than other instances of a group (if requested) (<literal>ServerGroupAntiAffinityFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:69(para)
msgid "Are in a set of group hosts (if requested) (<literal>ServerGroupAffinityFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:74(para)
msgid "The scheduler caches its list of available hosts; use the <option>scheduler_driver_task_period</option> option to specify how often the list is updated."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:78(para)
msgid "Do not configure <option>service_down_time</option> to be much smaller than <option>scheduler_driver_task_period</option>; otherwise, hosts appear to be dead while the host list is being cached."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:84(para)
msgid "For information about the volume scheduler, see the Block Storage section of <link href=\"http://docs.openstack.org/admin-guide-cloud/content/managing-volumes.html\"><citetitle>OpenStack Cloud Administrator Guide</citetitle></link>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:89(para)
msgid "The scheduler chooses a new host when an instance is migrated."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:91(para)
msgid "When evacuating instances from a host, the scheduler service does not pick the next host. Instances are evacuated to the host explicitly defined by the administrator. For information about instance evacuation, see <link href=\"http://docs.openstack.org/admin-guide-cloud/content/nova_cli_evacuate.html\">Evacuate instances</link> section of the <citetitle>OpenStack Cloud Administrator Guide</citetitle>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:100(title)
msgid "Filter scheduler"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:101(para)
msgid "The filter scheduler (<literal>nova.scheduler.filter_scheduler.FilterScheduler</literal>) is the default scheduler for scheduling virtual machine instances. It supports filtering and weighting to make informed decisions on where a new instance should be created."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:110(title)
msgid "Filters"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:111(para)
msgid "When the filter scheduler receives a request for a resource, it first applies filters to determine which hosts are eligible for consideration when dispatching a resource. Filters are binary: either a host is accepted by the filter, or it is rejected. Hosts that are accepted by the filter are then processed by a different algorithm to decide which hosts to use for that request, described in the <link linkend=\"weights\">Weights</link> section."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:120(title)
msgid "Filtering"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:129(para)
msgid "The <option>scheduler_available_filters</option> configuration option in <filename>nova.conf</filename> provides the Compute service with the list of the filters that are used by the scheduler. The default setting specifies all of the filter that are included with the Compute service:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:136(para)
msgid "This configuration option can be specified multiple times. For example, if you implemented your own custom filter in Python called <literal>myfilter.MyFilter</literal> and you wanted to use both the built-in filters and your custom filter, your <filename>nova.conf</filename> file would contain:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:145(para)
msgid "The <literal>scheduler_default_filters</literal> configuration option in <filename>nova.conf</filename> defines the list of filters that are applied by the <systemitem class=\"service\">nova-scheduler</systemitem> service. The default filters are:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:152(para)
msgid "The following sections describe the available filters."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:155(title)
msgid "AggregateCoreFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:156(para)
msgid "Filters host by CPU core numbers with a per-aggregate <literal>cpu_allocation_ratio</literal> value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see <xref linkend=\"host-aggregates\"/>. See also <xref linkend=\"corefilter\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:168(title)
msgid "AggregateDiskFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:169(para)
msgid "Filters host by disk allocation with a per-aggregate <literal>disk_allocation_ratio</literal> value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see <xref linkend=\"host-aggregates\"/>. See also <xref linkend=\"diskfilter\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:181(title)
msgid "AggregateImagePropertiesIsolation"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:182(para)
msgid "Matches properties defined in an image's metadata against those of aggregates to determine host matches:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:187(para)
msgid "If a host belongs to an aggregate and the aggregate defines one or more metadata that matches an image's properties, that host is a candidate to boot the image's instance."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:193(para)
msgid "If a host does not belong to any aggregate, it can boot instances from all images."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:197(para)
msgid "For example, the following aggregate <systemitem>myWinAgg</systemitem> has the Windows operating system as metadata (named 'windows'):"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:206(para)
msgid "In this example, because the following Win-2012 image has the <property>windows</property> property, it boots on the <systemitem>sf-devel</systemitem> host (all other filters being equal):"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:219(para)
msgid "You can configure the <systemitem>AggregateImagePropertiesIsolation</systemitem> filter by using the following options in the <filename>nova.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:230(title)
msgid "AggregateInstanceExtraSpecsFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:231(para)
msgid "Matches properties defined in extra specs for an instance type against admin-defined properties on a host aggregate. Works with specifications that are scoped with <literal>aggregate_instance_extra_specs</literal>. For backward compatibility, also works with non-scoped specifications; this action is highly discouraged because it conflicts with <link linkend=\"computecapabilitiesfilter\"> ComputeCapabilitiesFilter</link> filter when you enable both filters. For information about how to use this filter, see the <link linkend=\"host-aggregates\">host aggregates</link> section."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:245(title)
msgid "AggregateIoOpsFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:246(para)
msgid "Filters host by disk allocation with a per-aggregate <literal>max_io_ops_per_host</literal> value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see <xref linkend=\"host-aggregates\"/>. See also <xref linkend=\"ioopsfilter\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:258(title)
msgid "AggregateMultiTenancyIsolation"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:259(para)
msgid "Isolates tenants to specific <link linkend=\"host-aggregates\">host aggregates</link>. If a host is in an aggregate that has the <literal>filter_tenant_id</literal> metadata key, the host creates instances from only that tenant or list of tenants. A host can be in different aggregates. If a host does not belong to an aggregate with the metadata key, the host can create instances from all tenants."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:270(title)
msgid "AggregateNumInstancesFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:271(para)
msgid "Filters host by number of instances with a per-aggregate <literal>max_instances_per_host</literal> value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value will be used. For information about how to use this filter, see <xref linkend=\"host-aggregates\"/>. See also <xref linkend=\"numinstancesfilter\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:284(title)
msgid "AggregateRamFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:285(para)
msgid "Filters host by RAM allocation of instances with a per-aggregate <literal>ram_allocation_ratio</literal> value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value will be used. For information about how to use this filter, see <xref linkend=\"host-aggregates\"/>. See also <xref linkend=\"ramfilter\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:298(title)
msgid "AggregateTypeAffinityFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:299(para)
msgid "Filters host by per-aggregate <literal>instance_type</literal> value. For information about how to use this filter, see <xref linkend=\"host-aggregates\"/>. See also <xref linkend=\"typeaffinityfilter\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:308(title)
msgid "AllHostsFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:309(para)
msgid "This is a no-op filter. It does not eliminate any of the available hosts."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:313(title)
msgid "AvailabilityZoneFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:314(para)
msgid "Filters hosts by availability zone. You must enable this filter for the scheduler to respect availability zones in requests."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:319(title)
msgid "ComputeCapabilitiesFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:320(para)
msgid "Matches properties defined in extra specs for an instance type against compute capabilities."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:322(para)
msgid "If an extra specs key contains a colon (<literal>:</literal>), anything before the colon is treated as a namespace and anything after the colon is treated as the key to be matched. If a namespace is present and is not <literal>capabilities</literal>, the filter ignores the namespace. For backward compatibility, also treats the extra specs key as the key to be matched if no namespace is present; this action is highly discouraged because it conflicts with <link linkend=\"aggregate-instanceextraspecsfilter\"> AggregateInstanceExtraSpecsFilter</link> filter when you enable both filters."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:336(title)
msgid "ComputeFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:337(para)
msgid "Passes all hosts that are operational and enabled."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:339(para)
msgid "In general, you should always enable this filter."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:342(title)
msgid "CoreFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:343(para)
msgid "Only schedules instances on hosts if sufficient CPU cores are available. If this filter is not set, the scheduler might over-provision a host based on cores. For example, the virtual cores running on an instance may exceed the physical cores."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:348(para)
msgid "You can configure this filter to enable a fixed amount of vCPU overcommitment by using the <option>cpu_allocation_ratio</option> configuration option in <filename>nova.conf</filename>. The default setting is:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:355(para)
msgid "With this setting, if 8 vCPUs are on a node, the scheduler allows instances up to 128 vCPU to be run on that node."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:358(para)
msgid "To disallow vCPU overcommitment set:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:361(para)
msgid "The Compute API always returns the actual number of CPU cores available on a compute node regardless of the value of the <option>cpu_allocation_ratio</option> configuration key. As a result changes to the <option>cpu_allocation_ratio</option> are not reflected via the command line clients or the dashboard. Changes to this configuration key are only taken into account internally in the scheduler."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:374(title)
msgid "DifferentHostFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:375(para)
msgid "Schedules the instance on a different host from a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using <literal>different_host</literal> as the key and a list of instance UUIDs as the value. This filter is the opposite of the <literal>SameHostFilter</literal>. Using the <placeholder-1/> command-line tool, use the <literal>--hint</literal> flag. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:387(para)
msgid "With the API, use the <literal>os:scheduler_hints</literal> key. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:393(title)
msgid "DiskFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:394(para)
msgid "Only schedules instances on hosts if there is sufficient disk space available for root and ephemeral storage."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:397(para)
msgid "You can configure this filter to enable a fixed amount of disk overcommitment by using the <literal>disk_allocation_ratio</literal> configuration option in <filename>nova.conf</filename>. The default setting is:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:404(para)
msgid "Adjusting this value to greater than 1.0 enables scheduling instances while over committing disk resources on the node. This might be desirable if you use an image format that is sparse or copy on write so that each virtual instance does not require a 1:1 allocation of virtual disk to physical storage."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:412(title)
msgid "GroupAffinityFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:414(para)
msgid "This filter is deprecated in favor of <link linkend=\"servergroupantiaffinityfilter\">ServerGroupAffinityFilter</link>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:418(para)
msgid "The GroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must pass a scheduler hint, using <literal>group</literal> as the key and an arbitrary name as the value. Using the <placeholder-1/> command-line tool, use the <literal>--hint</literal> flag. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:426(replaceable) ./doc/config-reference/compute/section_compute-scheduler.xml:447(replaceable) ./doc/config-reference/compute/section_compute-scheduler.xml:665(replaceable) ./doc/config-reference/compute/section_compute-scheduler.xml:680(replaceable)
msgid "IMAGE_ID"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:427(para)
msgid "This filter should not be enabled at the same time as <link linkend=\"groupantiaffinityfilter\">GroupAntiAffinityFilter</link> or neither filter will work properly."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:433(title)
msgid "GroupAntiAffinityFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:435(para)
msgid "This filter is deprecated in favor of <link linkend=\"servergroupantiaffinityfilter\">ServerGroupAntiAffinityFilter</link>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:439(para)
msgid "The GroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must pass a scheduler hint, using <literal>group</literal> as the key and an arbitrary name as the value. Using the <placeholder-1/> command-line tool, use the <literal>--hint</literal> flag. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:448(para)
msgid "This filter should not be enabled at the same time as <link linkend=\"groupaffinityfilter\">GroupAffinityFilter</link> or neither filter will work properly."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:454(title)
msgid "ImagePropertiesFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:455(para)
msgid "Filters hosts based on properties defined on the instance's image. It passes hosts that can support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, and virtual machine mode. for example, an instance might require a host that runs an ARM-based processor and QEMU as the hypervisor. An image can be decorated with these properties by using:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:464(para)
msgid "The image properties that the filter checks for are:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:468(para)
msgid "<literal>architecture</literal>: Architecture describes the machine architecture required by the image. Examples are i686, x86_64, arm, and ppc64."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:474(para)
msgid "<literal>hypervisor_type</literal>: Hypervisor type describes the hypervisor required by the image. Examples are xen, qemu, and xenapi. Note that qemu is used for both QEMU and KVM hypervisor types."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:481(para)
msgid "<literal>vm_mode</literal>: Virtual machine mode describes the hypervisor application binary interface (ABI) required by the image. Examples are 'xen' for Xen 3.0 paravirtual ABI, 'hvm' for native ABI, 'uml' for User Mode Linux paravirtual ABI, exe for container virt executable ABI."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:492(title)
msgid "IsolatedHostsFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:493(para)
msgid "Allows the admin to define a special (isolated) set of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated images. The flag <literal>restrict_isolated_hosts_to_isolated_images</literal> can be used to force isolated hosts to only run isolated images."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:501(para)
msgid "The admin must specify the isolated set of images and hosts in the <filename>nova.conf</filename> file using the <literal>isolated_hosts</literal> and <literal>isolated_images</literal> configuration options. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:510(title)
msgid "IoOpsFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:511(para)
msgid "The IoOpsFilter filters hosts by concurrent I/O operations on it. Hosts with too many concurrent I/O operations will be filtered out. The <option>max_io_ops_per_host</option> option specifies the maximum number of I/O intensive instances allowed to run on a host. A host will be ignored by the scheduler if more than <option>max_io_ops_per_host</option> instances in build, resize, snapshot, migrate, rescue or unshelve task states are running on it."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:524(title)
msgid "JsonFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:529(para)
msgid "="
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:532(para)
msgid "&lt;"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:535(para)
msgid "&gt;"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:538(para)
msgid "in"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:541(para)
msgid "&lt;="
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:544(para)
msgid "&gt;="
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:547(para)
msgid "not"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:550(para)
msgid "or"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:553(para)
msgid "and"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:557(code)
msgid "$free_ram_mb"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:560(code)
msgid "$free_disk_mb"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:563(code)
msgid "$total_usable_ram_mb"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:566(code)
msgid "$vcpus_total"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:569(code)
msgid "$vcpus_used"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:525(para)
msgid "The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format. The following operators are supported:<placeholder-1/>The filter supports the following variables:<placeholder-2/>Using the <placeholder-3/> command-line tool, use the <literal>--hint</literal> flag:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:576(para) ./doc/config-reference/compute/section_compute-scheduler.xml:648(para) ./doc/config-reference/compute/section_compute-scheduler.xml:712(para)
msgid "With the API, use the <literal>os:scheduler_hints</literal> key:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:581(title)
msgid "MetricsFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:582(para)
msgid "Filters hosts based on metrics <literal>weight_setting</literal>. Only hosts with the available metrics are passed so that the metrics weigher will not fail due to these hosts."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:589(title)
msgid "NumInstancesFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:590(para)
msgid "Hosts that have more instances running than specified by the <option>max_instances_per_host</option> option are filtered out when this filter is in place."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:597(title)
msgid "PciPassthroughFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:598(para)
msgid "The filter schedules instances on a host if the host has devices that meet the device requests in the <literal>extra_specs</literal> attribute for the flavor."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:605(title)
msgid "RamFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:606(para)
msgid "Only schedules instances on hosts that have sufficient RAM available. If this filter is not set, the scheduler may over provision a host based on RAM (for example, the RAM allocated by virtual machine instances may exceed the physical RAM)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:611(para)
msgid "You can configure this filter to enable a fixed amount of RAM overcommitment by using the <literal>ram_allocation_ratio</literal> configuration option in <filename>nova.conf</filename>. The default setting is:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:618(para)
msgid "This setting enables 1.5GB instances to run on any compute node with 1GB of free RAM."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:622(title)
msgid "RetryFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:623(para)
msgid "Filters out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host fails to respond to the request, this filter prevents the scheduler from retrying that host for the service request."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:629(para)
msgid "This filter is only useful if the <literal>scheduler_max_attempts</literal> configuration option is set to a value greater than zero."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:635(title)
msgid "SameHostFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:636(para)
msgid "Schedules the instance on the same host as another instance in a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using <literal>same_host</literal> as the key and a list of instance UUIDs as the value. This filter is the opposite of the <literal>DifferentHostFilter</literal>. Using the <placeholder-1/> command-line tool, use the <literal>--hint</literal> flag:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:653(title)
msgid "ServerGroupAffinityFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:654(para)
msgid "The ServerGroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must create a server group with an <literal>affinity</literal> policy, and pass a scheduler hint, using <literal>group</literal> as the key and the server group UUID as the value. Using the <placeholder-1/> command-line tool, use the <literal>--hint</literal> flag. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:665(replaceable) ./doc/config-reference/compute/section_compute-scheduler.xml:680(replaceable)
msgid "SERVER_GROUP_UUID"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:668(title)
msgid "ServerGroupAntiAffinityFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:669(para)
msgid "The ServerGroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must create a server group with an <literal>anti-affinity</literal> policy, and pass a scheduler hint, using <literal>group</literal> as the key and the server group UUID as the value. Using the <placeholder-1/> command-line tool, use the <literal>--hint</literal> flag. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:683(title)
msgid "SimpleCIDRAffinityFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:684(para)
msgid "Schedules the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP address in CIDR format, by passing two scheduler hints:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:690(literal)
msgid "build_near_host_ip"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:692(para)
msgid "The first IP address in the subnet (for example, <literal>192.168.1.1</literal>)"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:698(literal)
msgid "cidr"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:700(para)
msgid "The CIDR that corresponds to the subnet (for example, <literal>/24</literal>)"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:706(para)
msgid "Using the <placeholder-1/> command-line tool, use the <literal>--hint</literal> flag. For example, to specify the IP subnet <literal>192.168.1.1/24</literal>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:717(title)
msgid "TrustedFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:718(para)
msgid "Filters hosts based on their trust. Only passes hosts that meet the trust requirements specified in the instance properties."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:725(title)
msgid "TypeAffinityFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:726(para)
msgid "Dynamically limits hosts to one instance type. An instance can only be launched on a host, if no instance with different instances types are running on it, or if the host has no running instances at all."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:735(title)
msgid "Weights"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:737(para)
msgid "When resourcing instances, the filter scheduler filters and weights each host in the list of acceptable hosts. Each time the scheduler selects a host, it virtually consumes resources on it, and subsequent selections are adjusted accordingly. This process is useful when the customer asks for the same large amount of instances, because weight is computed for each requested instance."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:744(para)
msgid "All weights are normalized before being summed up; the host with the largest weight is given the highest priority."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:748(title)
msgid "Weighting hosts"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:757(para)
msgid "If cells are used, cells are weighted by the scheduler in the same manner as hosts."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:759(para)
msgid "Hosts and cells are weighted based on the following options in the <filename>/etc/nova/nova.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:763(caption)
msgid "Host weighting options"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:769(th) ./doc/config-reference/compute/section_compute-scheduler.xml:873(th)
msgid "Section"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:778(td)
msgid "By default, the scheduler spreads instances across all hosts evenly. Set the <placeholder-1/> option to a negative number if you prefer stacking instead of spreading. Use a floating-point value."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:787(literal)
msgid "scheduler_host_subset_size"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:788(td)
msgid "New instances are scheduled on a host that is chosen randomly from a subset of the N best hosts. This property defines the subset size from which a host is chosen. A value of 1 chooses the first host returned by the weighting functions. This value must be at least 1. A value less than 1 is ignored, and 1 is used instead. Use an integer value."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:801(literal)
msgid "nova.scheduler.weights.all_weighers"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:800(td)
msgid "Defaults to <placeholder-1/>, which selects the RamWeigher. Hosts are then weighted and sorted with the largest weight winning."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:808(literal)
msgid "weight_multiplier"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:809(td)
msgid "Multiplier for weighting metrics. Use a floating-point value."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:814(literal) ./doc/config-reference/compute/section_compute-scheduler.xml:849(option)
msgid "weight_setting"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:818(literal)
msgid "name1.value * 1.0 + name2.value * -1.0"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:815(td)
msgid "Determines how metrics are weighted. Use a comma-separated list of metricName=ratio. For example: \"name1=1.0, name2=-1.0\" results in: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:824(literal) ./doc/config-reference/compute/section_compute-scheduler.xml:847(option)
msgid "required"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:827(para)
msgid "TrueRaises an exception. To avoid the raised exception, you should use the scheduler filter MetricFilter to filter out hosts with unavailable metrics."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:836(para)
msgid "FalseTreated as a negative factor in the weighting process (uses the weight_of_unavailable option)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:825(para)
msgid "Specifies how to treat unavailable metrics:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:846(literal) ./doc/config-reference/compute/section_compute-scheduler.xml:851(option)
msgid "weight_of_unavailable"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:847(td)
msgid "If <placeholder-1/> is set to False, and any one of the metrics set by <placeholder-2/> is unavailable, the <placeholder-3/> value is returned to the scheduler."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:856(para) ./doc/config-reference/compute/section_compute-scheduler.xml:922(para)
msgid "For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:867(caption)
msgid "Cell weighting options"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:881(literal)
msgid "mute_weight_multiplier"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:882(td)
msgid "Multiplier to weight mute children (hosts which have not sent capacity or capacity updates for some time). Use a negative, floating-point value."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:889(literal)
msgid "mute_weight_value"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:890(td)
msgid "Weight value assigned to mute children. Use a positive, floating-point value with a maximum of '1.0'."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:896(literal)
msgid "offset_weight_multiplier"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:897(td)
msgid "Multiplier to weight cells, so you can specify a preferred cell. Use a floating point value."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:904(td)
msgid "By default, the scheduler spreads instances across all cells evenly. Set the <placeholder-1/> option to a negative number if you prefer stacking instead of spreading. Use a floating-point value."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:915(literal)
msgid "nova.cells.weights.all_weighers"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:914(td)
msgid "Defaults to <placeholder-1/>, which maps to all cell weighters included with Compute. Cells are then weighted and sorted with the largest weight winning."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:931(title)
msgid "Chance scheduler"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:933(para)
msgid "As an administrator, you work with the filter scheduler. However, the Compute service also uses the Chance Scheduler, <literal>nova.scheduler.chance.ChanceScheduler</literal>, which randomly selects from lists of filtered hosts."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml:944(para)
msgid "To customize the Compute scheduler, use the configuration option settings documented in <xref linkend=\"config_table_nova_scheduler\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-sample-configuration-files.xml:7(para)
msgid "Files in this section can be found in <systemitem>/etc/nova</systemitem>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-sample-configuration-files.xml:10(para)
msgid "The Compute service stores its API configuration settings in the <filename>api-paste.ini</filename> file."
msgstr ""
#: ./doc/config-reference/compute/section_compute-sample-configuration-files.xml:18(para)
msgid "The <filename>policy.json</filename> file defines additional access controls that apply to the Compute service."
msgstr ""
#: ./doc/config-reference/compute/section_compute-sample-configuration-files.xml:25(para)
msgid "The <filename>rootwrap.conf</filename> file defines configuration values used by the rootwrap script when the Compute service needs to escalate its privileges to those of the root user."
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:7(title)
msgid "Install XenServer"
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:8(para)
msgid "Before you can run OpenStack with XenServer, you must install the hypervisor on <link href=\"http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/installation.html#sys_requirements\"> an appropriate server </link> ."
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:18(para)
msgid "Xen is a type 1 hypervisor: When your server starts, Xen is the first software that runs. Consequently, you must install XenServer before you install the operating system where you want to run OpenStack code. You then install <systemitem class=\"service\">nova-compute</systemitem> into a dedicated virtual machine on the host."
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:33(link)
msgid "http://xenserver.org/open-source-virtualization-download.html"
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:27(para)
msgid "Use the following link to download XenServer's installation media: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:40(para)
msgid "When you install many servers, you might find it easier to perform <link href=\"http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/installation.html#pxe_boot_install\"> PXE boot installations </link> . You can also package any post-installation changes that you want to make to your XenServer by following the instructions of <link href=\"http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/supplemental_pack_ddk.html\"> creating your own XenServer supplemental pack </link> ."
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:55(para)
msgid "Make sure you use the EXT type of storage repository (SR). Features that require access to VHD files (such as copy on write, snapshot and migration) do not work when you use the LVM SR. Storage repository (SR) is a XAPI-specific term relating to the physical storage where virtual disks are stored."
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:62(para)
msgid "On the XenServer installation screen, choose the <guilabel>XenDesktop Optimized</guilabel> option. If you use an answer file, make sure you use <literal>srtype=\"ext\"</literal> in the <literal>installation</literal> tag of the answer file."
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:73(title)
msgid "Post-installation steps"
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:74(para)
msgid "The following steps need to be completed after the hypervisor's installation:"
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:80(para)
msgid "For resize and migrate functionality, enable password-less SSH authentication and set up the <literal>/images</literal> directory on dom0."
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:88(para)
msgid "Install the XAPI plug-ins."
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:93(para)
msgid "To support AMI type images, you must set up <literal>/boot/guest</literal> symlink/directory in dom0."
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:100(para)
msgid "Create a Paravirtualized virtual machine that can run <systemitem class=\"service\">nova-compute</systemitem>."
msgstr ""
#: ./doc/config-reference/compute/section_xen-install.xml:106(para)
msgid "Install and configure <systemitem class=\"service\">nova-compute</systemitem> in the above virtual machine."
msgstr ""
#: ./doc/config-reference/compute/section_xapi-install-plugins.xml:8(title)
msgid "Install XAPI plug-ins"
msgstr ""
#: ./doc/config-reference/compute/section_xapi-install-plugins.xml:9(para)
msgid "When you use a XAPI managed hypervisor, you can install a Python script (or any executable) on the host side, and execute that through XenAPI. These scripts are called plug-ins. The OpenStack related XAPI plug-ins live in OpenStack Compute's code repository. These plug-ins have to be copied to dom0's filesystem, to the appropriate directory, where XAPI can find them. It is important to ensure that the version of the plug-ins are in line with the OpenStack Compute installation you are using."
msgstr ""
#: ./doc/config-reference/compute/section_xapi-install-plugins.xml:19(para)
msgid "The plugins should typically be copied from the Nova installation running in the Compute's DomU, but if you want to download the latest version the following procedure can be used."
msgstr ""
#: ./doc/config-reference/compute/section_xapi-install-plugins.xml:25(title)
msgid "Manually installing the plug-ins"
msgstr ""
#: ./doc/config-reference/compute/section_xapi-install-plugins.xml:28(para)
msgid "Create temporary files/directories:"
msgstr ""
#: ./doc/config-reference/compute/section_xapi-install-plugins.xml:33(para)
msgid "Get the source from GitHub. The example assumes the master branch is used, and the XenServer host is accessible as xenserver. Match those parameters to your setup."
msgstr ""
#: ./doc/config-reference/compute/section_xapi-install-plugins.xml:43(para)
msgid "Copy the plug-ins to the hypervisor:"
msgstr ""
#: ./doc/config-reference/compute/section_xapi-install-plugins.xml:49(para)
msgid "Remove temporary files/directories:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:6(title)
msgid "Hypervisors"
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:7(para)
msgid "OpenStack Compute supports many hypervisors, which might make it difficult for you to choose one. Most installations use only one hypervisor. However, you can use <xref linkend=\"computefilter\"/> and <xref linkend=\"imagepropertiesfilter\"/> to schedule different hypervisors within the same installation. The following links help you choose a hypervisor. See <link href=\"http://wiki.openstack.org/HypervisorSupportMatrix\">http://wiki.openstack.org/HypervisorSupportMatrix</link> for a detailed list of features and support across the hypervisors."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:18(para)
msgid "The following hypervisors are supported:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:21(para)
msgid "<link href=\"http://www.linux-kvm.org/page/Main_Page\">KVM</link> - Kernel-based Virtual Machine. The virtual disk formats that it supports is inherited from QEMU since it uses a modified QEMU program to launch the virtual machine. The supported formats include raw images, the qcow2, and VMware formats."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:30(para)
msgid "<link href=\"http://lxc.sourceforge.net/\">LXC</link> - Linux Containers (through libvirt), use to run Linux-based virtual machines."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:35(para)
msgid "<link href=\"http://wiki.qemu.org/Manual\">QEMU</link> - Quick EMUlator, generally only used for development purposes."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:40(para)
msgid "<link href=\"http://user-mode-linux.sourceforge.net/\">UML</link> - User Mode Linux, generally only used for development purposes."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:46(para)
msgid "<link href=\"http://www.vmware.com/products/vsphere-hypervisor/support.html\">VMware vSphere</link> 4.1 update 1 and newer, runs VMware-based Linux and Windows images through a connection with a vCenter server or directly with an ESXi host."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:52(para)
msgid "<link href=\"http://www.xen.org\">Xen</link> - XenServer, Xen Cloud Platform (XCP), use to run Linux or Windows virtual machines. You must install the <systemitem class=\"service\">nova-compute</systemitem> service in a para-virtualized VM."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:60(para)
msgid "<link href=\"http://www.microsoft.com/en-us/server-cloud/windows-server/server-virtualization-features.aspx\"> Hyper-V</link> - Server virtualization with Microsoft's Hyper-V, use to run Windows, Linux, and FreeBSD virtual machines. Runs <systemitem class=\"service\">nova-compute</systemitem> natively on the Windows virtualization platform."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:69(para)
msgid "<link href=\"https://wiki.openstack.org/wiki/Baremetal\"> Bare Metal</link> - Not a hypervisor in the traditional sense, this driver provisions physical hardware through pluggable sub-drivers (for example, PXE for image deployment, and IPMI for power management)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:78(title)
msgid "Hypervisor configuration basics"
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:79(para)
msgid "The node where the <systemitem class=\"service\">nova-compute</systemitem> service is installed and operates on the same node that runs all of the virtual machines. This is referred to as the compute node in this guide."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:83(para)
msgid "By default, the selected hypervisor is KVM. To change to another hypervisor, change the <literal>virt_type</literal> option in the <literal>[libvirt]</literal> section of <filename>nova.conf</filename> and restart the <systemitem class=\"service\">nova-compute</systemitem> service."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:87(para)
msgid "Here are the general <filename>nova.conf</filename> options that are used to configure the compute node's hypervisor: <xref linkend=\"config_table_nova_hypervisor\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml:90(para)
msgid "Specific options for particular hypervisors can be found in the following sections."
msgstr ""
#: ./doc/config-reference/compute/section_compute-conductor.xml:7(title)
msgid "Conductor"
msgstr ""
#: ./doc/config-reference/compute/section_compute-conductor.xml:8(para)
msgid "The <systemitem class=\"service\">nova-conductor</systemitem> service enables OpenStack to function without compute nodes accessing the database. Conceptually, it implements a new layer on top of <systemitem class=\"service\">nova-compute</systemitem>. It should not be deployed on compute nodes, or else the security benefits of removing database access from <systemitem class=\"service\">nova-compute</systemitem> are negated. Just like other nova services such as <systemitem class=\"service\">nova-api</systemitem> or nova-scheduler, it can be scaled horizontally. You can run multiple instances of <systemitem class=\"service\">nova-conductor</systemitem> on different machines as needed for scaling purposes."
msgstr ""
#: ./doc/config-reference/compute/section_compute-conductor.xml:21(para)
msgid "The methods exposed by <systemitem class=\"service\">nova-conductor</systemitem> are relatively simple methods used by <systemitem class=\"service\">nova-compute</systemitem> to offload its database operations. Places where <systemitem class=\"service\">nova-compute</systemitem> previously performed database access are now talking to <systemitem class=\"service\">nova-conductor</systemitem>. However, we have plans in the medium to long term to move more and more of what is currently in <systemitem class=\"service\">nova-compute</systemitem> up to the <systemitem class=\"service\">nova-conductor</systemitem> layer. The Compute service will start to look like a less intelligent slave service to <systemitem class=\"service\">nova-conductor</systemitem>. The conductor service will implement long running complex operations, ensuring forward progress and graceful error handling. This will be especially beneficial for operations that cross multiple compute nodes, such as migrations or resizes."
msgstr ""
#: ./doc/config-reference/compute/section_compute-conductor.xml:40(para)
msgid "To customize the Conductor, use the configuration option settings documented in <xref linkend=\"config_table_nova_conductor\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:6(title)
msgid "Xen configuration reference"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:7(para)
msgid "The following section discusses some commonly changed options when using the XenAPI driver. The table below provides a complete reference of all configuration options available for configuring XAPI with OpenStack."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:13(para)
msgid "The recommended way to use XAPI with OpenStack is through the XenAPI driver. To enable the XenAPI driver, add the following configuration options to <filename>/etc/nova/nova.conf</filename> and restart <systemitem class=\"service\">OpenStack Compute</systemitem>:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:21(replaceable)
msgid "your_xenapi_management_ip_address"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:23(replaceable)
msgid "your_password"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:24(para)
msgid "These connection details are used by OpenStack Compute service to contact your hypervisor and are the same details you use to connect XenCenter, the XenServer management console, to your XenServer node."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:30(para)
msgid "The <literal>connection_url</literal> is generally the management network IP address of the XenServer."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:37(para)
msgid "The agent is a piece of software that runs on the instances, and communicates with OpenStack. In case of the XenAPI driver, the agent communicates with OpenStack through XenStore (see <link href=\"http://wiki.xen.org/wiki/XenStore\">the Xen Wiki</link> for more information on XenStore)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:44(para)
msgid "If you don't have the guest agent on your VMs, it takes a long time for OpenStack Compute to detect that the VM has successfully started. Generally a large timeout is required for Windows instances, but you may want to adjust: <literal>agent_version_timeout</literal> within the <literal>[xenserver]</literal> section."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:54(title)
msgid "VNC proxy address"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:55(para)
msgid "Assuming you are talking to XAPI through a management network, and XenServer is on the address: 10.10.1.34 specify the same address for the vnc proxy address: <literal>vncserver_proxyclient_address=<replaceable>10.10.1.34</replaceable></literal>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:63(title)
msgid "Storage"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:64(para)
msgid "You can specify which Storage Repository to use with nova by editing the following flag. To use the local-storage setup by the default installer: <placeholder-1/> Another alternative is to use the \"default\" storage (for example if you have attached NFS or any other shared storage): <placeholder-2/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:75(title)
msgid "XenAPI configuration reference"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:76(para)
msgid "To customize the XenAPI driver, use the configuration option settings documented in <xref linkend=\"config_table_nova_xen\"/>."
msgstr ""
#. Put one translator per line, in the form of NAME <EMAIL>, YEAR1, YEAR2
#: ./doc/config-reference/compute/section_compute-configure-xen.xml:0(None)
msgid "translator-credits"
msgstr ""